To get this branch, use:
bzr branch http://gegoxaren.bato24.eu/bzr/brz/remove-bazaar
# Copyright (C) 2005, 2006, 2008, 2009 Canonical Ltd
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

"""Copying of history from one branch to another.
19
20
The basic plan is that every branch knows the history of everything
21
that has merged into it.  As the first step of a merge, pull, or
22
branch operation we copy history from the source into the destination
23
branch.
24
"""
25
import operator

import bzrlib
from bzrlib import (
    errors,
    symbol_versioning,
    )
from bzrlib.revision import NULL_REVISION
from bzrlib.tsort import topo_sort
from bzrlib.trace import mutter
import bzrlib.ui
from bzrlib.versionedfile import FulltextContentFactory


class RepoFetcher(object):
    """Pull revisions and texts from one repository to another.

    This should not be used directly; it's essentially an object to
    encapsulate the logic in InterRepository.fetch().
    """

    def __init__(self, to_repository, from_repository, last_revision=None,
        pb=None, find_ghosts=True, fetch_spec=None):
        """Create a repo fetcher.

        :param last_revision: If set, try to limit to the data this revision
            references.
        :param find_ghosts: If True search the entire history for ghosts.
        :param pb: ProgressBar object to use; deprecated and ignored.
            This method will just create one on top of the stack.
        """
        if pb is not None:
            symbol_versioning.warn(
                symbol_versioning.deprecated_in((1, 14, 0))
                % "pb parameter to RepoFetcher.__init__")
            # and for simplicity it is in fact ignored
        if to_repository.has_same_location(from_repository):
            # repository.fetch should be taking care of this case.
            raise errors.BzrError('RepoFetcher run '
                    'between two objects at the same location: '
                    '%r and %r' % (to_repository, from_repository))
        self.to_repository = to_repository
        self.from_repository = from_repository
        self.sink = to_repository._get_sink()
        # must not mutate self._last_revision as it's potentially a shared instance
        self._last_revision = last_revision
        self._fetch_spec = fetch_spec
        self.find_ghosts = find_ghosts
        self.from_repository.lock_read()
        mutter("Using fetch logic to copy between %s(%s) and %s(%s)",
               self.from_repository, self.from_repository._format,
               self.to_repository, self.to_repository._format)
        try:
            self.__fetch()
        finally:
            self.from_repository.unlock()

    def __fetch(self):
        """Primary worker function.

        This initialises all the needed variables, and then fetches the
        requested revisions, finally clearing the progress bar.
        """
        # Roughly this is what we're aiming for fetch to become:
        #
        # missing = self.sink.insert_stream(self.source.get_stream(search))
        # if missing:
        #     missing = self.sink.insert_stream(self.source.get_items(missing))
        # assert not missing
        self.count_total = 0
        self.file_ids_names = {}
        pb = bzrlib.ui.ui_factory.nested_progress_bar()
        pb.show_pct = pb.show_count = False
        try:
            pb.update("Finding revisions", 0, 2)
            search = self._revids_to_fetch()
            if search is None:
                return
            pb.update("Fetching revisions", 1, 2)
            self._fetch_everything_for_search(search)
        finally:
            pb.finished()
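The two-phase plan sketched in the comments of `__fetch` (insert a stream, then retry once with only the keys the sink reported missing) can be illustrated with a toy model. This is a minimal sketch, not bzrlib's API: `ToySink`, `get_stream`, and the record tuples are all hypothetical stand-ins.

```python
# Toy model of the two-phase fetch: push a stream into a sink, then retry
# once with only the keys the sink reported missing.  All names here are
# illustrative; real sinks and streams carry much richer records.
class ToySink:
    def __init__(self):
        self.received = {}

    def insert_stream(self, stream):
        """Store records; report parent keys we still don't hold."""
        missing = set()
        for key, parent, text in stream:
            self.received[key] = text
            if parent is not None and parent not in self.received:
                missing.add(parent)
        return missing - set(self.received)

source = {'rev-a': (None, 'A'), 'rev-b': ('rev-a', 'B')}

def get_stream(keys):
    return [(k,) + source[k] for k in keys]

sink = ToySink()
missing = sink.insert_stream(get_stream(['rev-b']))  # parent rev-a missing
if missing:
    missing = sink.insert_stream(get_stream(sorted(missing)))
assert not missing
print(sorted(sink.received))  # ['rev-a', 'rev-b']
```

As in the real code, a second round that still leaves keys missing would be an error rather than a reason to loop again.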

    def _fetch_everything_for_search(self, search):
        """Fetch all data for the given set of revisions."""
        # The first phase is "file".  We pass the progress bar for it directly
        # into item_keys_introduced_by, which has more information about how
        # that phase is progressing than we do.  Progress updates for the other
        # phases are taken care of in this function.
        # XXX: there should be a clear owner of the progress reporting.  Perhaps
        # item_keys_introduced_by should have a richer API than it does at the
        # moment, so that it can feed the progress information back to this
        # function?
        if (self.from_repository._format.rich_root_data and
            not self.to_repository._format.rich_root_data):
            raise errors.IncompatibleRepositories(
                self.from_repository, self.to_repository,
                "different rich-root support")
        pb = bzrlib.ui.ui_factory.nested_progress_bar()
        try:
            pb.update("Get stream source")
            source = self.from_repository._get_source(
                self.to_repository._format)
            stream = source.get_stream(search)
            from_format = self.from_repository._format
            pb.update("Inserting stream")
            resume_tokens, missing_keys = self.sink.insert_stream(
                stream, from_format, [])
            if self.to_repository._fallback_repositories:
                missing_keys.update(
                    self._parent_inventories(search.get_keys()))
            if missing_keys:
                pb.update("Missing keys")
                stream = source.get_stream_for_missing_keys(missing_keys)
                pb.update("Inserting missing keys")
                resume_tokens, missing_keys = self.sink.insert_stream(
                    stream, from_format, resume_tokens)
            if missing_keys:
                raise AssertionError(
                    "second push failed to complete a fetch %r." % (
                        missing_keys,))
            if resume_tokens:
                raise AssertionError(
                    "second push failed to commit the fetch %r." % (
                        resume_tokens,))
            pb.update("Finishing stream")
            self.sink.finished()
        finally:
            pb.finished()

    def _revids_to_fetch(self):
        """Determines the exact revisions needed from self.from_repository to
        install self._last_revision in self.to_repository.

        If no revisions need to be fetched, then this just returns None.
        """
        if self._fetch_spec is not None:
            return self._fetch_spec
        mutter('fetch up to rev {%s}', self._last_revision)
        if self._last_revision is NULL_REVISION:
            # explicit limit of no revisions needed
            return None
        return self.to_repository.search_missing_revision_ids(
            self.from_repository, self._last_revision,
            find_ghosts=self.find_ghosts)

    def _parent_inventories(self, revision_ids):
        # Find all the parent revisions referenced by the stream, but
        # not present in the stream, and make sure we send their
        # inventories.
        parent_maps = self.to_repository.get_parent_map(revision_ids)
        parents = set()
        map(parents.update, parent_maps.itervalues())
        parents.discard(NULL_REVISION)
        parents.difference_update(revision_ids)
        missing_keys = set(('inventories', rev_id) for rev_id in parents)
        return missing_keys
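The set arithmetic in `_parent_inventories` can be shown in isolation. This sketch replaces the repository's parent map with a plain dict; the revision names are made up for illustration.

```python
# Sketch of _parent_inventories' set logic: collect every parent referenced
# by the stream, then keep only those not already being sent, and request
# their inventory records.  parent_maps here is a plain dict stand-in.
NULL_REVISION = 'null:'

def parent_inventory_keys(parent_maps, revision_ids):
    """Return ('inventories', rev_id) keys for parents outside the stream."""
    parents = set()
    for revision_parents in parent_maps.values():
        parents.update(revision_parents)
    parents.discard(NULL_REVISION)           # the null revision has no inventory
    parents.difference_update(revision_ids)  # drop parents already in the stream
    return set(('inventories', rev_id) for rev_id in parents)

# 'rev-b' is in the stream but its parent 'rev-a' is not, so rev-a's
# inventory must be requested explicitly.
parent_maps = {'rev-b': ('rev-a',), 'rev-c': ('rev-b', NULL_REVISION)}
keys = parent_inventory_keys(parent_maps, {'rev-b', 'rev-c'})
print(keys)  # {('inventories', 'rev-a')}
```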


class Inter1and2Helper(object):
    """Helper for operations that convert data from model 1 and 2

    This is for use by fetchers and converters.
    """

    def __init__(self, source):
        """Constructor.

        :param source: The repository data comes from
        """
        self.source = source

    def iter_rev_trees(self, revs):
        """Iterate through RevisionTrees efficiently.

        Additionally, the inventory's revision_id is set if unset.

        Trees are retrieved in batches of 100, and then yielded in the order
        they were requested.

        :param revs: A list of revision ids
        """
        # In case that revs is not a list.
        revs = list(revs)
        while revs:
            for tree in self.source.revision_trees(revs[:100]):
                if tree.inventory.revision_id is None:
                    tree.inventory.revision_id = tree.get_revision_id()
                yield tree
            revs = revs[100:]
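The batch-and-yield pattern used by `iter_rev_trees` reduces to a few lines once the repository is replaced by a plain callable. In this sketch the batch size is 3 instead of 100, and `fake_fetch` stands in for `revision_trees`; both are illustrative only.

```python
# Fetch in fixed-size batches but yield results in request order, as
# iter_rev_trees does with batches of 100 revision trees.
def iter_in_batches(items, fetch_batch, batch_size=3):
    items = list(items)  # in case items is a generator
    while items:
        for result in fetch_batch(items[:batch_size]):
            yield result
        items = items[batch_size:]

fetched_batches = []
def fake_fetch(batch):
    fetched_batches.append(list(batch))
    return [x * 10 for x in batch]

out = list(iter_in_batches(range(7), fake_fetch))
print(out)              # [0, 10, 20, 30, 40, 50, 60]
print(fetched_batches)  # [[0, 1, 2], [3, 4, 5], [6]]
```

Batching amortises the round-trip cost of each repository request while keeping the caller's iteration order intact.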

    def _find_root_ids(self, revs, parent_map, graph):
        revision_root = {}
        planned_versions = {}
        for tree in self.iter_rev_trees(revs):
            revision_id = tree.inventory.root.revision
            root_id = tree.get_root_id()
            planned_versions.setdefault(root_id, []).append(revision_id)
            revision_root[revision_id] = root_id
        # Find out which parents we don't already know root ids for
        parents = set()
        for revision_parents in parent_map.itervalues():
            parents.update(revision_parents)
        parents.difference_update(revision_root.keys() + [NULL_REVISION])
        # Limit to revisions present in the versionedfile
        parents = graph.get_parent_map(parents).keys()
        for tree in self.iter_rev_trees(parents):
            root_id = tree.get_root_id()
            revision_root[tree.get_revision_id()] = root_id
        return revision_root, planned_versions

    def generate_root_texts(self, revs):
        """Generate VersionedFiles for all root ids.

        :param revs: the revisions to include
        """
        graph = self.source.get_graph()
        parent_map = graph.get_parent_map(revs)
        mutter('in generate_root_texts: parent_map=%r', parent_map)
        rev_order = topo_sort(parent_map)
        rev_id_to_root_id, root_id_to_rev_ids = self._find_root_ids(
            revs, parent_map, graph)
        mutter('in generate_root_texts: rev_id_to_root_id=%r',
                rev_id_to_root_id)
        root_id_order = [(rev_id_to_root_id[rev_id], rev_id) for rev_id in
            rev_order]
        # Guaranteed stable, this groups all the file id operations together
        # retaining topological order within the revisions of a file id.
        # File id splits and joins would invalidate this, but they don't exist
        # yet, and are unlikely to in non-rich-root environments anyway.
        root_id_order.sort(key=operator.itemgetter(0))
        # Create a record stream containing the roots to create.
        def yield_roots():
            for key in root_id_order:
                root_id, rev_id = key
                parent_keys = _parent_keys_for_root_version(
                    root_id, rev_id, rev_id_to_root_id, parent_map, graph,
                    self.source)
                yield FulltextContentFactory(key, parent_keys, None, '')
        return [('texts', yield_roots())]
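The stable-sort trick in `generate_root_texts` is worth seeing with concrete data: the `(root_id, rev_id)` pairs start in topological order of `rev_id`, so a stable sort on `root_id` alone groups pairs per root while preserving each root's topological order. The revision and root names below are made up.

```python
# Python's list.sort is guaranteed stable, so sorting on the first element
# groups entries per root id without reordering revisions within a root.
import operator

rev_order = ['r1', 'r2', 'r3', 'r4']            # topologically sorted
rev_id_to_root_id = {'r1': 'root-a', 'r2': 'root-b',
                     'r3': 'root-a', 'r4': 'root-b'}
root_id_order = [(rev_id_to_root_id[r], r) for r in rev_order]
root_id_order.sort(key=operator.itemgetter(0))  # stable: rev order kept
print(root_id_order)
# [('root-a', 'r1'), ('root-a', 'r3'), ('root-b', 'r2'), ('root-b', 'r4')]
```

As the code's comment notes, this grouping is only valid because a file id never splits or joins across revisions.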


def _parent_keys_for_root_version(
    root_id, rev_id, rev_id_to_root_id_map, parent_map, graph, repo):
    """Get the parent keys for a given root id."""
    # Include direct parents of the revision, but only if they used
    # the same root_id and are heads.
    rev_parents = parent_map[rev_id]
    #mutter('in yield_roots: key=%s rev_parents=%r', key, rev_parents)
    parent_ids = []
    for parent_id in rev_parents:
        if parent_id == NULL_REVISION:
            continue
        if parent_id not in rev_id_to_root_id_map:
            # We probably didn't read this revision, go spend the
            # extra effort to actually check
            try:
                tree = repo.revision_tree(parent_id)
            except errors.NoSuchRevision:
                # Ghost, fill out rev_id_to_root_id in case we
                # encounter this again.
                # But set parent_root_id to None since we don't
                # really know
                parent_root_id = None
            else:
                parent_root_id = tree.get_root_id()
            rev_id_to_root_id_map[parent_id] = None
        else:
            parent_root_id = rev_id_to_root_id_map[parent_id]
        if root_id == parent_root_id:
            # With stacking we _might_ want to refer to a non-local
            # revision, but this code path only applies when we
            # have the full content available, so ghosts really are
            # ghosts, not just the edge of local data.
            parent_ids.append(parent_id)
        else:
            # root_id may be in the parent anyway.
            try:
                tree = repo.revision_tree(parent_id)
            except errors.NoSuchRevision:
                # ghost, can't refer to it.
                pass
            else:
                try:
                    parent_ids.append(
                        tree.inventory[root_id].revision)
                except errors.NoSuchId:
                    # not in the tree
                    pass
    # Drop non-head parents
    heads = graph.heads(parent_ids)
    selected_ids = []
    for parent_id in parent_ids:
        if parent_id in heads and parent_id not in selected_ids:
            selected_ids.append(parent_id)
    mutter('in yield_roots: heads=%r selected_ids=%r',
        heads, selected_ids)
    parent_keys = [
        (root_id, parent_id) for parent_id in selected_ids]
    mutter('in yield_roots: parent_keys=%r', parent_keys)
    return parent_keys
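The final filtering step of `_parent_keys_for_root_version` (drop non-head parents, removing duplicates while preserving order) can be sketched on its own. Here the `heads` set is supplied directly rather than computed by a graph object; the revision names are illustrative.

```python
# Keep only parents that are heads, deduplicating while preserving the
# original order, as the loop at the end of _parent_keys_for_root_version
# does with graph.heads().
def select_head_parents(parent_ids, heads):
    selected_ids = []
    for parent_id in parent_ids:
        if parent_id in heads and parent_id not in selected_ids:
            selected_ids.append(parent_id)
    return selected_ids

# Suppose 'rev-a' is an ancestor of 'rev-b': a heads() computation would
# report only 'rev-b' (plus the unrelated 'rev-c') as heads.
parents = ['rev-b', 'rev-a', 'rev-c', 'rev-b']
print(select_head_parents(parents, heads={'rev-b', 'rev-c'}))
# ['rev-b', 'rev-c']
```

Filtering to heads avoids recording a parent that is already implied by another, more recent parent of the same root.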