# Copyright (C) 2005-2010 Canonical Ltd
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

from bzrlib.lazy_import import lazy_import
lazy_import(globals(), """
import cStringIO
import re
import time

from bzrlib import (
    bzrdir,
    check,
    chk_map,
    config,
    debug,
    errors,
    fetch as _mod_fetch,
    fifo_cache,
    generate_ids,
    gpg,
    graph,
    inventory,
    inventory_delta,
    lazy_regex,
    lockable_files,
    lockdir,
    lru_cache,
    osutils,
    revision as _mod_revision,
    static_tuple,
    symbol_versioning,
    trace,
    tsort,
    ui,
    versionedfile,
    )
from bzrlib.bundle import serializer
from bzrlib.revisiontree import RevisionTree
from bzrlib.store.versioned import VersionedFileStore
from bzrlib.testament import Testament
""")

from bzrlib.decorators import needs_read_lock, needs_write_lock, only_raises
from bzrlib.inter import InterObject
from bzrlib.inventory import (
    Inventory,
    InventoryDirectory,
    ROOT_ID,
    entry_factory,
    )
from bzrlib.lock import _RelockDebugMixin
from bzrlib import registry
from bzrlib.trace import (
    log_exception_quietly, note, mutter, mutter_callsite, warning)


# Old formats display a warning, but only once
_deprecation_warning_done = False


class CommitBuilder(object):
    """Provides an interface to build up a commit.

    This allows describing a tree to be committed without needing to
    know the internals of the format of the repository.
    """

    # all clients should supply tree roots.
    record_root_entry = True
    # the default CommitBuilder does not manage trees whose root is versioned.
    _versioned_root = False
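
    # Typical lifecycle, sketched for orientation (illustrative comment only,
    # not part of the API): a builder is normally obtained from a Repository
    # inside a write group (via Repository.get_commit_builder, defined
    # elsewhere in this module), entries are recorded, and the commit is then
    # finalised or aborted, roughly:
    #
    #   builder = repository.get_commit_builder(branch, parents, config)
    #   ... record_entry_contents() / record_iter_changes() ...
    #   builder.finish_inventory()
    #   new_revid = builder.commit('message')   # or builder.abort() on error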

    def __init__(self, repository, parents, config, timestamp=None,
                 timezone=None, committer=None, revprops=None,
                 revision_id=None):
        """Initiate a CommitBuilder.

        :param repository: Repository to commit to.
        :param parents: Revision ids of the parents of the new revision.
        :param config: Configuration to use.
        :param timestamp: Optional timestamp recorded for commit.
        :param timezone: Optional timezone for timestamp.
        :param committer: Optional committer to set for commit.
        :param revprops: Optional dictionary of revision properties.
        :param revision_id: Optional revision id.
        """
        self._config = config

        if committer is None:
            self._committer = self._config.username()
        else:
            self._committer = committer

        self.new_inventory = Inventory(None)
        self._new_revision_id = revision_id
        self.parents = parents
        self.repository = repository

        self._revprops = {}
        if revprops is not None:
            self._validate_revprops(revprops)
            self._revprops.update(revprops)

        if timestamp is None:
            timestamp = time.time()
        # Restrict resolution to 1ms
        self._timestamp = round(timestamp, 3)

        if timezone is None:
            self._timezone = osutils.local_time_offset()
        else:
            self._timezone = int(timezone)

        self._generate_revision_if_needed()
        self.__heads = graph.HeadsCache(repository.get_graph()).heads
        self._basis_delta = []
        # API compatibility, older code that used CommitBuilder did not call
        # .record_delete(), which means the delta that is computed would not be
        # valid. Callers that will call record_delete() should call
        # .will_record_deletes() to indicate that.
        self._recording_deletes = False
        # memo'd check for no-op commits.
        self._any_changes = False

    def any_changes(self):
        """Return True if any entries were changed.

        This includes merge-only changes. It is the core for the --unchanged
        detection in commit.

        :return: True if any changes have occurred.
        """
        return self._any_changes

    def _validate_unicode_text(self, text, context):
        """Verify things like commit messages don't have bogus characters."""
        if '\r' in text:
            raise ValueError('Invalid value for %s: %r' % (context, text))

    def _validate_revprops(self, revprops):
        for key, value in revprops.iteritems():
            # We know that the XML serializers do not round trip '\r'
            # correctly, so refuse to accept them
            if not isinstance(value, basestring):
                raise ValueError('revision property (%s) is not a valid'
                                 ' (unicode) string: %r' % (key, value))
            self._validate_unicode_text(value,
                                        'revision property (%s)' % (key,))

    def commit(self, message):
        """Make the actual commit.

        :return: The revision id of the recorded revision.
        """
        self._validate_unicode_text(message, 'commit message')
        rev = _mod_revision.Revision(
            timestamp=self._timestamp,
            timezone=self._timezone,
            committer=self._committer,
            message=message,
            inventory_sha1=self.inv_sha1,
            revision_id=self._new_revision_id,
            properties=self._revprops)
        rev.parent_ids = self.parents
        self.repository.add_revision(self._new_revision_id, rev,
            self.new_inventory, self._config)
        self.repository.commit_write_group()
        return self._new_revision_id

    def abort(self):
        """Abort the commit that is being built.
        """
        self.repository.abort_write_group()

    def revision_tree(self):
        """Return the tree that was just committed.

        After calling commit() this can be called to get a RevisionTree
        representing the newly committed tree. This is preferred to
        calling Repository.revision_tree() because that may require
        deserializing the inventory, while we already have a copy in
        memory.
        """
        if self.new_inventory is None:
            self.new_inventory = self.repository.get_inventory(
                self._new_revision_id)
        return RevisionTree(self.repository, self.new_inventory,
            self._new_revision_id)

    def finish_inventory(self):
        """Tell the builder that the inventory is finished.

        :return: The inventory id in the repository, which can be used with
            repository.get_inventory.
        """
        if self.new_inventory is None:
            # an inventory delta was accumulated without creating a new
            # inventory.
            basis_id = self.basis_delta_revision
            # We ignore the 'inventory' returned by add_inventory_by_delta
            # because self.new_inventory is used to hint to the rest of the
            # system what code path was taken
            self.inv_sha1, _ = self.repository.add_inventory_by_delta(
                basis_id, self._basis_delta, self._new_revision_id,
                self.parents)
        else:
            if self.new_inventory.root is None:
                raise AssertionError('Root entry should be supplied to'
                    ' record_entry_contents, as of bzr 0.10.')
                self.new_inventory.add(InventoryDirectory(ROOT_ID, '', None))
            self.new_inventory.revision_id = self._new_revision_id
            self.inv_sha1 = self.repository.add_inventory(
                self._new_revision_id,
                self.new_inventory,
                self.parents
                )
        return self._new_revision_id

    def _gen_revision_id(self):
        """Return new revision-id."""
        return generate_ids.gen_revision_id(self._config.username(),
                                            self._timestamp)

    def _generate_revision_if_needed(self):
        """Create a revision id if None was supplied.

        If the repository can not support user-specified revision ids
        they should override this function and raise CannotSetRevisionId
        if _new_revision_id is not None.

        :raises: CannotSetRevisionId
        """
        if self._new_revision_id is None:
            self._new_revision_id = self._gen_revision_id()
            self.random_revid = True
        else:
            self.random_revid = False

    def _heads(self, file_id, revision_ids):
        """Calculate the graph heads for revision_ids in the graph of file_id.

        This can use either a per-file graph or a global revision graph as we
        have an identity relationship between the two graphs.
        """
        return self.__heads(revision_ids)

    def _check_root(self, ie, parent_invs, tree):
        """Helper for record_entry_contents.

        :param ie: An entry being added.
        :param parent_invs: The inventories of the parent revisions of the
            commit.
        :param tree: The tree that is being committed.
        """
        # In this revision format, root entries have no knit or weave. When
        # serializing out to disk and back in, root.revision is always
        # _new_revision_id
        ie.revision = self._new_revision_id

    def _require_root_change(self, tree):
        """Enforce an appropriate root object change.

        This is called once when record_iter_changes is called, if and only if
        the root was not in the delta calculated by record_iter_changes.

        :param tree: The tree which is being committed.
        """
        # NB: if there are no parents then this method is not called, so no
        # need to guard on parents having length.
        entry = entry_factory['directory'](tree.path2id(''), '',
            None)
        entry.revision = self._new_revision_id
        self._basis_delta.append(('', '', entry.file_id, entry))

    def _get_delta(self, ie, basis_inv, path):
        """Get a delta against the basis inventory for ie."""
        if ie.file_id not in basis_inv:
            # add
            result = (None, path, ie.file_id, ie)
            self._basis_delta.append(result)
            return result
        elif ie != basis_inv[ie.file_id]:
            # common but altered
            # TODO: avoid this id2path call.
            result = (basis_inv.id2path(ie.file_id), path, ie.file_id, ie)
            self._basis_delta.append(result)
            return result
        else:
            # common, unaltered
            return None

    def get_basis_delta(self):
        """Return the complete inventory delta versus the basis inventory.

        This has been built up with the calls to record_delete and
        record_entry_contents. The client must have already called
        will_record_deletes() to indicate that they will be generating a
        complete delta.

        :return: An inventory delta, suitable for use with apply_delta, or
            Repository.add_inventory_by_delta, etc.
        """
        if not self._recording_deletes:
            raise AssertionError("recording deletes not activated.")
        return self._basis_delta

    def record_delete(self, path, file_id):
        """Record that a delete occurred against a basis tree.

        This is an optional API - when used it adds items to the basis_delta
        being accumulated by the commit builder. It cannot be called unless the
        method will_record_deletes() has been called to inform the builder that
        a delta is being supplied.

        :param path: The path of the thing deleted.
        :param file_id: The file id that was deleted.
        """
        if not self._recording_deletes:
            raise AssertionError("recording deletes not activated.")
        delta = (path, None, file_id, None)
        self._basis_delta.append(delta)
        self._any_changes = True
        return delta

    def will_record_deletes(self):
        """Tell the commit builder that deletes are being notified.

        This enables the accumulation of an inventory delta; for the resulting
        commit to be valid, deletes against the basis MUST be recorded via
        builder.record_delete().
        """
        self._recording_deletes = True
        try:
            basis_id = self.parents[0]
        except IndexError:
            basis_id = _mod_revision.NULL_REVISION
        self.basis_delta_revision = basis_id
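
    # Illustrative protocol sketch (editorial comment, not part of the API):
    # callers opt in to delta accumulation first, then report deletes
    # explicitly, and only then read the accumulated delta:
    #
    #   builder.will_record_deletes()
    #   builder.record_delete('some/path', 'some-file-id')   # example values
    #   delta = builder.get_basis_delta()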

    def record_entry_contents(self, ie, parent_invs, path, tree,
        content_summary):
        """Record the content of ie from tree into the commit if needed.

        Side effect: sets ie.revision when unchanged

        :param ie: An inventory entry present in the commit.
        :param parent_invs: The inventories of the parent revisions of the
            commit.
        :param path: The path the entry is at in the tree.
        :param tree: The tree which contains this entry and should be used to
            obtain content.
        :param content_summary: Summary data from the tree about the paths
            content - stat, length, exec, sha/link target. This is only
            accessed when the entry has a revision of None - that is when it is
            a candidate to commit.
        :return: A tuple (change_delta, version_recorded, fs_hash).
            change_delta is an inventory_delta change for this entry against
            the basis tree of the commit, or None if no change occurred against
            the basis tree.
            version_recorded is True if a new version of the entry has been
            recorded. For instance, committing a merge where a file was only
            changed on the other side will return (delta, False).
            fs_hash is either None, or the hash details for the path (currently
            a tuple of the contents sha1 and the statvalue returned by
            tree.get_file_with_stat()).
        """
        if self.new_inventory.root is None:
            if ie.parent_id is not None:
                raise errors.RootMissing()
            self._check_root(ie, parent_invs, tree)
        if ie.revision is None:
            kind = content_summary[0]
        else:
            # ie is carried over from a prior commit
            kind = ie.kind
        # XXX: repository specific check for nested tree support goes here - if
        # the repo doesn't want nested trees we skip it ?
        if (kind == 'tree-reference' and
            not self.repository._format.supports_tree_reference):
            # mismatch between commit builder logic and repository:
            # this needs the entry creation pushed down into the builder.
            raise NotImplementedError('Missing repository subtree support.')
        self.new_inventory.add(ie)

        # TODO: slow, take it out of the inner loop.
        try:
            basis_inv = parent_invs[0]
        except IndexError:
            basis_inv = Inventory(root_id=None)

        # ie.revision is always None if the InventoryEntry is considered
        # for committing. We may record the previous parents revision if the
        # content is actually unchanged against a sole head.
        if ie.revision is not None:
            if not self._versioned_root and path == '':
                # repositories that do not version the root set the root's
                # revision to the new commit even when no change occurs (more
                # specifically, they do not record a revision on the root; and
                # the rev id is assigned to the root during deserialisation -
                # this masks when a change may have occurred against the basis.
                # To match this we always issue a delta, because the revision
                # of the root will always be changing.
                if ie.file_id in basis_inv:
                    delta = (basis_inv.id2path(ie.file_id), path,
                        ie.file_id, ie)
                else:
                    # add
                    delta = (None, path, ie.file_id, ie)
                self._basis_delta.append(delta)
                return delta, False, None
            else:
                # we don't need to commit this, because the caller already
                # determined that an existing revision of this file is
                # appropriate. If it's not being considered for committing then
                # it and all its parents to the root must be unaltered so
                # no-change against the basis.
                if ie.revision == self._new_revision_id:
                    raise AssertionError("Impossible situation, a skipped "
                        "inventory entry (%r) claims to be modified in this "
                        "commit (%r).", (ie, self._new_revision_id))
                return None, False, None
        # XXX: Friction: parent_candidates should return a list not a dict
        #      so that we don't have to walk the inventories again.
        parent_candiate_entries = ie.parent_candidates(parent_invs)
        head_set = self._heads(ie.file_id, parent_candiate_entries.keys())
        heads = []
        for inv in parent_invs:
            if ie.file_id in inv:
                old_rev = inv[ie.file_id].revision
                if old_rev in head_set:
                    heads.append(inv[ie.file_id].revision)
                    head_set.remove(inv[ie.file_id].revision)

        store = False
        # now we check to see if we need to write a new record to the
        # file-graph.
        # We write a new entry unless there is one head to the ancestors, and
        # the kind-derived content is unchanged.

        # Cheapest check first: no ancestors, or more than one head in the
        # ancestors, we write a new node.
        if len(heads) != 1:
            store = True
        if not store:
            # There is a single head, look it up for comparison
            parent_entry = parent_candiate_entries[heads[0]]
            # if the non-content specific data has changed, we'll be writing a
            # node:
            if (parent_entry.parent_id != ie.parent_id or
                parent_entry.name != ie.name):
                store = True
        # now we need to do content specific checks:
        if not store:
            # if the kind changed the content obviously has
            if kind != parent_entry.kind:
                store = True
        # Stat cache fingerprint feedback for the caller - None as we usually
        # don't generate one.
        fingerprint = None
        if kind == 'file':
            if content_summary[2] is None:
                raise ValueError("Files must not have executable = None")
            if not store:
                # We can't trust a check of the file length because of content
                # filtering...
                if (# if the exec bit has changed we have to store:
                    parent_entry.executable != content_summary[2]):
                    store = True
                elif parent_entry.text_sha1 == content_summary[3]:
                    # all meta and content is unchanged (using a hash cache
                    # hit to check the sha)
                    ie.revision = parent_entry.revision
                    ie.text_size = parent_entry.text_size
                    ie.text_sha1 = parent_entry.text_sha1
                    ie.executable = parent_entry.executable
                    return self._get_delta(ie, basis_inv, path), False, None
                else:
                    # Either there is only a hash change(no hash cache entry,
                    # or same size content change), or there is no change on
                    # this file at all.
                    # Provide the parent's hash to the store layer, so that if
                    # the content is unchanged we will not store a new node.
                    nostore_sha = parent_entry.text_sha1
            if store:
                # We want to record a new node regardless of the presence or
                # absence of a content change in the file.
                nostore_sha = None
            ie.executable = content_summary[2]
            file_obj, stat_value = tree.get_file_with_stat(ie.file_id, path)
            try:
                text = file_obj.read()
            finally:
                file_obj.close()
            try:
                ie.text_sha1, ie.text_size = self._add_text_to_weave(
                    ie.file_id, text, heads, nostore_sha)
                # Let the caller know we generated a stat fingerprint.
                fingerprint = (ie.text_sha1, stat_value)
            except errors.ExistingContent:
                # Turns out that the file content was unchanged, and we were
                # only going to store a new node if it was changed. Carry over
                # the entry.
                ie.revision = parent_entry.revision
                ie.text_size = parent_entry.text_size
                ie.text_sha1 = parent_entry.text_sha1
                ie.executable = parent_entry.executable
                return self._get_delta(ie, basis_inv, path), False, None
        elif kind == 'directory':
            if not store:
                # all data is meta here, nothing specific to directory, so
                # carry over:
                ie.revision = parent_entry.revision
                return self._get_delta(ie, basis_inv, path), False, None
            self._add_text_to_weave(ie.file_id, '', heads, None)
        elif kind == 'symlink':
            current_link_target = content_summary[3]
            if not store:
                # symlink target is not generic metadata, check if it has
                # changed.
                if current_link_target != parent_entry.symlink_target:
                    store = True
            if not store:
                # unchanged, carry over.
                ie.revision = parent_entry.revision
                ie.symlink_target = parent_entry.symlink_target
                return self._get_delta(ie, basis_inv, path), False, None
            ie.symlink_target = current_link_target
            self._add_text_to_weave(ie.file_id, '', heads, None)
        elif kind == 'tree-reference':
            if not store:
                if content_summary[3] != parent_entry.reference_revision:
                    store = True
            if not store:
                # unchanged, carry over.
                ie.reference_revision = parent_entry.reference_revision
                ie.revision = parent_entry.revision
                return self._get_delta(ie, basis_inv, path), False, None
            ie.reference_revision = content_summary[3]
            if ie.reference_revision is None:
                raise AssertionError("invalid content_summary for nested tree: %r"
                    % (content_summary,))
            self._add_text_to_weave(ie.file_id, '', heads, None)
        else:
            raise NotImplementedError('unknown kind')
        ie.revision = self._new_revision_id
        self._any_changes = True
        return self._get_delta(ie, basis_inv, path), True, fingerprint

    def record_iter_changes(self, tree, basis_revision_id, iter_changes,
        _entry_factory=entry_factory):
        """Record a new tree via iter_changes.

        :param tree: The tree to obtain text contents from for changed objects.
        :param basis_revision_id: The revision id of the tree the iter_changes
            has been generated against. Currently assumed to be the same
            as self.parents[0] - if it is not, errors may occur.
        :param iter_changes: An iter_changes iterator with the changes to apply
            to basis_revision_id. The iterator must not include any items with
            a current kind of None - missing items must be either filtered out
            or errored-on before record_iter_changes sees the item.
        :param _entry_factory: Private method to bind entry_factory locally for
            performance.
        :return: A generator of (file_id, relpath, fs_hash) tuples for use with
            tree._observed_sha1.
        """
        # Create an inventory delta based on deltas between all the parents and
        # deltas between all the parent inventories. We use inventory deltas
        # between the inventory objects because iter_changes masks
        # last-changed-field only changes.
        # Working data:
        # file_id -> change map, change is fileid, paths, changed, versioneds,
        # parents, names, kinds, executables
        merged_ids = {}
        # {file_id -> revision_id -> inventory entry, for entries in parent
        # trees that are not parents[0]
        parent_entries = {}
        ghost_basis = False
        try:
            revtrees = list(self.repository.revision_trees(self.parents))
        except errors.NoSuchRevision:
            # one or more ghosts, slow path.
            revtrees = []
            for revision_id in self.parents:
                try:
                    revtrees.append(self.repository.revision_tree(revision_id))
                except errors.NoSuchRevision:
                    if not revtrees:
                        basis_revision_id = _mod_revision.NULL_REVISION
                        ghost_basis = True
                    revtrees.append(self.repository.revision_tree(
                        _mod_revision.NULL_REVISION))
        # The basis inventory from a repository
        if revtrees:
            basis_inv = revtrees[0].inventory
        else:
            basis_inv = self.repository.revision_tree(
                _mod_revision.NULL_REVISION).inventory
        if len(self.parents) > 0:
            if basis_revision_id != self.parents[0] and not ghost_basis:
                raise Exception(
                    "arbitrary basis parents not yet supported with merges")
            for revtree in revtrees[1:]:
                for change in revtree.inventory._make_delta(basis_inv):
                    if change[1] is None:
                        # Not present in this parent.
                        continue
                    if change[2] not in merged_ids:
                        if change[0] is not None:
                            basis_entry = basis_inv[change[2]]
                            merged_ids[change[2]] = [
                                # basis revid
                                basis_entry.revision,
                                # new tree revid
                                change[3].revision]
                            parent_entries[change[2]] = {
                                # basis parent
                                basis_entry.revision:basis_entry,
                                # this parent
                                change[3].revision:change[3],
                                }
                        else:
                            merged_ids[change[2]] = [change[3].revision]
                            parent_entries[change[2]] = {change[3].revision:change[3]}
                    else:
                        merged_ids[change[2]].append(change[3].revision)
                        parent_entries[change[2]][change[3].revision] = change[3]
        else:
            merged_ids = {}
        # Setup the changes from the tree:
        # changes maps file_id -> (change, [parent revision_ids])
        changes = {}
        for change in iter_changes:
            # This probably looks up in basis_inv way too much.
            if change[1][0] is not None:
                head_candidate = [basis_inv[change[0]].revision]
            else:
                head_candidate = []
            changes[change[0]] = change, merged_ids.get(change[0],
                head_candidate)
        unchanged_merged = set(merged_ids) - set(changes)
        # Extend the changes dict with synthetic changes to record merges of
        # texts.
        for file_id in unchanged_merged:
            # Record a merged version of these items that did not change vs the
            # basis. This can be either identical parallel changes, or a revert
            # of a specific file after a merge. The recorded content will be
            # that of the current tree (which is the same as the basis), but
            # the per-file graph will reflect a merge.
            # NB:XXX: We are reconstructing path information we had, this
            # should be preserved instead.
            # inv delta change: (file_id, (path_in_source, path_in_target),
            #   changed_content, versioned, parent, name, kind,
            #   executable)
            try:
                basis_entry = basis_inv[file_id]
            except errors.NoSuchId:
                # a change from basis->some_parents but file_id isn't in basis
                # so was new in the merge, which means it must have changed
                # from basis -> current, and as it hasn't the add was reverted
                # by the user. So we discard this change.
                pass
            else:
                change = (file_id,
                    (basis_inv.id2path(file_id), tree.id2path(file_id)),
                    False, (True, True),
                    (basis_entry.parent_id, basis_entry.parent_id),
                    (basis_entry.name, basis_entry.name),
                    (basis_entry.kind, basis_entry.kind),
                    (basis_entry.executable, basis_entry.executable))
                changes[file_id] = (change, merged_ids[file_id])
        # changes contains tuples with the change and a set of inventory
        # candidates for the file.
        # inv delta is:
        # old_path, new_path, file_id, new_inventory_entry
        seen_root = False # Is the root in the basis delta?
        inv_delta = self._basis_delta
        modified_rev = self._new_revision_id
        for change, head_candidates in changes.values():
            if change[3][1]: # versioned in target.
                # Several things may be happening here:
 | 
693  | 
                # We may have a fork in the per-file graph
 | 
|
694  | 
                #  - record a change with the content from tree
 | 
|
695  | 
                # We may have a change against fewer than all trees
 | 
|
696  | 
                #  - carry over the tree that hasn't changed
 | 
|
697  | 
                # We may have a change against all trees
 | 
|
698  | 
                #  - record the change with the content from tree
 | 
|
| 
3775.2.11
by Robert Collins
 CommitBuilder handles renamed directory and unmodified entries with single parents, for record_iter_changes.  | 
699  | 
kind = change[6][1]  | 
| 
3775.2.12
by Robert Collins
 CommitBuilder.record_iter_changes handles renamed files.  | 
700  | 
file_id = change[0]  | 
701  | 
entry = _entry_factory[kind](file_id, change[5][1],  | 
|
702  | 
change[4][1])  | 
|
| 
3775.2.19
by Robert Collins
 CommitBuilder.record_iter_changes handles merged directories.  | 
703  | 
head_set = self._heads(change[0], set(head_candidates))  | 
704  | 
heads = []  | 
|
705  | 
                # Preserve ordering.
 | 
|
706  | 
for head_candidate in head_candidates:  | 
|
707  | 
if head_candidate in head_set:  | 
|
708  | 
heads.append(head_candidate)  | 
|
709  | 
head_set.remove(head_candidate)  | 
|
| 
3775.2.22
by Robert Collins
 CommitBuilder.record_iter_changes handles changed-in-branch directories.  | 
710  | 
carried_over = False  | 
| 
3775.2.33
by Robert Collins
 Fix bug with merges of new files, increasing test coverage to ensure its kept fixed.  | 
711  | 
if len(heads) == 1:  | 
| 
3775.2.22
by Robert Collins
 CommitBuilder.record_iter_changes handles changed-in-branch directories.  | 
712  | 
                    # Could be a carry-over situation:
 | 
| 
3775.2.34
by Robert Collins
 Handle committing new files again.  | 
713  | 
parent_entry_revs = parent_entries.get(file_id, None)  | 
714  | 
if parent_entry_revs:  | 
|
715  | 
parent_entry = parent_entry_revs.get(heads[0], None)  | 
|
716  | 
else:  | 
|
717  | 
parent_entry = None  | 
|
| 
3775.2.22
by Robert Collins
 CommitBuilder.record_iter_changes handles changed-in-branch directories.  | 
718  | 
if parent_entry is None:  | 
719  | 
                        # The parent iter_changes was called against is the one
 | 
|
720  | 
                        # that is the per-file head, so any change reported by
 | 
|
721  | 
                        # iter_changes is valid.
 | 
|
722  | 
carry_over_possible = False  | 
|
723  | 
else:  | 
|
724  | 
                        # could be a carry over situation
 | 
|
725  | 
                        # A change against the basis may just indicate a merge,
 | 
|
726  | 
                        # we need to check the content against the source of the
 | 
|
727  | 
                        # merge to determine if it was changed after the merge
 | 
|
728  | 
                        # or carried over.
 | 
|
| 
3775.2.23
by Robert Collins
 CommitBuilder.record_iter_changes handles changed-in-branch files.  | 
729  | 
if (parent_entry.kind != entry.kind or  | 
730  | 
parent_entry.parent_id != entry.parent_id or  | 
|
| 
3775.2.22
by Robert Collins
 CommitBuilder.record_iter_changes handles changed-in-branch directories.  | 
731  | 
parent_entry.name != entry.name):  | 
732  | 
                            # Metadata common to all entries has changed
 | 
|
733  | 
                            # against per-file parent
 | 
|
734  | 
carry_over_possible = False  | 
|
735  | 
else:  | 
|
736  | 
carry_over_possible = True  | 
|
737  | 
                        # per-type checks for changes against the parent_entry
 | 
|
738  | 
                        # are done below.
 | 
|
739  | 
else:  | 
|
740  | 
                    # Cannot be a carry-over situation
 | 
|
741  | 
carry_over_possible = False  | 
|
742  | 
                # Populate the entry in the delta
 | 
|
743  | 
if kind == 'file':  | 
|
| 
3775.2.23
by Robert Collins
 CommitBuilder.record_iter_changes handles changed-in-branch files.  | 
744  | 
                    # XXX: There is still a small race here: If someone reverts the content of a file
 | 
745  | 
                    # after iter_changes examines and decides it has changed,
 | 
|
746  | 
                    # we will unconditionally record a new version even if some
 | 
|
747  | 
                    # other process reverts it while commit is running (with
 | 
|
748  | 
                    # the revert happening after iter_changes did its
 | 
|
749  | 
                    # examination).
 | 
|
750  | 
if change[7][1]:  | 
|
751  | 
entry.executable = True  | 
|
752  | 
else:  | 
|
753  | 
entry.executable = False  | 
|
| 
4398.8.1
by John Arbash Meinel
 Add a VersionedFile.add_text() api.  | 
754  | 
if (carry_over_possible and  | 
| 
3775.2.23
by Robert Collins
 CommitBuilder.record_iter_changes handles changed-in-branch files.  | 
755  | 
parent_entry.executable == entry.executable):  | 
756  | 
                            # Check the file length, content hash after reading
 | 
|
757  | 
                            # the file.
 | 
|
758  | 
nostore_sha = parent_entry.text_sha1  | 
|
759  | 
else:  | 
|
760  | 
nostore_sha = None  | 
|
761  | 
file_obj, stat_value = tree.get_file_with_stat(file_id, change[1][1])  | 
|
762  | 
try:  | 
|
| 
4398.8.1
by John Arbash Meinel
 Add a VersionedFile.add_text() api.  | 
763  | 
text = file_obj.read()  | 
| 
3775.2.23
by Robert Collins
 CommitBuilder.record_iter_changes handles changed-in-branch files.  | 
764  | 
finally:  | 
765  | 
file_obj.close()  | 
|
766  | 
try:  | 
|
767  | 
entry.text_sha1, entry.text_size = self._add_text_to_weave(  | 
|
| 
4398.8.1
by John Arbash Meinel
 Add a VersionedFile.add_text() api.  | 
768  | 
file_id, text, heads, nostore_sha)  | 
| 
4183.5.4
by Robert Collins
 Turn record_iter_changes into a generator to emit file system hashes.  | 
769  | 
yield file_id, change[1][1], (entry.text_sha1, stat_value)  | 
| 
3775.2.23
by Robert Collins
 CommitBuilder.record_iter_changes handles changed-in-branch files.  | 
770  | 
except errors.ExistingContent:  | 
771  | 
                        # No content change against a carry_over parent
 | 
|
| 
4183.5.4
by Robert Collins
 Turn record_iter_changes into a generator to emit file system hashes.  | 
772  | 
                        # Perhaps this should also yield a fs hash update?
 | 
| 
3775.2.23
by Robert Collins
 CommitBuilder.record_iter_changes handles changed-in-branch files.  | 
773  | 
carried_over = True  | 
774  | 
entry.text_size = parent_entry.text_size  | 
|
775  | 
entry.text_sha1 = parent_entry.text_sha1  | 
|
| 
3775.2.22
by Robert Collins
 CommitBuilder.record_iter_changes handles changed-in-branch directories.  | 
776  | 
elif kind == 'symlink':  | 
| 
3775.2.24
by Robert Collins
 CommitBuilder.record_iter_changes handles changed-in-branch symlinks.  | 
777  | 
                    # Wants a path hint?
 | 
778  | 
entry.symlink_target = tree.get_symlink_target(file_id)  | 
|
779  | 
if (carry_over_possible and  | 
|
780  | 
parent_entry.symlink_target == entry.symlink_target):  | 
|
| 
4183.5.2
by Robert Collins
 Support tree-reference in record_iter_changes.  | 
781  | 
carried_over = True  | 
| 
3775.2.22
by Robert Collins
 CommitBuilder.record_iter_changes handles changed-in-branch directories.  | 
782  | 
else:  | 
| 
4398.8.5
by John Arbash Meinel
 Fix a few more cases where we were adding a list rather than an empty string.  | 
783  | 
self._add_text_to_weave(change[0], '', heads, None)  | 
| 
3775.2.22
by Robert Collins
 CommitBuilder.record_iter_changes handles changed-in-branch directories.  | 
784  | 
elif kind == 'directory':  | 
785  | 
if carry_over_possible:  | 
|
786  | 
carried_over = True  | 
|
787  | 
else:  | 
|
| 
3775.2.19
by Robert Collins
 CommitBuilder.record_iter_changes handles merged directories.  | 
788  | 
                        # Nothing to set on the entry.
 | 
789  | 
                        # XXX: split into the Root and nonRoot versions.
 | 
|
790  | 
if change[1][1] != '' or self.repository.supports_rich_root():  | 
|
| 
4398.8.5
by John Arbash Meinel
 Fix a few more cases where we were adding a list rather than an empty string.  | 
791  | 
self._add_text_to_weave(change[0], '', heads, None)  | 
| 
3775.2.22
by Robert Collins
 CommitBuilder.record_iter_changes handles changed-in-branch directories.  | 
792  | 
elif kind == 'tree-reference':  | 
| 
4183.5.2
by Robert Collins
 Support tree-reference in record_iter_changes.  | 
793  | 
if not self.repository._format.supports_tree_reference:  | 
794  | 
                        # This isn't quite sane as an error, but we shouldn't
 | 
|
795  | 
                        # ever see this code path in practice: trees don't
 | 
|
796  | 
                        # permit references when the repo doesn't support tree
 | 
|
797  | 
                        # references.
 | 
|
798  | 
raise errors.UnsupportedOperation(tree.add_reference,  | 
|
799  | 
self.repository)  | 
|
| 
4496.3.1
by Andrew Bennetts
 Fix undefined local and remove unused import in repository.py.  | 
800  | 
reference_revision = tree.get_reference_revision(change[0])  | 
801  | 
entry.reference_revision = reference_revision  | 
|
| 
4183.5.2
by Robert Collins
 Support tree-reference in record_iter_changes.  | 
802  | 
if (carry_over_possible and  | 
803  | 
parent_entry.reference_revision == reference_revision):  | 
|
804  | 
carried_over = True  | 
|
805  | 
else:  | 
|
| 
4398.8.5
by John Arbash Meinel
 Fix a few more cases where we were adding a list rather than an empty string.  | 
806  | 
self._add_text_to_weave(change[0], '', heads, None)  | 
| 
3775.2.22
by Robert Collins
 CommitBuilder.record_iter_changes handles changed-in-branch directories.  | 
807  | 
else:  | 
| 
3775.2.27
by Robert Collins
 CommitBuilder.record_iter_changes handles files becoming directories and links.  | 
808  | 
raise AssertionError('unknown kind %r' % kind)  | 
| 
3775.2.22
by Robert Collins
 CommitBuilder.record_iter_changes handles changed-in-branch directories.  | 
809  | 
if not carried_over:  | 
810  | 
entry.revision = modified_rev  | 
|
| 
3775.2.23
by Robert Collins
 CommitBuilder.record_iter_changes handles changed-in-branch files.  | 
811  | 
else:  | 
812  | 
entry.revision = parent_entry.revision  | 
|
| 
3775.2.4
by Robert Collins
 Start on a CommitBuilder.record_iter_changes method.  | 
813  | 
else:  | 
814  | 
entry = None  | 
|
| 
3775.2.7
by Robert Collins
 CommitBuilder handles no-change commits to roots properly with record_iter_changes.  | 
815  | 
new_path = change[1][1]  | 
816  | 
inv_delta.append((change[1][0], new_path, change[0], entry))  | 
|
817  | 
if new_path == '':  | 
|
818  | 
seen_root = True  | 
|
| 
3775.2.4
by Robert Collins
 Start on a CommitBuilder.record_iter_changes method.  | 
819  | 
self.new_inventory = None  | 
| 
3775.2.7
by Robert Collins
 CommitBuilder handles no-change commits to roots properly with record_iter_changes.  | 
820  | 
if len(inv_delta):  | 
| 
4570.4.3
by Robert Collins
 Fix a couple of small bugs in the patch - use specific files with record_iter_changs, and the CLI shouldn't generate a filter of [] for commit.  | 
821  | 
            # This should perhaps be guarded by a check that the basis we
 | 
822  | 
            # commit against is the basis for the commit and if not do a delta
 | 
|
823  | 
            # against the basis.
 | 
|
| 
3775.2.9
by Robert Collins
 CommitBuilder handles deletes via record_iter_entries.  | 
824  | 
self._any_changes = True  | 
| 
3775.2.7
by Robert Collins
 CommitBuilder handles no-change commits to roots properly with record_iter_changes.  | 
825  | 
if not seen_root:  | 
826  | 
            # housekeeping root entry changes do not affect no-change commits.
 | 
|
| 
3775.2.9
by Robert Collins
 CommitBuilder handles deletes via record_iter_entries.  | 
827  | 
self._require_root_change(tree)  | 
| 
3775.2.29
by Robert Collins
 Updates to the form of add_inventory_by_delta that landed in trunk.  | 
828  | 
self.basis_delta_revision = basis_revision_id  | 
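A rough usage sketch of the generator above, assuming `repo` is write-locked and inside a write group; `branch`, `work_tree`, `basis_id` and `iter_changes` are hypothetical stand-ins for values a real commit operation supplies:

    builder = repo.get_commit_builder(branch, ['parent-rev-id'], branch.get_config())
    for file_id, path, fs_hash in builder.record_iter_changes(
            work_tree, basis_id, iter_changes):
        # fs_hash is (sha1, stat_value) for files read while recording; a
        # caller such as commit can feed it back into the tree's hash cache.
        pass
    builder.finish_inventory()
    new_rev_id = builder.commit('illustrative commit message')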
| 
3775.2.4
by Robert Collins
 Start on a CommitBuilder.record_iter_changes method.  | 
829  | 
|
| 
4398.8.1
by John Arbash Meinel
 Add a VersionedFile.add_text() api.  | 
830  | 
def _add_text_to_weave(self, file_id, new_text, parents, nostore_sha):  | 
| 
4398.8.6
by John Arbash Meinel
 Switch the api from VF.add_text to VF._add_text and trim some extra 'features'.  | 
831  | 
parent_keys = tuple([(file_id, parent) for parent in parents])  | 
832  | 
return self.repository.texts._add_text(  | 
|
| 
4398.8.1
by John Arbash Meinel
 Add a VersionedFile.add_text() api.  | 
833  | 
(file_id, self._new_revision_id), parent_keys, new_text,  | 
| 
4398.8.6
by John Arbash Meinel
 Switch the api from VF.add_text to VF._add_text and trim some extra 'features'.  | 
834  | 
nostore_sha=nostore_sha, random_id=self.random_revid)[0:2]  | 
| 
2592.3.135
by Robert Collins
 Do not create many transient knit objects, saving 4% on commit.  | 
835  | 
|
836  | 
||
837  | 
class RootCommitBuilder(CommitBuilder):  | 
|
838  | 
"""This commitbuilder actually records the root id"""  | 
|
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
839  | 
|
| 
2825.5.2
by Robert Collins
 Review feedback, and fix pointless commits with nested trees to raise PointlessCommit appropriately.  | 
840  | 
    # the root entry gets versioned properly by this builder.
 | 
| 
2840.1.1
by Ian Clatworthy
 faster pointless commit detection (Robert Collins)  | 
841  | 
_versioned_root = True  | 
| 
2825.5.2
by Robert Collins
 Review feedback, and fix pointless commits with nested trees to raise PointlessCommit appropriately.  | 
842  | 
|
| 
2592.3.135
by Robert Collins
 Do not create many transient knit objects, saving 4% on commit.  | 
843  | 
def _check_root(self, ie, parent_invs, tree):  | 
844  | 
"""Helper for record_entry_contents.  | 
|
845  | 
||
846  | 
        :param ie: An entry being added.
 | 
|
847  | 
        :param parent_invs: The inventories of the parent revisions of the
 | 
|
848  | 
            commit.
 | 
|
849  | 
        :param tree: The tree that is being committed.
 | 
|
850  | 
        """
 | 
|
851  | 
||
| 
3775.2.9
by Robert Collins
 CommitBuilder handles deletes via record_iter_entries.  | 
852  | 
def _require_root_change(self, tree):  | 
| 
3775.2.7
by Robert Collins
 CommitBuilder handles no-change commits to roots properly with record_iter_changes.  | 
853  | 
"""Enforce an appropriate root object change.  | 
854  | 
||
855  | 
        This is called once when record_iter_changes is called, if and only if
 | 
|
856  | 
        the root was not in the delta calculated by record_iter_changes.
 | 
|
| 
3775.2.9
by Robert Collins
 CommitBuilder handles deletes via record_iter_entries.  | 
857  | 
|
858  | 
        :param tree: The tree which is being committed.
 | 
|
| 
3775.2.7
by Robert Collins
 CommitBuilder handles no-change commits to roots properly with record_iter_changes.  | 
859  | 
        """
 | 
860  | 
        # versioned roots do not change unless the tree found a change.
 | 
|
861  | 
||
| 
2592.3.135
by Robert Collins
 Do not create many transient knit objects, saving 4% on commit.  | 
862  | 
|
| 
2220.2.3
by Martin Pool
 Add tag: revision namespace.  | 
863  | 
######################################################################
 | 
864  | 
# Repositories
 | 
|
865  | 
||
| 
4509.3.21
by Martin Pool
 Add new RepositoryBase class, shared by RemoteRepository  | 
866  | 
|
| 
5158.6.4
by Martin Pool
 Repository implements ControlComponent too  | 
867  | 
class Repository(_RelockDebugMixin, bzrdir.ControlComponent):  | 
| 
1185.70.3
by Martin Pool
 Various updates to make storage branch mergeable:  | 
868  | 
"""Repository holding history for one or more branches.  | 
869  | 
||
870  | 
    The repository holds and retrieves historical information including
 | 
|
871  | 
    revisions and file history.  It's normally accessed only by the Branch,
 | 
|
872  | 
    which views a particular line of development through that history.
 | 
|
873  | 
||
| 
3350.6.7
by Robert Collins
 Review feedback, making things more clear, adding documentation on what is used where.  | 
874  | 
    The Repository builds on top of some byte storage facilities (the revisions,
 | 
| 
3735.2.1
by Robert Collins
 Add the concept of CHK lookups to Repository.  | 
875  | 
    signatures, inventories, texts and chk_bytes attributes) and a Transport,
 | 
876  | 
    which respectively provide byte storage and a means to access the (possibly
 | 
|
| 
1185.70.3
by Martin Pool
 Various updates to make storage branch mergeable:  | 
877  | 
    remote) disk.
 | 
| 
3407.2.13
by Martin Pool
 Remove indirection through control_files to get transports  | 
878  | 
|
| 
3350.6.7
by Robert Collins
 Review feedback, making things more clear, adding documentation on what is used where.  | 
879  | 
    The byte storage facilities are addressed via tuples, which we refer to
 | 
880  | 
    as 'keys' throughout the code base. Revision_keys, inventory_keys and
 | 
|
881  | 
    signature_keys are all 1-tuples: (revision_id,). text_keys are two-tuples:
 | 
|
| 
3735.2.1
by Robert Collins
 Add the concept of CHK lookups to Repository.  | 
882  | 
    (file_id, revision_id). chk_bytes uses CHK keys - a 1-tuple with a single
 | 
| 
3735.2.99
by John Arbash Meinel
 Merge bzr.dev 4034. Whitespace cleanup  | 
883  | 
    byte string made up of a hash identifier and a hash value.
 | 
| 
3735.2.1
by Robert Collins
 Add the concept of CHK lookups to Repository.  | 
884  | 
    We use this interface because it allows low friction with the underlying
 | 
885  | 
    code that implements disk indices, network encoding and other parts of
 | 
|
886  | 
    bzrlib.
 | 
|
| 
3350.6.7
by Robert Collins
 Review feedback, making things more clear, adding documentation on what is used where.  | 
887  | 
|
| 
3350.6.4
by Robert Collins
 First cut at pluralised VersionedFiles. Some rather massive API incompatabilities, primarily because of the difficulty of coherence among competing stores.  | 
888  | 
    :ivar revisions: A bzrlib.versionedfile.VersionedFiles instance containing
 | 
889  | 
        the serialised revisions for the repository. This can be used to obtain
 | 
|
890  | 
        revision graph information or to access raw serialised revisions.
 | 
|
891  | 
        The result of trying to insert data into the repository via this store
 | 
|
892  | 
        is undefined: it should be considered read-only except for implementors
 | 
|
893  | 
        of repositories.
 | 
|
| 
3350.6.7
by Robert Collins
 Review feedback, making things more clear, adding documentation on what is used where.  | 
894  | 
    :ivar signatures: A bzrlib.versionedfile.VersionedFiles instance containing
 | 
895  | 
        the serialised signatures for the repository. This can be used to
 | 
|
896  | 
        obtain access to raw serialised signatures.  The result of trying to
 | 
|
897  | 
        insert data into the repository via this store is undefined: it should
 | 
|
898  | 
        be considered read-only except for implementors of repositories.
 | 
|
899  | 
    :ivar inventories: A bzrlib.versionedfile.VersionedFiles instance containing
 | 
|
900  | 
        the serialised inventories for the repository. This can be used to
 | 
|
901  | 
        obtain unserialised inventories.  The result of trying to insert data
 | 
|
902  | 
        into the repository via this store is undefined: it should be
 | 
|
903  | 
        considered read-only except for implementors of repositories.
 | 
|
904  | 
    :ivar texts: A bzrlib.versionedfile.VersionedFiles instance containing the
 | 
|
905  | 
        texts of files and directories for the repository. This can be used to
 | 
|
906  | 
        obtain file texts or file graphs. Note that Repository.iter_file_bytes
 | 
|
907  | 
        is usually a better interface for accessing file texts.
 | 
|
908  | 
        The result of trying to insert data into the repository via this store
 | 
|
909  | 
        is undefined: it should be considered read-only except for implementors
 | 
|
910  | 
        of repositories.
 | 
|
| 
4241.6.8
by Robert Collins, John Arbash Meinel, Ian Clatworthy, Vincent Ladeuil
 Add --development6-rich-root, disabling the legacy and unneeded development2 format, and activating the tests for CHK features disabled pending this format. (Robert Collins, John Arbash Meinel, Ian Clatworthy, Vincent Ladeuil)  | 
911  | 
    :ivar chk_bytes: A bzrlib.versionedfile.VersionedFiles instance containing
 | 
| 
3735.2.1
by Robert Collins
 Add the concept of CHK lookups to Repository.  | 
912  | 
        any data the repository chooses to store or have indexed by its hash.
 | 
913  | 
        The result of trying to insert data into the repository via this store
 | 
|
914  | 
        is undefined: it should be considered read-only except for implementors
 | 
|
915  | 
        of repositories.
 | 
|
| 
3407.2.13
by Martin Pool
 Remove indirection through control_files to get transports  | 
916  | 
    :ivar _transport: Transport for file access to repository, typically
 | 
917  | 
        pointing to .bzr/repository.
 | 
|
| 
1185.70.3
by Martin Pool
 Various updates to make storage branch mergeable:  | 
918  | 
    """
 | 
| 
1185.65.17
by Robert Collins
 Merge from integration, mode-changes are broken.  | 
919  | 
|
| 
2592.3.135
by Robert Collins
 Do not create many transient knit objects, saving 4% on commit.  | 
920  | 
    # What class to use for a CommitBuilder. Often it's simpler to change this
 | 
921  | 
    # in a Repository class subclass rather than to override
 | 
|
922  | 
    # get_commit_builder.
 | 
|
923  | 
_commit_builder_class = CommitBuilder  | 
|
924  | 
    # The search regex used by xml based repositories to determine what things
 | 
|
925  | 
    # were changed in a single commit.
 | 
|
| 
2163.2.1
by John Arbash Meinel
 Speed up the fileids_altered_by_revision_ids processing  | 
926  | 
_file_ids_altered_regex = lazy_regex.lazy_compile(  | 
927  | 
r'file_id="(?P<file_id>[^"]+)"'  | 
|
| 
2776.4.6
by Robert Collins
 Fixup various commit test failures falling out from the other commit changes.  | 
928  | 
r'.* revision="(?P<revision_id>[^"]+)"'  | 
| 
2163.2.1
by John Arbash Meinel
 Speed up the fileids_altered_by_revision_ids processing  | 
929  | 
        )
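For illustration only, the regex above is aimed at serialized XML inventory fragments of roughly this shape (the sample line is invented):

    import re
    sample = '<file file_id="foo-20050101-abc" name="foo" revision="rev-1"/>'
    match = re.search(
        r'file_id="(?P<file_id>[^"]+)".* revision="(?P<revision_id>[^"]+)"', sample)
    # match.group('file_id') -> 'foo-20050101-abc'
    # match.group('revision_id') -> 'rev-1'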
 | 
930  | 
||
| 
3825.4.1
by Andrew Bennetts
 Add suppress_errors to abort_write_group.  | 
931  | 
def abort_write_group(self, suppress_errors=False):  | 
| 
2617.6.2
by Robert Collins
 Add abort_write_group and wire write_groups into fetch and commit.  | 
932  | 
"""Commit the contents accrued within the current write group.  | 
933  | 
||
| 
3825.4.6
by Andrew Bennetts
 Document the suppress_errors flag in the docstring.  | 
934  | 
        :param suppress_errors: if true, abort_write_group will catch and log
 | 
935  | 
            unexpected errors that happen during the abort, rather than
 | 
|
936  | 
            allowing them to propagate.  Defaults to False.
 | 
|
937  | 
||
| 
2617.6.2
by Robert Collins
 Add abort_write_group and wire write_groups into fetch and commit.  | 
938  | 
        :seealso: start_write_group.
 | 
939  | 
        """
 | 
|
940  | 
if self._write_group is not self.get_transaction():  | 
|
941  | 
            # has an unlock or relock occurred?
 | 
|
| 
4476.3.16
by Andrew Bennetts
 Only make inv deltas against bases we've already sent, and other tweaks.  | 
942  | 
if suppress_errors:  | 
943  | 
mutter(  | 
|
944  | 
'(suppressed) mismatched lock context and write group. %r, %r',  | 
|
945  | 
self._write_group, self.get_transaction())  | 
|
946  | 
                return
 | 
|
| 
3735.2.9
by Robert Collins
 Get a working chk_map using inventory implementation bootstrapped.  | 
947  | 
raise errors.BzrError(  | 
948  | 
'mismatched lock context and write group. %r, %r' %  | 
|
949  | 
(self._write_group, self.get_transaction()))  | 
|
| 
3825.4.1
by Andrew Bennetts
 Add suppress_errors to abort_write_group.  | 
950  | 
try:  | 
951  | 
self._abort_write_group()  | 
|
952  | 
except Exception, exc:  | 
|
953  | 
self._write_group = None  | 
|
954  | 
if not suppress_errors:  | 
|
955  | 
                raise
 | 
|
956  | 
mutter('abort_write_group failed')  | 
|
957  | 
log_exception_quietly()  | 
|
958  | 
note('bzr: ERROR (ignored): %s', exc)  | 
|
| 
2617.6.2
by Robert Collins
 Add abort_write_group and wire write_groups into fetch and commit.  | 
959  | 
self._write_group = None  | 
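A minimal sketch of the write-group protocol this method participates in, assuming `repo` is an already-opened Repository:

    repo.lock_write()
    try:
        repo.start_write_group()
        try:
            pass  # insert revisions, inventories and texts here
        except:
            repo.abort_write_group(suppress_errors=True)
            raise
        else:
            repo.commit_write_group()
    finally:
        repo.unlock()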
960  | 
||
961  | 
def _abort_write_group(self):  | 
|
962  | 
"""Template method for per-repository write group cleanup.  | 
|
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
963  | 
|
964  | 
        This is called during abort before the write group is considered to be
 | 
|
| 
2617.6.2
by Robert Collins
 Add abort_write_group and wire write_groups into fetch and commit.  | 
965  | 
        finished and should clean up any internal state accrued during the write
 | 
966  | 
        group. There is no requirement that data handed to the repository be
 | 
|
967  | 
        *not* made available - this is not a rollback - but neither should any
 | 
|
968  | 
        attempt be made to ensure that data added is fully committed. Abort is
 | 
|
969  | 
        invoked when an error has occurred, so further disk or network operations
 | 
|
970  | 
        may not be possible or may error and if possible should not be
 | 
|
971  | 
        attempted.
 | 
|
972  | 
        """
 | 
|
973  | 
||
| 
3221.12.1
by Robert Collins
 Backport development1 format (stackable packs) to before-shallow-branches.  | 
974  | 
def add_fallback_repository(self, repository):  | 
975  | 
"""Add a repository to use for looking up data not held locally.  | 
|
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
976  | 
|
| 
3221.12.1
by Robert Collins
 Backport development1 format (stackable packs) to before-shallow-branches.  | 
977  | 
        :param repository: A repository.
 | 
978  | 
        """
 | 
|
979  | 
if not self._format.supports_external_lookups:  | 
|
980  | 
raise errors.UnstackableRepositoryFormat(self._format, self.base)  | 
|
| 
4379.2.2
by John Arbash Meinel
 Change the Repository.add_fallback_repository() contract slightly.  | 
981  | 
if self.is_locked():  | 
982  | 
            # This repository will call fallback.unlock() when we transition to
 | 
|
983  | 
            # the unlocked state, so we make sure to increment the lock count
 | 
|
984  | 
repository.lock_read()  | 
|
| 
3582.1.7
by Martin Pool
 add_fallback_repository gives more detail on incompatibilities  | 
985  | 
self._check_fallback_repository(repository)  | 
| 
3221.12.1
by Robert Collins
 Backport development1 format (stackable packs) to before-shallow-branches.  | 
986  | 
self._fallback_repositories.append(repository)  | 
| 
3221.12.13
by Robert Collins
 Implement generic stacking rather than pack-internals based stacking.  | 
987  | 
self.texts.add_fallback_versioned_files(repository.texts)  | 
988  | 
self.inventories.add_fallback_versioned_files(repository.inventories)  | 
|
989  | 
self.revisions.add_fallback_versioned_files(repository.revisions)  | 
|
990  | 
self.signatures.add_fallback_versioned_files(repository.signatures)  | 
|
| 
3735.2.9
by Robert Collins
 Get a working chk_map using inventory implementation bootstrapped.  | 
991  | 
if self.chk_bytes is not None:  | 
992  | 
self.chk_bytes.add_fallback_versioned_files(repository.chk_bytes)  | 
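A hedged sketch of wiring a fallback repository behind a stacked one; the locations are hypothetical and the stacked format must support external lookups:

    from bzrlib.bzrdir import BzrDir
    stacked = BzrDir.open('file:///path/to/stacked').open_repository()    # hypothetical
    fallback = BzrDir.open('file:///path/to/base').open_repository()      # hypothetical
    stacked.lock_read()
    try:
        # Keys missing from `stacked` will now also be looked up in `fallback`.
        stacked.add_fallback_repository(fallback)
    finally:
        stacked.unlock()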
|
| 
3221.12.1
by Robert Collins
 Backport development1 format (stackable packs) to before-shallow-branches.  | 
993  | 
|
| 
3582.1.7
by Martin Pool
 add_fallback_repository gives more detail on incompatibilities  | 
994  | 
def _check_fallback_repository(self, repository):  | 
| 
3221.12.4
by Robert Collins
 Implement basic repository supporting external references.  | 
995  | 
"""Check that this repository can fallback to repository safely.  | 
| 
3582.1.7
by Martin Pool
 add_fallback_repository gives more detail on incompatibilities  | 
996  | 
|
997  | 
        Raise an error if not.
 | 
|
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
998  | 
|
| 
3221.12.4
by Robert Collins
 Implement basic repository supporting external references.  | 
999  | 
        :param repository: A repository to fallback to.
 | 
1000  | 
        """
 | 
|
| 
3582.1.7
by Martin Pool
 add_fallback_repository gives more detail on incompatibilities  | 
1001  | 
return InterRepository._assert_same_model(self, repository)  | 
| 
3221.12.4
by Robert Collins
 Implement basic repository supporting external references.  | 
1002  | 
|
| 
2249.5.12
by John Arbash Meinel
 Change the APIs for VersionedFile, Store, and some of Repository into utf-8  | 
1003  | 
def add_inventory(self, revision_id, inv, parents):  | 
1004  | 
"""Add the inventory inv to the repository as revision_id.  | 
|
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
1005  | 
|
| 
2249.5.12
by John Arbash Meinel
 Change the APIs for VersionedFile, Store, and some of Repository into utf-8  | 
1006  | 
        :param parents: The revision ids of the parents that revision_id
 | 
| 
1570.1.2
by Robert Collins
 Import bzrtools' 'fix' command as 'bzr reconcile.'  | 
1007  | 
                        is known to have and are in the repository already.
 | 
1008  | 
||
| 
3169.2.1
by Robert Collins
 New method ``iter_inventories`` on Repository for access to many  | 
1009  | 
        :returns: The validator(which is a sha1 digest, though what is sha'd is
 | 
1010  | 
            repository format specific) of the serialized inventory.
 | 
|
| 
1570.1.2
by Robert Collins
 Import bzrtools' 'fix' command as 'bzr reconcile.'  | 
1011  | 
        """
 | 
| 
3376.2.4
by Martin Pool
 Remove every assert statement from bzrlib!  | 
1012  | 
if not self.is_in_write_group():  | 
1013  | 
raise AssertionError("%r not in write group" % (self,))  | 
|
| 
2249.5.12
by John Arbash Meinel
 Change the APIs for VersionedFile, Store, and some of Repository into utf-8  | 
1014  | 
_mod_revision.check_not_reserved_id(revision_id)  | 
| 
3376.2.4
by Martin Pool
 Remove every assert statement from bzrlib!  | 
1015  | 
if not (inv.revision_id is None or inv.revision_id == revision_id):  | 
1016  | 
raise AssertionError(  | 
|
1017  | 
                "Mismatch between inventory revision"
 | 
|
1018  | 
" id and insertion revid (%r, %r)"  | 
|
1019  | 
% (inv.revision_id, revision_id))  | 
|
1020  | 
if inv.root is None:  | 
|
1021  | 
raise AssertionError()  | 
|
| 
3735.2.9
by Robert Collins
 Get a working chk_map using inventory implementation bootstrapped.  | 
1022  | 
return self._add_inventory_checked(revision_id, inv, parents)  | 
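A small usage sketch, assuming `repo` is write-locked and inside a write group; the ids are illustrative:

    from bzrlib.inventory import Inventory
    inv = Inventory(root_id='tree-root-id', revision_id='new-rev-id')
    inv.root.revision = 'new-rev-id'
    validator = repo.add_inventory('new-rev-id', inv, ['parent-rev-id'])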
1023  | 
||
1024  | 
def _add_inventory_checked(self, revision_id, inv, parents):  | 
|
1025  | 
"""Add inv to the repository after checking the inputs.  | 
|
1026  | 
||
1027  | 
        This function can be overridden to allow different inventory styles.
 | 
|
1028  | 
||
1029  | 
        :seealso: add_inventory, for the contract.
 | 
|
1030  | 
        """
 | 
|
| 
5035.2.4
by Jelmer Vernooij
 Use correct function to serialise an inventory to a sequence of lines.  | 
1031  | 
inv_lines = self._serializer.write_inventory_to_lines(inv)  | 
| 
3350.6.4
by Robert Collins
 First cut at pluralised VersionedFiles. Some rather massive API incompatabilities, primarily because of the difficulty of coherence among competing stores.  | 
1032  | 
return self._inventory_add_lines(revision_id, parents,  | 
| 
2817.2.1
by Robert Collins
 * Inventory serialisation no longer double-sha's the content.  | 
1033  | 
inv_lines, check_content=False)  | 
| 
1570.1.2
by Robert Collins
 Import bzrtools' 'fix' command as 'bzr reconcile.'  | 
1034  | 
|
| 
3879.2.2
by John Arbash Meinel
 Rename add_inventory_delta to add_inventory_by_delta.  | 
1035  | 
def add_inventory_by_delta(self, basis_revision_id, delta, new_revision_id,  | 
| 
3735.2.121
by Ian Clatworthy
 add propagate_caches param to create_by_apply_delta, making fast-import 30% faster  | 
1036  | 
parents, basis_inv=None, propagate_caches=False):  | 
| 
3775.2.1
by Robert Collins
 Create bzrlib.repository.Repository.add_inventory_delta for adding inventories via deltas.  | 
1037  | 
"""Add a new inventory expressed as a delta against another revision.  | 
| 
3879.2.2
by John Arbash Meinel
 Rename add_inventory_delta to add_inventory_by_delta.  | 
1038  | 
|
| 
4501.1.1
by Robert Collins
 Add documentation describing how and why we use inventory deltas, and what can go wrong with them.  | 
1039  | 
        See the inventory developers documentation for the theory behind
 | 
1040  | 
        inventory deltas.
 | 
|
1041  | 
||
| 
3775.2.1
by Robert Collins
 Create bzrlib.repository.Repository.add_inventory_delta for adding inventories via deltas.  | 
1042  | 
        :param basis_revision_id: The inventory id the delta was created
 | 
| 
3879.2.2
by John Arbash Meinel
 Rename add_inventory_delta to add_inventory_by_delta.  | 
1043  | 
            against. (This does not have to be a direct parent.)
 | 
| 
3775.2.1
by Robert Collins
 Create bzrlib.repository.Repository.add_inventory_delta for adding inventories via deltas.  | 
1044  | 
        :param delta: The inventory delta (see Inventory.apply_delta for
 | 
1045  | 
            details).
 | 
|
1046  | 
        :param new_revision_id: The revision id that the inventory is being
 | 
|
1047  | 
            added for.
 | 
|
1048  | 
        :param parents: The revision ids of the parents that revision_id is
 | 
|
1049  | 
            known to have and are in the repository already. These are supplied
 | 
|
1050  | 
            for repositories that depend on the inventory graph for revision
 | 
|
1051  | 
            graph access, as well as for those that pun ancestry with delta
 | 
|
1052  | 
            compression.
 | 
|
| 
3735.2.120
by Ian Clatworthy
 allow a known basis inventory to be passed to Repository.add_inventory_by_delta()  | 
1053  | 
        :param basis_inv: The basis inventory if it is already known,
 | 
1054  | 
            otherwise None.
 | 
|
| 
3735.2.121
by Ian Clatworthy
 add propagate_caches param to create_by_apply_delta, making fast-import 30% faster  | 
1055  | 
        :param propagate_caches: If True, the caches for this inventory are
 | 
1056  | 
          copied to and updated for the result if possible.
 | 
|
| 
3775.2.1
by Robert Collins
 Create bzrlib.repository.Repository.add_inventory_delta for adding inventories via deltas.  | 
1057  | 
|
| 
3879.3.1
by John Arbash Meinel
 Change the return of add_inventory_by_delta to also return the Inventory.  | 
1058  | 
        :returns: (validator, new_inv)
 | 
1059  | 
            The validator (which is a sha1 digest, though what is sha'd is
 | 
|
1060  | 
            repository format specific) of the serialized inventory, and the
 | 
|
1061  | 
            resulting inventory.
 | 
|
| 
3775.2.1
by Robert Collins
 Create bzrlib.repository.Repository.add_inventory_delta for adding inventories via deltas.  | 
1062  | 
        """
 | 
1063  | 
if not self.is_in_write_group():  | 
|
1064  | 
raise AssertionError("%r not in write group" % (self,))  | 
|
1065  | 
_mod_revision.check_not_reserved_id(new_revision_id)  | 
|
1066  | 
basis_tree = self.revision_tree(basis_revision_id)  | 
|
1067  | 
basis_tree.lock_read()  | 
|
1068  | 
try:  | 
|
1069  | 
            # Note that this mutates the inventory of basis_tree, which not all
 | 
|
1070  | 
            # inventory implementations may support: A better idiom would be to
 | 
|
1071  | 
            # return a new inventory, but as there is no revision tree cache in
 | 
|
1072  | 
            # repository this is safe for now - RBC 20081013
 | 
|
| 
3735.2.120
by Ian Clatworthy
 allow a known basis inventory to be passed to Repository.add_inventory_by_delta()  | 
1073  | 
if basis_inv is None:  | 
1074  | 
basis_inv = basis_tree.inventory  | 
|
| 
3775.2.1
by Robert Collins
 Create bzrlib.repository.Repository.add_inventory_delta for adding inventories via deltas.  | 
1075  | 
basis_inv.apply_delta(delta)  | 
1076  | 
basis_inv.revision_id = new_revision_id  | 
|
| 
3879.3.1
by John Arbash Meinel
 Change the return of add_inventory_by_delta to also return the Inventory.  | 
1077  | 
return (self.add_inventory(new_revision_id, basis_inv, parents),  | 
| 
3735.2.59
by Jelmer Vernooij
 Make Repository.add_inventory_delta() return the resulting inventory.  | 
1078  | 
basis_inv)  | 
| 
3775.2.1
by Robert Collins
 Create bzrlib.repository.Repository.add_inventory_delta for adding inventories via deltas.  | 
1079  | 
finally:  | 
1080  | 
basis_tree.unlock()  | 
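A rough sketch of the delta shape this consumes, following the (old_path, new_path, file_id, new_entry) convention of Inventory.apply_delta; the ids are illustrative and `repo` is assumed to be inside a write group:

    from bzrlib.inventory import InventoryFile
    entry = InventoryFile('hello-file-id', 'hello.txt', 'tree-root-id')
    entry.revision = 'new-rev-id'
    entry.text_sha1 = 'sha1-of-stored-text'   # placeholder for the real text sha
    entry.text_size = 12
    delta = [(None, 'hello.txt', 'hello-file-id', entry)]   # a simple add
    validator, new_inv = repo.add_inventory_by_delta(
        'basis-rev-id', delta, 'new-rev-id', ['basis-rev-id'])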
|
1081  | 
||
| 
3350.6.4
by Robert Collins
 First cut at pluralised VersionedFiles. Some rather massive API incompatabilities, primarily because of the difficulty of coherence among competing stores.  | 
1082  | 
def _inventory_add_lines(self, revision_id, parents, lines,  | 
| 
2805.6.7
by Robert Collins
 Review feedback.  | 
1083  | 
check_content=True):  | 
| 
2817.2.1
by Robert Collins
 * Inventory serialisation no longer double-sha's the content.  | 
1084  | 
"""Store lines in inv_vf and return the sha1 of the inventory."""  | 
| 
3350.6.4
by Robert Collins
 First cut at pluralised VersionedFiles. Some rather massive API incompatabilities, primarily because of the difficulty of coherence among competing stores.  | 
1085  | 
parents = [(parent,) for parent in parents]  | 
| 
4476.3.53
by Andrew Bennetts
 Flush after adding an individual inventory, fixing more tests.  | 
1086  | 
result = self.inventories.add_lines((revision_id,), parents, lines,  | 
| 
2817.2.1
by Robert Collins
 * Inventory serialisation no longer double-sha's the content.  | 
1087  | 
check_content=check_content)[0]  | 
| 
4476.3.53
by Andrew Bennetts
 Flush after adding an individual inventory, fixing more tests.  | 
1088  | 
self.inventories._access.flush()  | 
1089  | 
return result  | 
|
| 
1740.3.6
by Jelmer Vernooij
 Move inventory writing to the commit builder.  | 
1090  | 
|
| 
2249.5.12
by John Arbash Meinel
 Change the APIs for VersionedFile, Store, and some of Repository into utf-8  | 
1091  | 
def add_revision(self, revision_id, rev, inv=None, config=None):  | 
1092  | 
"""Add rev to the revision store as revision_id.  | 
|
| 
1570.1.2
by Robert Collins
 Import bzrtools' 'fix' command as 'bzr reconcile.'  | 
1093  | 
|
| 
2249.5.12
by John Arbash Meinel
 Change the APIs for VersionedFile, Store, and some of Repository into utf-8  | 
1094  | 
        :param revision_id: the revision id to use.
 | 
| 
1570.1.2
by Robert Collins
 Import bzrtools' 'fix' command as 'bzr reconcile.'  | 
1095  | 
        :param rev: The revision object.
 | 
1096  | 
        :param inv: The inventory for the revision. If None, it will be looked
 | 
|
1097  | 
                    up in the inventory store.
 | 
|
1098  | 
        :param config: If None, no digital signature will be created.
 | 
|
1099  | 
                       If supplied its signature_needed method will be used
 | 
|
1100  | 
                       to determine if a signature should be made.
 | 
|
1101  | 
        """
 | 
|
| 
2249.5.13
by John Arbash Meinel
 Finish auditing Repository, and fix generate_ids to always generate utf8 ids.  | 
1102  | 
        # TODO: jam 20070210 Shouldn't we check rev.revision_id and
 | 
1103  | 
        #       rev.parent_ids?
 | 
|
| 
2249.5.12
by John Arbash Meinel
 Change the APIs for VersionedFile, Store, and some of Repository into utf-8  | 
1104  | 
_mod_revision.check_not_reserved_id(revision_id)  | 
| 
1570.1.2
by Robert Collins
 Import bzrtools' 'fix' command as 'bzr reconcile.'  | 
1105  | 
if config is not None and config.signature_needed():  | 
1106  | 
if inv is None:  | 
|
| 
2249.5.12
by John Arbash Meinel
 Change the APIs for VersionedFile, Store, and some of Repository into utf-8  | 
1107  | 
inv = self.get_inventory(revision_id)  | 
| 
1570.1.2
by Robert Collins
 Import bzrtools' 'fix' command as 'bzr reconcile.'  | 
1108  | 
plaintext = Testament(rev, inv).as_short_text()  | 
1109  | 
self.store_revision_signature(  | 
|
| 
2249.5.12
by John Arbash Meinel
 Change the APIs for VersionedFile, Store, and some of Repository into utf-8  | 
1110  | 
gpg.GPGStrategy(config), plaintext, revision_id)  | 
| 
3350.6.4
by Robert Collins
 First cut at pluralised VersionedFiles. Some rather massive API incompatabilities, primarily because of the difficulty of coherence among competing stores.  | 
1111  | 
        # check inventory present
 | 
1112  | 
if not self.inventories.get_parent_map([(revision_id,)]):  | 
|
| 
1570.1.2
by Robert Collins
 Import bzrtools' 'fix' command as 'bzr reconcile.'  | 
1113  | 
if inv is None:  | 
| 
2249.5.12
by John Arbash Meinel
 Change the APIs for VersionedFile, Store, and some of Repository into utf-8  | 
1114  | 
raise errors.WeaveRevisionNotPresent(revision_id,  | 
| 
3350.6.4
by Robert Collins
 First cut at pluralised VersionedFiles. Some rather massive API incompatabilities, primarily because of the difficulty of coherence among competing stores.  | 
1115  | 
self.inventories)  | 
| 
1570.1.2
by Robert Collins
 Import bzrtools' 'fix' command as 'bzr reconcile.'  | 
1116  | 
else:  | 
1117  | 
                # yes, this is not suitable for adding with ghosts.
 | 
|
| 
3380.1.6
by Aaron Bentley
 Ensure fetching munges sha1s  | 
1118  | 
rev.inventory_sha1 = self.add_inventory(revision_id, inv,  | 
| 
3305.1.1
by Jelmer Vernooij
 Make sure that specifying the inv= argument to add_revision() sets the  | 
1119  | 
rev.parent_ids)  | 
| 
3380.1.6
by Aaron Bentley
 Ensure fetching munges sha1s  | 
1120  | 
else:  | 
| 
3350.8.3
by Robert Collins
 VF.get_sha1s needed changing to be stackable.  | 
1121  | 
key = (revision_id,)  | 
1122  | 
rev.inventory_sha1 = self.inventories.get_sha1s([key])[key]  | 
|
| 
3350.6.4
by Robert Collins
 First cut at pluralised VersionedFiles. Some rather massive API incompatabilities, primarily because of the difficulty of coherence among competing stores.  | 
1123  | 
self._add_revision(rev)  | 
| 
1570.1.2
by Robert Collins
 Import bzrtools' 'fix' command as 'bzr reconcile.'  | 
1124  | 
|
| 
3350.6.4
by Robert Collins
 First cut at pluralised VersionedFiles. Some rather massive API incompatabilities, primarily because of the difficulty of coherence among competing stores.  | 
1125  | 
def _add_revision(self, revision):  | 
1126  | 
text = self._serializer.write_revision_to_string(revision)  | 
|
1127  | 
key = (revision.revision_id,)  | 
|
1128  | 
parents = tuple((parent,) for parent in revision.parent_ids)  | 
|
1129  | 
self.revisions.add_lines(key, parents, osutils.split_lines(text))  | 
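A hedged sketch of adding a revision object inside a write group; every field value below is illustrative and `inv` stands for the matching inventory:

    from bzrlib.revision import Revision
    rev = Revision(revision_id='new-rev-id',
                   parent_ids=['parent-rev-id'],
                   committer='Jane Hacker <jane@example.com>',
                   message='illustrative commit',
                   timestamp=1234567890.0, timezone=0,
                   inventory_sha1='', properties={})
    repo.add_revision('new-rev-id', rev, inv=inv)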
|
| 
2520.4.10
by Aaron Bentley
 Enable installation of revisions  | 
1130  | 
|
| 
1732.2.4
by Martin Pool
 Split check into Branch.check and Repository.check  | 
1131  | 
def all_revision_ids(self):  | 
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
1132  | 
"""Returns a list of all the revision ids in the repository.  | 
| 
1732.2.4
by Martin Pool
 Split check into Branch.check and Repository.check  | 
1133  | 
|
| 
3221.12.1
by Robert Collins
 Backport development1 format (stackable packs) to before-shallow-branches.  | 
1134  | 
        This is conceptually deprecated because code should generally work on
 | 
1135  | 
        the graph reachable from a particular revision, and ignore any other
 | 
|
1136  | 
        revisions that might be present.  There is no direct replacement
 | 
|
1137  | 
        method.
 | 
|
| 
1732.2.4
by Martin Pool
 Split check into Branch.check and Repository.check  | 
1138  | 
        """
 | 
| 
2592.3.114
by Robert Collins
 More evil mutterings.  | 
1139  | 
if 'evil' in debug.debug_flags:  | 
| 
2850.3.1
by Robert Collins
 Move various weave specific code out of the base Repository class to weaverepo.py.  | 
1140  | 
mutter_callsite(2, "all_revision_ids is linear with history.")  | 
| 
3221.12.4
by Robert Collins
 Implement basic repository supporting external references.  | 
1141  | 
return self._all_revision_ids()  | 
| 
1732.2.4
by Martin Pool
 Split check into Branch.check and Repository.check  | 
1142  | 
|
1143  | 
def _all_revision_ids(self):  | 
|
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
1144  | 
"""Returns a list of all the revision ids in the repository.  | 
| 
1534.4.50
by Robert Collins
 Got the bzrdir api straightened out, plenty of refactoring to use it pending, but the api is up and running.  | 
1145  | 
|
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
1146  | 
        These are in as much topological order as the underlying store can
 | 
| 
2850.3.1
by Robert Collins
 Move various weave specific code out of the base Repository class to weaverepo.py.  | 
1147  | 
        present.
 | 
| 
1534.4.50
by Robert Collins
 Got the bzrdir api straightened out, plenty of refactoring to use it pending, but the api is up and running.  | 
1148  | 
        """
 | 
| 
2850.3.1
by Robert Collins
 Move various weave specific code out of the base Repository class to weaverepo.py.  | 
1149  | 
raise NotImplementedError(self._all_revision_ids)  | 
| 
1534.4.50
by Robert Collins
 Got the bzrdir api straightened out, plenty of refactoring to use it pending, but the api is up and running.  | 
1150  | 
|
| 
1687.1.7
by Robert Collins
 Teach Repository about break_lock.  | 
1151  | 
def break_lock(self):  | 
1152  | 
"""Break a lock if one is present from another instance.  | 
|
1153  | 
||
1154  | 
        Uses the ui factory to ask for confirmation if the lock may be from
 | 
|
1155  | 
        an active process.
 | 
|
1156  | 
        """
 | 
|
1157  | 
self.control_files.break_lock()  | 
|
1158  | 
||
| 
1534.4.50
by Robert Collins
 Got the bzrdir api straightened out, plenty of refactoring to use it pending, but the api is up and running.  | 
1159  | 
    @needs_read_lock
 | 
1160  | 
def _eliminate_revisions_not_present(self, revision_ids):  | 
|
1161  | 
"""Check every revision id in revision_ids to see if we have it.  | 
|
1162  | 
||
1163  | 
        Returns a set of the present revisions.
 | 
|
1164  | 
        """
 | 
|
| 
1534.4.41
by Robert Collins
 Branch now uses BzrDir reasonably sanely.  | 
1165  | 
result = []  | 
| 
3369.2.1
by John Arbash Meinel
 Knit => knit fetching also has some very bad 'for x in revision_ids: has_revision_id()' calls  | 
1166  | 
graph = self.get_graph()  | 
1167  | 
parent_map = graph.get_parent_map(revision_ids)  | 
|
1168  | 
        # The old API returned a list, should this actually be a set?
 | 
|
1169  | 
return parent_map.keys()  | 
|
| 
1534.4.41
by Robert Collins
 Branch now uses BzrDir reasonably sanely.  | 
1170  | 
|
| 
4332.3.25
by Robert Collins
 Checkpointing refactoring of inventory/file checks.  | 
1171  | 
def _check_inventories(self, checker):  | 
1172  | 
"""Check the inventories found from the revision scan.  | 
|
1173  | 
        
 | 
|
| 
4332.3.28
by Robert Collins
 Start checking file texts in a single pass.  | 
1174  | 
        This is responsible for verifying the sha1 of inventories and
 | 
1175  | 
        creating a pending_keys set that covers data referenced by inventories.
 | 
|
| 
4332.3.25
by Robert Collins
 Checkpointing refactoring of inventory/file checks.  | 
1176  | 
        """
 | 
| 
4332.3.28
by Robert Collins
 Start checking file texts in a single pass.  | 
1177  | 
bar = ui.ui_factory.nested_progress_bar()  | 
1178  | 
try:  | 
|
1179  | 
self._do_check_inventories(checker, bar)  | 
|
1180  | 
finally:  | 
|
1181  | 
bar.finished()  | 
|
1182  | 
||
1183  | 
def _do_check_inventories(self, checker, bar):  | 
|
1184  | 
"""Helper for _check_inventories."""  | 
|
| 
4332.3.25
by Robert Collins
 Checkpointing refactoring of inventory/file checks.  | 
1185  | 
revno = 0  | 
| 
4332.3.28
by Robert Collins
 Start checking file texts in a single pass.  | 
1186  | 
keys = {'chk_bytes':set(), 'inventories':set(), 'texts':set()}  | 
1187  | 
kinds = ['chk_bytes', 'texts']  | 
|
| 
4332.3.25
by Robert Collins
 Checkpointing refactoring of inventory/file checks.  | 
1188  | 
count = len(checker.pending_keys)  | 
| 
4332.3.28
by Robert Collins
 Start checking file texts in a single pass.  | 
1189  | 
bar.update("inventories", 0, 2)  | 
| 
4332.3.25
by Robert Collins
 Checkpointing refactoring of inventory/file checks.  | 
1190  | 
current_keys = checker.pending_keys  | 
1191  | 
checker.pending_keys = {}  | 
|
| 
4332.3.28
by Robert Collins
 Start checking file texts in a single pass.  | 
1192  | 
        # Accumulate current checks.
 | 
| 
4332.3.25
by Robert Collins
 Checkpointing refactoring of inventory/file checks.  | 
1193  | 
for key in current_keys:  | 
| 
4332.3.28
by Robert Collins
 Start checking file texts in a single pass.  | 
1194  | 
if key[0] != 'inventories' and key[0] not in kinds:  | 
1195  | 
checker._report_items.append('unknown key type %r' % (key,))  | 
|
1196  | 
keys[key[0]].add(key[1:])  | 
|
1197  | 
if keys['inventories']:  | 
|
1198  | 
            # NB: output order *should* be roughly sorted - topo or
 | 
|
1199  | 
            # inverse topo depending on repository - either way decent
 | 
|
1200  | 
            # to just delta against. However, pre-CHK formats didn't
 | 
|
1201  | 
            # try to optimise inventory layout on disk. As such the
 | 
|
1202  | 
            # pre-CHK code path does not use inventory deltas.
 | 
|
1203  | 
last_object = None  | 
|
1204  | 
for record in self.inventories.check(keys=keys['inventories']):  | 
|
1205  | 
if record.storage_kind == 'absent':  | 
|
1206  | 
checker._report_items.append(  | 
|
1207  | 
'Missing inventory {%s}' % (record.key,))  | 
|
1208  | 
else:  | 
|
1209  | 
last_object = self._check_record('inventories', record,  | 
|
1210  | 
checker, last_object,  | 
|
1211  | 
current_keys[('inventories',) + record.key])  | 
|
1212  | 
del keys['inventories']  | 
|
1213  | 
else:  | 
|
1214  | 
            return
 | 
|
1215  | 
bar.update("texts", 1)  | 
|
1216  | 
while (checker.pending_keys or keys['chk_bytes']  | 
|
1217  | 
or keys['texts']):  | 
|
1218  | 
            # Something to check.
 | 
|
1219  | 
current_keys = checker.pending_keys  | 
|
1220  | 
checker.pending_keys = {}  | 
|
1221  | 
            # Accumulate current checks.
 | 
|
1222  | 
for key in current_keys:  | 
|
1223  | 
if key[0] not in kinds:  | 
|
1224  | 
checker._report_items.append('unknown key type %r' % (key,))  | 
|
1225  | 
keys[key[0]].add(key[1:])  | 
|
1226  | 
            # Check the outermost kind only - inventories || chk_bytes || texts
 | 
|
1227  | 
for kind in kinds:  | 
|
1228  | 
if keys[kind]:  | 
|
1229  | 
last_object = None  | 
|
1230  | 
for record in getattr(self, kind).check(keys=keys[kind]):  | 
|
1231  | 
if record.storage_kind == 'absent':  | 
|
1232  | 
checker._report_items.append(  | 
|
| 
4657.1.1
by Robert Collins
 Do not add the root directory entry to the list of expected keys during check in non rich-root repositories. (#416732)  | 
1233  | 
'Missing %s {%s}' % (kind, record.key,))  | 
| 
4332.3.28
by Robert Collins
 Start checking file texts in a single pass.  | 
1234  | 
else:  | 
1235  | 
last_object = self._check_record(kind, record,  | 
|
1236  | 
checker, last_object, current_keys[(kind,) + record.key])  | 
|
1237  | 
keys[kind] = set()  | 
|
1238  | 
                    break
 | 
|
1239  | 
||
    def _check_record(self, kind, record, checker, last_object, item_data):
        """Check a single text from this repository."""
        if kind == 'inventories':
            rev_id = record.key[0]
            inv = self._deserialise_inventory(rev_id,
                record.get_bytes_as('fulltext'))
            if last_object is not None:
                delta = inv._make_delta(last_object)
                for old_path, path, file_id, ie in delta:
                    if ie is None:
                        continue
                    ie.check(checker, rev_id, inv)
            else:
                for path, ie in inv.iter_entries():
                    ie.check(checker, rev_id, inv)
            if self._format.fast_deltas:
                return inv
        elif kind == 'chk_bytes':
            # No code written to check chk_bytes for this repo format.
            checker._report_items.append(
                'unsupported key type chk_bytes for %s' % (record.key,))
        elif kind == 'texts':
            self._check_text(record, checker, item_data)
        else:
            checker._report_items.append(
                'unknown key type %s for %s' % (kind, record.key))

    def _check_text(self, record, checker, item_data):
        """Check a single text."""
        # Check it is extractable.
        # TODO: check length.
        if record.storage_kind == 'chunked':
            chunks = record.get_bytes_as(record.storage_kind)
            sha1 = osutils.sha_strings(chunks)
            length = sum(map(len, chunks))
        else:
            content = record.get_bytes_as('fulltext')
            sha1 = osutils.sha_string(content)
            length = len(content)
        if item_data and sha1 != item_data[1]:
            checker._report_items.append(
                'sha1 mismatch: %s has sha1 %s expected %s referenced by %s' %
                (record.key, sha1, item_data[1], item_data[2]))

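    # Illustrative sketch (not part of bzrlib): the chunked and fulltext
    # branches above yield the same digest, because hashing chunks one after
    # another is equivalent to hashing their concatenation. All names below
    # are hypothetical example values.
    #
    #   import hashlib
    #   chunks = ['first chunk\n', 'second chunk\n']
    #   incremental = hashlib.sha1()
    #   for chunk in chunks:
    #       incremental.update(chunk)
    #   assert (incremental.hexdigest() ==
    #           hashlib.sha1(''.join(chunks)).hexdigest())
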
    @staticmethod
    def create(a_bzrdir):
        """Construct the current default format repository in a_bzrdir."""
        return RepositoryFormat.get_default_format().initialize(a_bzrdir)

    def __init__(self, _format, a_bzrdir, control_files):
        """instantiate a Repository.

        :param _format: The format of the repository on disk.
        :param a_bzrdir: The BzrDir of the repository.
        """
        # In the future we will have a single api for all stores for
        # getting file texts, inventories and revisions, then
        # this construct will accept instances of those things.
        super(Repository, self).__init__()
        self._format = _format
        # the following are part of the public API for Repository:
        self.bzrdir = a_bzrdir
        self.control_files = control_files
        self._transport = control_files._transport
        self.base = self._transport.base
        # for tests
        self._reconcile_does_inventory_gc = True
        self._reconcile_fixes_text_parents = False
        self._reconcile_backsup_inventory = True
        self._write_group = None
        # Additional places to query for data.
        self._fallback_repositories = []
        # An InventoryEntry cache, used during deserialization
        self._inventory_entry_cache = fifo_cache.FIFOCache(10*1024)
        # Is it safe to return inventory entries directly from the entry cache,
        # rather than copying them?
        self._safe_to_return_from_cache = False

    @property
    def user_transport(self):
        return self.bzrdir.user_transport

    @property
    def control_transport(self):
        return self._transport

    def __repr__(self):
        if self._fallback_repositories:
            return '%s(%r, fallback_repositories=%r)' % (
                self.__class__.__name__,
                self.base,
                self._fallback_repositories)
        else:
            return '%s(%r)' % (self.__class__.__name__,
                               self.base)

    def _has_same_fallbacks(self, other_repo):
        """Returns true if the repositories have the same fallbacks."""
        my_fb = self._fallback_repositories
        other_fb = other_repo._fallback_repositories
        if len(my_fb) != len(other_fb):
            return False
        for f, g in zip(my_fb, other_fb):
            if not f.has_same_location(g):
                return False
        return True

    def has_same_location(self, other):
        """Returns a boolean indicating if this repository is at the same
        location as another repository.

        This might return False even when two repository objects are accessing
        the same physical repository via different URLs.
        """
        if self.__class__ is not other.__class__:
            return False
        return (self._transport.base == other._transport.base)

    def is_in_write_group(self):
        """Return True if there is an open write group.

        :seealso: start_write_group.
        """
        return self._write_group is not None

    def is_locked(self):
        return self.control_files.is_locked()

    def is_write_locked(self):
        """Return True if this object is write locked."""
        return self.is_locked() and self.control_files._lock_mode == 'w'

    def lock_write(self, token=None):
        """Lock this repository for writing.

        This causes caching within the repository object to start accumulating
        data during reads, and allows a 'write_group' to be obtained. Write
        groups must be used for actual data insertion.

        :param token: if this is already locked, then lock_write will fail
            unless the token matches the existing lock.
        :returns: a token if this instance supports tokens, otherwise None.
        :raises TokenLockingNotSupported: when a token is given but this
            instance doesn't support using token locks.
        :raises MismatchedToken: if the specified token doesn't match the token
            of the existing lock.
        :seealso: start_write_group.

        A token should be passed in if you know that you have locked the object
        some other way, and need to synchronise this object's state with that
        fact.

        XXX: this docstring is duplicated in many places, e.g. lockable_files.py
        """
        locked = self.is_locked()
        result = self.control_files.lock_write(token=token)
        if not locked:
            self._warn_if_deprecated()
            self._note_lock('w')
            for repo in self._fallback_repositories:
                # Writes don't affect fallback repos
                repo.lock_read()
            self._refresh_data()
        return result

    def lock_read(self):
        locked = self.is_locked()
        self.control_files.lock_read()
        if not locked:
            self._warn_if_deprecated()
            self._note_lock('r')
            for repo in self._fallback_repositories:
                repo.lock_read()
            self._refresh_data()

    def get_physical_lock_status(self):
        return self.control_files.get_physical_lock_status()

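    # Illustrative usage sketch (not part of bzrlib): every lock_read() or
    # lock_write() call is expected to be balanced by unlock(), typically via
    # try/finally. 'repo' is an assumed, already-opened Repository object.
    #
    #   repo.lock_read()
    #   try:
    #       pass  # read revisions, inventories, texts ...
    #   finally:
    #       repo.unlock()
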
2018.5.75
by Andrew Bennetts
 Add Repository.{dont_,}leave_lock_in_place.  | 
1418  | 
def leave_lock_in_place(self):  | 
1419  | 
"""Tell this repository not to release the physical lock when this  | 
|
1420  | 
        object is unlocked.
 | 
|
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
1421  | 
|
| 
2018.5.76
by Andrew Bennetts
 Testing that repository.{dont_,}leave_lock_in_place raises NotImplementedError if lock_write returns None.  | 
1422  | 
        If lock_write doesn't return a token, then this method is not supported.
 | 
| 
2018.5.75
by Andrew Bennetts
 Add Repository.{dont_,}leave_lock_in_place.  | 
1423  | 
        """
 | 
1424  | 
self.control_files.leave_in_place()  | 
|
1425  | 
||
1426  | 
def dont_leave_lock_in_place(self):  | 
|
1427  | 
"""Tell this repository to release the physical lock when this  | 
|
1428  | 
        object is unlocked, even if it didn't originally acquire it.
 | 
|
| 
2018.5.76
by Andrew Bennetts
 Testing that repository.{dont_,}leave_lock_in_place raises NotImplementedError if lock_write returns None.  | 
1429  | 
|
1430  | 
        If lock_write doesn't return a token, then this method is not supported.
 | 
|
| 
2018.5.75
by Andrew Bennetts
 Add Repository.{dont_,}leave_lock_in_place.  | 
1431  | 
        """
 | 
1432  | 
self.control_files.dont_leave_in_place()  | 
|
1433  | 
||
| 
1534.4.50
by Robert Collins
 Got the bzrdir api straightened out, plenty of refactoring to use it pending, but the api is up and running.  | 
1434  | 
    @needs_read_lock
 | 
| 
2258.1.2
by Robert Collins
 New version of gather_stats which gathers aggregate data too.  | 
1435  | 
def gather_stats(self, revid=None, committers=None):  | 
| 
2258.1.1
by Robert Collins
 Move info branch statistics gathering into the repository to allow smart server optimisation (Robert Collins).  | 
1436  | 
"""Gather statistics from a revision id.  | 
1437  | 
||
| 
2258.1.2
by Robert Collins
 New version of gather_stats which gathers aggregate data too.  | 
1438  | 
        :param revid: The revision id to gather statistics from, if None, then
 | 
1439  | 
            no revision specific statistics are gathered.
 | 
|
| 
2258.1.1
by Robert Collins
 Move info branch statistics gathering into the repository to allow smart server optimisation (Robert Collins).  | 
1440  | 
        :param committers: Optional parameter controlling whether to grab
 | 
| 
2258.1.2
by Robert Collins
 New version of gather_stats which gathers aggregate data too.  | 
1441  | 
            a count of committers from the revision specific statistics.
 | 
| 
2258.1.1
by Robert Collins
 Move info branch statistics gathering into the repository to allow smart server optimisation (Robert Collins).  | 
1442  | 
        :return: A dictionary of statistics. Currently this contains:
 | 
1443  | 
            committers: The number of committers if requested.
 | 
|
1444  | 
            firstrev: A tuple with timestamp, timezone for the penultimate left
 | 
|
1445  | 
                most ancestor of revid, if revid is not the NULL_REVISION.
 | 
|
1446  | 
            latestrev: A tuple with timestamp, timezone for revid, if revid is
 | 
|
1447  | 
                not the NULL_REVISION.
 | 
|
| 
2258.1.2
by Robert Collins
 New version of gather_stats which gathers aggregate data too.  | 
1448  | 
            revisions: The total revision count in the repository.
 | 
1449  | 
            size: An estimate disk size of the repository in bytes.
 | 
|
| 
2258.1.1
by Robert Collins
 Move info branch statistics gathering into the repository to allow smart server optimisation (Robert Collins).  | 
1450  | 
        """
 | 
1451  | 
result = {}  | 
|
| 
2258.1.2
by Robert Collins
 New version of gather_stats which gathers aggregate data too.  | 
1452  | 
if revid and committers:  | 
| 
2258.1.1
by Robert Collins
 Move info branch statistics gathering into the repository to allow smart server optimisation (Robert Collins).  | 
1453  | 
result['committers'] = 0  | 
| 
2258.1.2
by Robert Collins
 New version of gather_stats which gathers aggregate data too.  | 
1454  | 
if revid and revid != _mod_revision.NULL_REVISION:  | 
1455  | 
if committers:  | 
|
1456  | 
all_committers = set()  | 
|
1457  | 
revisions = self.get_ancestry(revid)  | 
|
1458  | 
            # pop the leading None
 | 
|
1459  | 
revisions.pop(0)  | 
|
1460  | 
first_revision = None  | 
|
1461  | 
if not committers:  | 
|
1462  | 
                # ignore the revisions in the middle - just grab first and last
 | 
|
1463  | 
revisions = revisions[0], revisions[-1]  | 
|
1464  | 
for revision in self.get_revisions(revisions):  | 
|
1465  | 
if not first_revision:  | 
|
1466  | 
first_revision = revision  | 
|
1467  | 
if committers:  | 
|
1468  | 
all_committers.add(revision.committer)  | 
|
1469  | 
last_revision = revision  | 
|
1470  | 
if committers:  | 
|
1471  | 
result['committers'] = len(all_committers)  | 
|
1472  | 
result['firstrev'] = (first_revision.timestamp,  | 
|
1473  | 
first_revision.timezone)  | 
|
1474  | 
result['latestrev'] = (last_revision.timestamp,  | 
|
1475  | 
last_revision.timezone)  | 
|
1476  | 
||
1477  | 
        # now gather global repository information
 | 
|
| 
3350.6.4
by Robert Collins
 First cut at pluralised VersionedFiles. Some rather massive API incompatabilities, primarily because of the difficulty of coherence among competing stores.  | 
1478  | 
        # XXX: This is available for many repos regardless of listability.
 | 
| 
5158.6.10
by Martin Pool
 Update more code to use user_transport when it should  | 
1479  | 
if self.user_transport.listable():  | 
| 
3350.6.4
by Robert Collins
 First cut at pluralised VersionedFiles. Some rather massive API incompatabilities, primarily because of the difficulty of coherence among competing stores.  | 
1480  | 
            # XXX: do we want to __define len__() ?
 | 
| 
3350.6.10
by Martin Pool
 VersionedFiles review cleanups  | 
1481  | 
            # Maybe the versionedfiles object should provide a different
 | 
1482  | 
            # method to get the number of keys.
 | 
|
| 
3350.6.4
by Robert Collins
 First cut at pluralised VersionedFiles. Some rather massive API incompatabilities, primarily because of the difficulty of coherence among competing stores.  | 
1483  | 
result['revisions'] = len(self.revisions.keys())  | 
1484  | 
            # result['size'] = t
 | 
|
| 
2258.1.1
by Robert Collins
 Move info branch statistics gathering into the repository to allow smart server optimisation (Robert Collins).  | 
1485  | 
return result  | 
1486  | 
||
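    # Illustrative sketch (not part of bzrlib): reading the dictionary
    # returned by gather_stats(). Keys are only present when the relevant
    # data was requested or could be collected; 'repo' and 'tip_revid' are
    # assumed names.
    #
    #   stats = repo.gather_stats(revid=tip_revid, committers=True)
    #   if 'revisions' in stats:
    #       print 'revision count:', stats['revisions']
    #   if 'firstrev' in stats:
    #       timestamp, timezone = stats['firstrev']
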
    def find_branches(self, using=False):
        """Find branches underneath this repository.

        This will include branches inside other branches.

        :param using: If True, list only branches using this repository.
        """
        if using and not self.is_shared():
            return self.bzrdir.list_branches()
        class Evaluator(object):

            def __init__(self):
                self.first_call = True

            def __call__(self, bzrdir):
                # On the first call, the parameter is always the bzrdir
                # containing the current repo.
                if not self.first_call:
                    try:
                        repository = bzrdir.open_repository()
                    except errors.NoRepositoryPresent:
                        pass
                    else:
                        return False, ([], repository)
                self.first_call = False
                value = (bzrdir.list_branches(), None)
                return True, value

        ret = []
        for branches, repository in bzrdir.BzrDir.find_bzrdirs(
                self.user_transport, evaluate=Evaluator()):
            if branches is not None:
                ret.extend(branches)
            if not using and repository is not None:
                ret.extend(repository.find_branches())
        return ret

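    # Illustrative sketch (not part of bzrlib): listing the branches stored
    # in (or, with using=True, only those using) this repository. 'repo' is
    # an assumed Repository object.
    #
    #   for branch in repo.find_branches(using=True):
    #       print branch.base
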
    @needs_read_lock
    def search_missing_revision_ids(self, other, revision_id=None, find_ghosts=True):
        """Return the revision ids that other has that this does not.

        These are returned in topological order.

        revision_id: only return revision ids included by revision_id.
        """
        return InterRepository.get(other, self).search_missing_revision_ids(
            revision_id, find_ghosts)

    @staticmethod
    def open(base):
        """Open the repository rooted at base.

        For instance, if the repository is at URL/.bzr/repository,
        Repository.open(URL) -> a Repository instance.
        """
        control = bzrdir.BzrDir.open(base)
        return control.open_repository()

    def copy_content_into(self, destination, revision_id=None):
        """Make a complete copy of the content in self into destination.

        This is a destructive operation! Do not use it on existing
        repositories.
        """
        return InterRepository.get(self, destination).copy_content(revision_id)

    def commit_write_group(self):
        """Commit the contents accrued within the current write group.

        :seealso: start_write_group.

        :return: it may return an opaque hint that can be passed to 'pack'.
        """
        if self._write_group is not self.get_transaction():
            # has an unlock or relock occurred?
            raise errors.BzrError('mismatched lock context %r and '
                'write group %r.' %
                (self.get_transaction(), self._write_group))
        result = self._commit_write_group()
        self._write_group = None
        return result

    def _commit_write_group(self):
        """Template method for per-repository write group cleanup.

        This is called before the write group is considered to be
        finished and should ensure that all data handed to the repository
        for writing during the write group is safely committed (to the
        extent possible considering file system caching etc).
        """

    def suspend_write_group(self):
        raise errors.UnsuspendableWriteGroup(self)

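    # Illustrative usage sketch (not part of bzrlib): the write group
    # lifecycle around direct data insertion, matching the docstrings above.
    # 'repo' is an assumed Repository that supports write locks.
    #
    #   repo.lock_write()
    #   try:
    #       repo.start_write_group()
    #       try:
    #           pass  # insert texts, inventories and revisions here
    #       except:
    #           repo.abort_write_group()
    #           raise
    #       else:
    #           repo.commit_write_group()
    #   finally:
    #       repo.unlock()
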
    def get_missing_parent_inventories(self, check_for_missing_texts=True):
        """Return the keys of missing inventory parents for revisions added in
        this write group.

        A revision is not complete if the inventory delta for that revision
        cannot be calculated.  Therefore if the parent inventories of a
        revision are not present, the revision is incomplete, and e.g. cannot
        be streamed by a smart server.  This method finds missing inventory
        parents for revisions added in this write group.
        """
        if not self._format.supports_external_lookups:
            # This is only an issue for stacked repositories
            return set()
        if not self.is_in_write_group():
            raise AssertionError('not in a write group')

        # XXX: We assume that every added revision already has its
        # corresponding inventory, so we only check for parent inventories that
        # might be missing, rather than all inventories.
        parents = set(self.revisions._index.get_missing_parents())
        parents.discard(_mod_revision.NULL_REVISION)
        unstacked_inventories = self.inventories._index
        present_inventories = unstacked_inventories.get_parent_map(
            key[-1:] for key in parents)
        parents.difference_update(present_inventories)
        if len(parents) == 0:
            # No missing parent inventories.
            return set()
        if not check_for_missing_texts:
            return set(('inventories', rev_id) for (rev_id,) in parents)
        # Ok, now we have a list of missing inventories.  But these only matter
        # if the inventories that reference them are missing some texts they
        # appear to introduce.
        # XXX: Texts referenced by all added inventories need to be present,
        # but at the moment we're only checking for texts referenced by
        # inventories at the graph's edge.
        key_deps = self.revisions._index._key_dependencies
        key_deps.satisfy_refs_for_keys(present_inventories)
        referrers = frozenset(r[0] for r in key_deps.get_referrers())
        file_ids = self.fileids_altered_by_revision_ids(referrers)
        missing_texts = set()
        for file_id, version_ids in file_ids.iteritems():
            missing_texts.update(
                (file_id, version_id) for version_id in version_ids)
        present_texts = self.texts.get_parent_map(missing_texts)
        missing_texts.difference_update(present_texts)
        if not missing_texts:
            # No texts are missing, so all revisions and their deltas are
            # reconstructable.
            return set()
        # Alternatively the text versions could be returned as the missing
        # keys, but this is likely to be less data.
        missing_keys = set(('inventories', rev_id) for (rev_id,) in parents)
        return missing_keys

    def refresh_data(self):
        """Re-read any data needed to synchronise with disk.

        This method is intended to be called after another repository instance
        (such as one used by a smart server) has inserted data into the
        repository. It may not be called during a write group, but may be
        called at any other time.
        """
        if self.is_in_write_group():
            raise errors.InternalBzrError(
                "May not refresh_data while in a write group.")
        self._refresh_data()

    def resume_write_group(self, tokens):
        if not self.is_write_locked():
            raise errors.NotWriteLocked(self)
        if self._write_group:
            raise errors.BzrError('already in a write group')
        self._resume_write_group(tokens)
        # so we can detect unlock/relock - the write group is now entered.
        self._write_group = self.get_transaction()

    def _resume_write_group(self, tokens):
        raise errors.UnsuspendableWriteGroup(self)

    def fetch(self, source, revision_id=None, pb=None, find_ghosts=False,
            fetch_spec=None):
        """Fetch the content required to construct revision_id from source.

        If revision_id is None and fetch_spec is None, then all content is
        copied.

        fetch() may not be used when the repository is in a write group -
        either finish the current write group before using fetch, or use
        fetch before starting the write group.

        :param find_ghosts: Find and copy revisions in the source that are
            ghosts in the target (and not reachable directly by walking out to
            the first-present revision in target from revision_id).
        :param revision_id: If specified, all the content needed for this
            revision ID will be copied to the target.  Fetch will determine for
            itself which content needs to be copied.
        :param fetch_spec: If specified, a SearchResult or
            PendingAncestryResult that describes which revisions to copy.  This
            allows copying multiple heads at once.  Mutually exclusive with
            revision_id.
        """
        if fetch_spec is not None and revision_id is not None:
            raise AssertionError(
                "fetch_spec and revision_id are mutually exclusive.")
        if self.is_in_write_group():
            raise errors.InternalBzrError(
                "May not fetch while in a write group.")
        # fast path same-url fetch operations
        # TODO: lift out to somewhere common with RemoteRepository
        # <https://bugs.edge.launchpad.net/bzr/+bug/401646>
        if (self.has_same_location(source)
            and fetch_spec is None
            and self._has_same_fallbacks(source)):
            # check that last_revision is in 'from' and then return a
            # no-operation.
            if (revision_id is not None and
                not _mod_revision.is_null(revision_id)):
                self.get_revision(revision_id)
            return 0, []
        # if there is no specific appropriate InterRepository, this will get
        # the InterRepository base class, which raises an
        # IncompatibleRepositories when asked to fetch.
        inter = InterRepository.get(source, self)
        return inter.fetch(revision_id=revision_id, pb=pb,
            find_ghosts=find_ghosts, fetch_spec=fetch_spec)

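    # Illustrative sketch (not part of bzrlib): copying one revision and its
    # ancestry from another repository object. 'target_repo', 'source_repo'
    # and 'revid' are assumed names.
    #
    #   target_repo.lock_write()
    #   try:
    #       target_repo.fetch(source_repo, revision_id=revid)
    #   finally:
    #       target_repo.unlock()
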
    def create_bundle(self, target, base, fileobj, format=None):
        return serializer.write_bundle(self, target, base, fileobj, format)

    def get_commit_builder(self, branch, parents, config, timestamp=None,
                           timezone=None, committer=None, revprops=None,
                           revision_id=None):
        """Obtain a CommitBuilder for this repository.

        :param branch: Branch to commit to.
        :param parents: Revision ids of the parents of the new revision.
        :param config: Configuration to use.
        :param timestamp: Optional timestamp recorded for commit.
        :param timezone: Optional timezone for timestamp.
        :param committer: Optional committer to set for commit.
        :param revprops: Optional dictionary of revision properties.
        :param revision_id: Optional revision id.
        """
        if self._fallback_repositories:
            raise errors.BzrError("Cannot commit from a lightweight checkout "
                "to a stacked branch. See "
                "https://bugs.launchpad.net/bzr/+bug/375013 for details.")
        result = self._commit_builder_class(self, parents, config,
            timestamp, timezone, committer, revprops, revision_id)
        self.start_write_group()
        return result

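    # Illustrative sketch (not part of bzrlib): get_commit_builder() starts a
    # write group, so the caller must later end it with commit_write_group()
    # or abort_write_group(). 'repo', 'branch', 'cfg' and 'parent_ids' are
    # assumed objects; real commits are normally driven by bzrlib's commit
    # machinery rather than by hand.
    #
    #   builder = repo.get_commit_builder(branch, parent_ids, cfg,
    #       committer='Jane Doe <jane@example.com>')
    #   try:
    #       pass  # record the tree contents through the builder here
    #   except:
    #       repo.abort_write_group()
    #       raise
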
    @only_raises(errors.LockNotHeld, errors.LockBroken)
    def unlock(self):
        if (self.control_files._lock_count == 1 and
            self.control_files._lock_mode == 'w'):
            if self._write_group is not None:
                self.abort_write_group()
                self.control_files.unlock()
                raise errors.BzrError(
                    'Must end write groups before releasing write locks.')
        self.control_files.unlock()
        if self.control_files._lock_count == 0:
            self._inventory_entry_cache.clear()
            for repo in self._fallback_repositories:
                repo.unlock()

    @needs_read_lock
    def clone(self, a_bzrdir, revision_id=None):
        """Clone this repository into a_bzrdir using the current format.

        Currently no check is made that the format of this repository and
        the bzrdir format are compatible. FIXME RBC 20060201.

        :return: The newly created destination repository.
        """
        # TODO: deprecate after 0.16; cloning this with all its settings is
        # probably not very useful -- mbp 20070423
        dest_repo = self._create_sprouting_repo(a_bzrdir, shared=self.is_shared())
        self.copy_content_into(dest_repo, revision_id)
        return dest_repo

    def start_write_group(self):
        """Start a write group in the repository.

        Write groups are used by repositories which do not have a 1:1 mapping
        between file ids and backend store to manage the insertion of data from
        both fetch and commit operations.

        A write lock is required around the start_write_group/commit_write_group
        for the support of lock-requiring repository formats.

        One can only insert data into a repository inside a write group.

        :return: None.
        """
        if not self.is_write_locked():
            raise errors.NotWriteLocked(self)
        if self._write_group:
            raise errors.BzrError('already in a write group')
        self._start_write_group()
        # so we can detect unlock/relock - the write group is now entered.
        self._write_group = self.get_transaction()

    def _start_write_group(self):
        """Template method for per-repository write group startup.

        This is called before the write group is considered to be
        entered.
        """

    @needs_read_lock
    def sprout(self, to_bzrdir, revision_id=None):
        """Create a descendent repository for new development.

        Unlike clone, this does not copy the settings of the repository.
        """
        dest_repo = self._create_sprouting_repo(to_bzrdir, shared=False)
        dest_repo.fetch(self, revision_id=revision_id)
        return dest_repo

    def _create_sprouting_repo(self, a_bzrdir, shared):
        if not isinstance(a_bzrdir._format, self.bzrdir._format.__class__):
            # use target default format.
            dest_repo = a_bzrdir.create_repository()
        else:
            # Most control formats need the repository to be specifically
            # created, but on some old all-in-one formats it's not needed
            try:
                dest_repo = self._format.initialize(a_bzrdir, shared=shared)
            except errors.UninitializableFormat:
                dest_repo = a_bzrdir.open_repository()
        return dest_repo

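    # Illustrative sketch (not part of bzrlib): clone() and sprout() both go
    # through _create_sprouting_repo() above; clone() passes on whether the
    # repository is shared, while sprout() always creates an unshared one.
    # 'repo', 'target_bzrdir' and 'revid' are assumed names.
    #
    #   new_repo = repo.sprout(target_bzrdir, revision_id=revid)
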
    def _get_sink(self):
        """Return a sink for streaming into this repository."""
        return StreamSink(self)

    def _get_source(self, to_format):
        """Return a source for streaming from this repository."""
        return StreamSource(self, to_format)

    @needs_read_lock
    def has_revision(self, revision_id):
        """True if this repository has a copy of the revision."""
        return revision_id in self.has_revisions((revision_id,))

    @needs_read_lock
    def has_revisions(self, revision_ids):
        """Probe to find out the presence of multiple revisions.

        :param revision_ids: An iterable of revision_ids.
        :return: A set of the revision_ids that were present.
        """
        parent_map = self.revisions.get_parent_map(
            [(rev_id,) for rev_id in revision_ids])
        result = set()
        if _mod_revision.NULL_REVISION in revision_ids:
            result.add(_mod_revision.NULL_REVISION)
        result.update([key[0] for key in parent_map])
        return result

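    # Illustrative sketch (not part of bzrlib): probing several revision ids
    # at once and computing which ones are absent. Names are assumed.
    #
    #   wanted = set([revid_a, revid_b, revid_c])
    #   present = repo.has_revisions(wanted)
    #   missing = wanted - present
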
1185.65.27
by Robert Collins
 Tweak storage towards mergability.  | 
1844  | 
    @needs_read_lock
 | 
| 
2850.3.1
by Robert Collins
 Move various weave specific code out of the base Repository class to weaverepo.py.  | 
1845  | 
def get_revision(self, revision_id):  | 
1846  | 
"""Return the Revision object for a named revision."""  | 
|
1847  | 
return self.get_revisions([revision_id])[0]  | 
|
1848  | 
||
1849  | 
    @needs_read_lock
 | 
|
| 
1570.1.13
by Robert Collins
 Check for incorrect revision parentage in the weave during revision access.  | 
1850  | 
def get_revision_reconcile(self, revision_id):  | 
1851  | 
"""'reconcile' helper routine that allows access to a revision always.  | 
|
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
1852  | 
|
| 
1570.1.13
by Robert Collins
 Check for incorrect revision parentage in the weave during revision access.  | 
1853  | 
        This variant of get_revision does not cross check the weave graph
 | 
1854  | 
        against the revision one as get_revision does: but it should only
 | 
|
1855  | 
        be used by reconcile, or reconcile-alike commands that are correcting
 | 
|
1856  | 
        or testing the revision graph.
 | 
|
1857  | 
        """
 | 
|
| 
2850.3.1
by Robert Collins
 Move various weave specific code out of the base Repository class to weaverepo.py.  | 
1858  | 
return self._get_revisions([revision_id])[0]  | 
| 
2249.5.13
by John Arbash Meinel
 Finish auditing Repository, and fix generate_ids to always generate utf8 ids.  | 
1859  | 
|
| 
1756.1.2
by Aaron Bentley
 Show logs using get_revisions  | 
1860  | 
    @needs_read_lock
 | 
1861  | 
def get_revisions(self, revision_ids):  | 
|
| 
4332.3.16
by Robert Collins
 Refactor Repository._find_inconsistent_revision_parents and Repository.get_revisions to a new Repository._iter_revisions which is kinder on memory without needing code duplication.  | 
1862  | 
"""Get many revisions at once.  | 
1863  | 
        
 | 
|
1864  | 
        Repositories that need to check data on every revision read should 
 | 
|
1865  | 
        subclass this method.
 | 
|
1866  | 
        """
 | 
|
| 
2850.3.1
by Robert Collins
 Move various weave specific code out of the base Repository class to weaverepo.py.  | 
1867  | 
return self._get_revisions(revision_ids)  | 
1868  | 
||
1869  | 
    @needs_read_lock
 | 
|
1870  | 
def _get_revisions(self, revision_ids):  | 
|
1871  | 
"""Core work logic to get many revisions without sanity checks."""  | 
|
| 
4332.3.16
by Robert Collins
 Refactor Repository._find_inconsistent_revision_parents and Repository.get_revisions to a new Repository._iter_revisions which is kinder on memory without needing code duplication.  | 
1872  | 
revs = {}  | 
1873  | 
for revid, rev in self._iter_revisions(revision_ids):  | 
|
1874  | 
if rev is None:  | 
|
1875  | 
raise errors.NoSuchRevision(self, revid)  | 
|
1876  | 
revs[revid] = rev  | 
|
1877  | 
return [revs[revid] for revid in revision_ids]  | 
|
1878  | 
||
1879  | 
def _iter_revisions(self, revision_ids):  | 
|
1880  | 
"""Iterate over revision objects.  | 
|
1881  | 
||
1882  | 
        :param revision_ids: An iterable of revisions to examine. None may be
 | 
|
1883  | 
            passed to request all revisions known to the repository. Note that
 | 
|
1884  | 
            not all repositories can find unreferenced revisions; for those
 | 
|
1885  | 
            repositories only referenced ones will be returned.
 | 
|
1886  | 
        :return: An iterator of (revid, revision) tuples. Absent revisions (
 | 
|
1887  | 
            those asked for but not available) are returned as (revid, None).
 | 
|
1888  | 
        """
 | 
|
1889  | 
if revision_ids is None:  | 
|
1890  | 
revision_ids = self.all_revision_ids()  | 
|
1891  | 
else:  | 
|
1892  | 
for rev_id in revision_ids:  | 
|
1893  | 
if not rev_id or not isinstance(rev_id, basestring):  | 
|
1894  | 
raise errors.InvalidRevisionId(revision_id=rev_id, branch=self)  | 
|
| 
3350.6.4
by Robert Collins
 First cut at pluralised VersionedFiles. Some rather massive API incompatabilities, primarily because of the difficulty of coherence among competing stores.  | 
1895  | 
keys = [(key,) for key in revision_ids]  | 
1896  | 
stream = self.revisions.get_record_stream(keys, 'unordered', True)  | 
|
1897  | 
for record in stream:  | 
|
| 
4332.3.16
by Robert Collins
 Refactor Repository._find_inconsistent_revision_parents and Repository.get_revisions to a new Repository._iter_revisions which is kinder on memory without needing code duplication.  | 
1898  | 
revid = record.key[0]  | 
| 
3350.6.4
by Robert Collins
 First cut at pluralised VersionedFiles. Some rather massive API incompatabilities, primarily because of the difficulty of coherence among competing stores.  | 
1899  | 
if record.storage_kind == 'absent':  | 
| 
4332.3.16
by Robert Collins
 Refactor Repository._find_inconsistent_revision_parents and Repository.get_revisions to a new Repository._iter_revisions which is kinder on memory without needing code duplication.  | 
1900  | 
yield (revid, None)  | 
1901  | 
else:  | 
|
1902  | 
text = record.get_bytes_as('fulltext')  | 
|
1903  | 
rev = self._serializer.read_revision_from_string(text)  | 
|
1904  | 
yield (revid, rev)  | 
|
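
A short sketch contrasting the two entry points above, assuming `repo` is a read-locked Repository and the ids are illustrative: get_revisions() raises NoSuchRevision for an absent id, while _iter_revisions() reports it as (revid, None).

from bzrlib import errors

try:
    revs = repo.get_revisions(['rev-a', 'rev-b'])
except errors.NoSuchRevision:
    revs = []   # at least one requested id was absent
for rev in revs:
    # Revision objects carry message, committer, timestamp and parent_ids.
    print rev.revision_id, rev.parent_ids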

    def get_deltas_for_revisions(self, revisions, specific_fileids=None):
        """Produce a generator of revision deltas.

        Note that the input is a sequence of REVISIONS, not revision_ids.
        Trees will be held in memory until the generator exits.
        Each delta is relative to the revision's lefthand predecessor.

        :param specific_fileids: if not None, the result is filtered
          so that only those file-ids, their parents and their
          children are included.
        """
        # Get the revision-ids of interest
        required_trees = set()
        for revision in revisions:
            required_trees.add(revision.revision_id)
            required_trees.update(revision.parent_ids[:1])

        # Get the matching filtered trees. Note that it's more
        # efficient to pass filtered trees to changes_from() rather
        # than doing the filtering afterwards. changes_from() could
        # arguably do the filtering itself but it's path-based, not
        # file-id based, so filtering before or afterwards is
        # currently easier.
        if specific_fileids is None:
            trees = dict((t.get_revision_id(), t) for
                t in self.revision_trees(required_trees))
        else:
            trees = dict((t.get_revision_id(), t) for
                t in self._filtered_revision_trees(required_trees,
                specific_fileids))

        # Calculate the deltas
        for revision in revisions:
            if not revision.parent_ids:
                old_tree = self.revision_tree(_mod_revision.NULL_REVISION)
            else:
                old_tree = trees[revision.parent_ids[0]]
            yield trees[revision.revision_id].changes_from(old_tree)

    @needs_read_lock
    def get_revision_delta(self, revision_id, specific_fileids=None):
        """Return the delta for one revision.

        The delta is relative to the left-hand predecessor of the
        revision.

        :param specific_fileids: if not None, the result is filtered
          so that only those file-ids, their parents and their
          children are included.
        """
        r = self.get_revision(revision_id)
        return list(self.get_deltas_for_revisions([r],
            specific_fileids=specific_fileids))[0]
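
A sketch of the two delta entry points above, assuming `repo` and the ids; the yielded objects are tree deltas (the result of Tree.changes_from), whose added/removed/modified attributes describe the change against the lefthand parent.

delta = repo.get_revision_delta('rev-b')
print delta.added       # entries introduced by 'rev-b'

revs = repo.get_revisions(['rev-a', 'rev-b'])
for d in repo.get_deltas_for_revisions(revs):
    print d.modified    # one delta per requested revision, in order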

    @needs_write_lock
    def store_revision_signature(self, gpg_strategy, plaintext, revision_id):
        signature = gpg_strategy.sign(plaintext)
        self.add_signature_text(revision_id, signature)

    @needs_write_lock
    def add_signature_text(self, revision_id, signature):
        self.signatures.add_lines((revision_id,), (),
            osutils.split_lines(signature))
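
A hedged sketch of the signing flow, assuming a GPG strategy from bzrlib.gpg (LoopbackGPGStrategy merely wraps the text, which keeps the example self-contained) and an illustrative revision id; pack-based formats additionally expect the write to happen inside a write group.

from bzrlib import gpg

strategy = gpg.LoopbackGPGStrategy(None)   # assumption: test strategy, no real gpg
repo.lock_write()
try:
    repo.start_write_group()
    repo.store_revision_signature(strategy, 'plaintext to sign', 'rev-a')
    repo.commit_write_group()
finally:
    repo.unlock()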

    def find_text_key_references(self):
        """Find the text key references within the repository.

        :return: A dictionary mapping text keys ((fileid, revision_id) tuples)
            to whether they were referred to by the inventory of the
            revision_id that they contain. The inventory texts from all present
            revision ids are assessed to generate this report.
        """
        revision_keys = self.revisions.keys()
        w = self.inventories
        pb = ui.ui_factory.nested_progress_bar()
        try:
            return self._find_text_key_references_from_xml_inventory_lines(
                w.iter_lines_added_or_present_in_keys(revision_keys, pb=pb))
        finally:
            pb.finished()

    def _find_text_key_references_from_xml_inventory_lines(self,
        line_iterator):
        """Core routine for extracting references to texts from inventories.

        This performs the translation of xml lines to revision ids.

        :param line_iterator: An iterator of lines, origin_version_id
        :return: A dictionary mapping text keys ((fileid, revision_id) tuples)
            to whether they were referred to by the inventory of the
            revision_id that they contain. Note that if that revision_id was
            not part of the line_iterator's output then False will be given -
            even though it may actually refer to that key.
        """
        if not self._serializer.support_altered_by_hack:
            raise AssertionError(
                "_find_text_key_references_from_xml_inventory_lines only "
                "supported for branches which store inventory as unnested xml"
                ", not on %r" % self)
        result = {}

        # this code needs to read every new line in every inventory for the
        # inventories [revision_ids]. Seeing a line twice is ok. Seeing a line
        # not present in one of those inventories is unnecessary but not
        # harmful because we are filtering by the revision id marker in the
        # inventory lines : we only select file ids altered in one of those
        # revisions. We don't need to see all lines in the inventory because
        # only those added in an inventory in rev X can contain a revision=X
        # line.
        unescape_revid_cache = {}
        unescape_fileid_cache = {}

        # jam 20061218 In a big fetch, this handles hundreds of thousands
        # of lines, so it has had a lot of inlining and optimizing done.
        # Sorry that it is a little bit messy.
        # Move several functions to be local variables, since this is a long
        # running loop.
        search = self._file_ids_altered_regex.search
        unescape = _unescape_xml
        setdefault = result.setdefault
        for line, line_key in line_iterator:
            match = search(line)
            if match is None:
                continue
            # One call to match.group() returning multiple items is quite a
            # bit faster than 2 calls to match.group() each returning 1
            file_id, revision_id = match.group('file_id', 'revision_id')

            # Inlining the cache lookups helps a lot when you make 170,000
            # lines and 350k ids, versus 8.4 unique ids.
            # Using a cache helps in 2 ways:
            #   1) Avoids unnecessary decoding calls
            #   2) Re-uses cached strings, which helps in future set and
            #      equality checks.
            # (2) is enough that removing encoding entirely along with
            # the cache (so we are using plain strings) results in no
            # performance improvement.
            try:
                revision_id = unescape_revid_cache[revision_id]
            except KeyError:
                unescaped = unescape(revision_id)
                unescape_revid_cache[revision_id] = unescaped
                revision_id = unescaped

            # Note that unconditionally unescaping means that we deserialise
            # every fileid, which for general 'pull' is not great, but we don't
            # really want to have so many fulltexts that this matters anyway.
            # RBC 20071114.
            try:
                file_id = unescape_fileid_cache[file_id]
            except KeyError:
                unescaped = unescape(file_id)
                unescape_fileid_cache[file_id] = unescaped
                file_id = unescaped

            key = (file_id, revision_id)
            setdefault(key, False)
            if revision_id == line_key[-1]:
                result[key] = True
        return result

    def _inventory_xml_lines_for_keys(self, keys):
        """Get a line iterator of the sort needed for finding references.

        Not relevant for non-xml inventory repositories.

        Ghosts in keys are ignored.

        :param keys: The revision keys for the inventories to inspect.
        :return: An iterator over (inventory line, revid) for the fulltexts of
            all of the xml inventories specified by keys.
        """
        stream = self.inventories.get_record_stream(keys, 'unordered', True)
        for record in stream:
            if record.storage_kind != 'absent':
                chunks = record.get_bytes_as('chunked')
                revid = record.key[-1]
                lines = osutils.chunks_to_lines(chunks)
                for line in lines:
                    yield line, revid

    def _find_file_ids_from_xml_inventory_lines(self, line_iterator,
        revision_keys):
        """Helper routine for fileids_altered_by_revision_ids.

        This performs the translation of xml lines to revision ids.

        :param line_iterator: An iterator of lines, origin_version_id
        :param revision_keys: The revision ids to filter for. This should be a
            set or other type which supports efficient __contains__ lookups, as
            the revision key from each parsed line will be looked up in the
            revision_keys filter.
        :return: a dictionary mapping altered file-ids to an iterable of
            revision_ids. Each altered file-id has the exact revision_ids that
            altered it listed explicitly.
        """
        seen = set(self._find_text_key_references_from_xml_inventory_lines(
                line_iterator).iterkeys())
        parent_keys = self._find_parent_keys_of_revisions(revision_keys)
        parent_seen = set(self._find_text_key_references_from_xml_inventory_lines(
            self._inventory_xml_lines_for_keys(parent_keys)))
        new_keys = seen - parent_seen
        result = {}
        setdefault = result.setdefault
        for key in new_keys:
            setdefault(key[0], set()).add(key[-1])
        return result

    def _find_parent_ids_of_revisions(self, revision_ids):
        """Find all parent ids that are mentioned in the revision graph.

        :return: set of revisions that are parents of revision_ids which are
            not part of revision_ids themselves
        """
        parent_map = self.get_parent_map(revision_ids)
        parent_ids = set()
        map(parent_ids.update, parent_map.itervalues())
        parent_ids.difference_update(revision_ids)
        parent_ids.discard(_mod_revision.NULL_REVISION)
        return parent_ids

    def _find_parent_keys_of_revisions(self, revision_keys):
        """Similar to _find_parent_ids_of_revisions, but used with keys.

        :param revision_keys: An iterable of revision_keys.
        :return: The parents of all revision_keys that are not already in
            revision_keys
        """
        parent_map = self.revisions.get_parent_map(revision_keys)
        parent_keys = set()
        map(parent_keys.update, parent_map.itervalues())
        parent_keys.difference_update(revision_keys)
        parent_keys.discard(_mod_revision.NULL_REVISION)
        return parent_keys
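
A small worked example of the two helpers above, under the assumption of a linear history stored as A <- B <- C with illustrative ids:

# Revision graph assumed: {'C': ('B',), 'B': ('A',), 'A': ()}
repo._find_parent_ids_of_revisions(['B', 'C'])
# -> set(['A'])        'B' is a parent of 'C' but is excluded because it
#                      is itself in the queried set
repo._find_parent_keys_of_revisions([('B',), ('C',)])
# -> set([('A',)])     the same computation, expressed on 1-tuple keys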

    def fileids_altered_by_revision_ids(self, revision_ids, _inv_weave=None):
        """Find the file ids and versions affected by revisions.

        :param revision_ids: an iterable containing revision ids.
        :param _inv_weave: The inventory weave from this repository or None.
            If None, the inventory weave will be opened automatically.
        :return: a dictionary mapping altered file-ids to an iterable of
            revision_ids. Each altered file-id has the exact revision_ids that
            altered it listed explicitly.
        """
        selected_keys = set((revid,) for revid in revision_ids)
        w = _inv_weave or self.inventories
        return self._find_file_ids_from_xml_inventory_lines(
            w.iter_lines_added_or_present_in_keys(
                selected_keys, pb=None),
            selected_keys)
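
A sketch of consuming the result, with assumed ids; the mapping goes from file id to the subset of the queried revisions that introduced a new text for that file.

altered = repo.fileids_altered_by_revision_ids(['rev-a', 'rev-b'])
# e.g. {'readme-id': set(['rev-a']), 'setup-id': set(['rev-a', 'rev-b'])}
for file_id, revisions in altered.iteritems():
    print file_id, sorted(revisions)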

    def iter_files_bytes(self, desired_files):
        """Iterate through file versions.

        Files will not necessarily be returned in the order they occur in
        desired_files.  No specific order is guaranteed.

        Yields pairs of identifier, bytes_iterator.  identifier is an opaque
        value supplied by the caller as part of desired_files.  It should
        uniquely identify the file version in the caller's context.  (Examples:
        an index number or a TreeTransform trans_id.)

        bytes_iterator is an iterable of bytestrings for the file.  The
        kind of iterable and length of the bytestrings are unspecified, but for
        this implementation, it is a list of bytes produced by
        VersionedFile.get_record_stream().

        :param desired_files: a list of (file_id, revision_id, identifier)
            triples
        """
        text_keys = {}
        for file_id, revision_id, callable_data in desired_files:
            text_keys[(file_id, revision_id)] = callable_data
        for record in self.texts.get_record_stream(text_keys, 'unordered', True):
            if record.storage_kind == 'absent':
                raise errors.RevisionNotPresent(record.key, self)
            yield text_keys[record.key], record.get_bytes_as('chunked')
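
A minimal sketch of iter_files_bytes(), assuming `repo` and illustrative (file_id, revision_id) pairs; the third element of each triple is an opaque identifier chosen by the caller and handed back alongside the bytes.

wanted = [
    ('readme-id', 'rev-a', 'readme'),   # identifier chosen by the caller
    ('setup-id', 'rev-b', 'setup'),
]
texts = {}
for identifier, chunks in repo.iter_files_bytes(wanted):
    texts[identifier] = ''.join(chunks)  # chunks is an iterable of bytestrings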

    def _generate_text_key_index(self, text_key_references=None,
        ancestors=None):
        """Generate a new text key index for the repository.

        This is an expensive function that will take considerable time to run.

        :return: A dict mapping text keys ((file_id, revision_id) tuples) to a
            list of parents, also text keys. When a given key has no parents,
            the parents list will be [NULL_REVISION].
        """
        # All revisions, to find inventory parents.
        if ancestors is None:
            graph = self.get_graph()
            ancestors = graph.get_parent_map(self.all_revision_ids())
        if text_key_references is None:
            text_key_references = self.find_text_key_references()
        pb = ui.ui_factory.nested_progress_bar()
        try:
            return self._do_generate_text_key_index(ancestors,
                text_key_references, pb)
        finally:
            pb.finished()

    def _do_generate_text_key_index(self, ancestors, text_key_references, pb):
        """Helper for _generate_text_key_index to avoid deep nesting."""
        revision_order = tsort.topo_sort(ancestors)
        invalid_keys = set()
        revision_keys = {}
        for revision_id in revision_order:
            revision_keys[revision_id] = set()
        text_count = len(text_key_references)
        # a cache of the text keys to allow reuse; costs a dict of all the
        # keys, but saves a 2-tuple for every child of a given key.
        text_key_cache = {}
        for text_key, valid in text_key_references.iteritems():
            if not valid:
                invalid_keys.add(text_key)
            else:
                revision_keys[text_key[1]].add(text_key)
                text_key_cache[text_key] = text_key
        del text_key_references
        text_index = {}
        text_graph = graph.Graph(graph.DictParentsProvider(text_index))
        NULL_REVISION = _mod_revision.NULL_REVISION
        # Set a cache with a size of 10 - this suffices for bzr.dev but may be
        # too small for large or very branchy trees. However, for 55K path
        # trees, it would be easy to use too much memory trivially. Ideally we
        # could gauge this by looking at available real memory etc, but this is
        # always a tricky proposition.
        inventory_cache = lru_cache.LRUCache(10)
        batch_size = 10 # should be ~150MB on a 55K path tree
        batch_count = len(revision_order) / batch_size + 1
        processed_texts = 0
        pb.update("Calculating text parents", processed_texts, text_count)
        for offset in xrange(batch_count):
            to_query = revision_order[offset * batch_size:(offset + 1) *
                batch_size]
            if not to_query:
                break
            for revision_id in to_query:
                parent_ids = ancestors[revision_id]
                for text_key in revision_keys[revision_id]:
                    pb.update("Calculating text parents", processed_texts)
                    processed_texts += 1
                    candidate_parents = []
                    for parent_id in parent_ids:
                        parent_text_key = (text_key[0], parent_id)
                        try:
                            check_parent = parent_text_key not in \
                                revision_keys[parent_id]
                        except KeyError:
                            # the parent parent_id is a ghost:
                            check_parent = False
                            # truncate the derived graph against this ghost.
                            parent_text_key = None
                        if check_parent:
                            # look at the parent commit details inventories to
                            # determine possible candidates in the per file graph.
                            # TODO: cache here.
                            try:
                                inv = inventory_cache[parent_id]
                            except KeyError:
                                inv = self.revision_tree(parent_id).inventory
                                inventory_cache[parent_id] = inv
                            try:
                                parent_entry = inv[text_key[0]]
                            except (KeyError, errors.NoSuchId):
                                parent_entry = None
                            if parent_entry is not None:
                                parent_text_key = (
                                    text_key[0], parent_entry.revision)
                            else:
                                parent_text_key = None
                        if parent_text_key is not None:
                            candidate_parents.append(
                                text_key_cache[parent_text_key])
                    parent_heads = text_graph.heads(candidate_parents)
                    new_parents = list(parent_heads)
                    new_parents.sort(key=lambda x:candidate_parents.index(x))
                    if new_parents == []:
                        new_parents = [NULL_REVISION]
                    text_index[text_key] = new_parents
        for text_key in invalid_keys:
            text_index[text_key] = [NULL_REVISION]
        return text_index

    def item_keys_introduced_by(self, revision_ids, _files_pb=None):
        """Get an iterable listing the keys of all the data introduced by a set
        of revision IDs.

        The keys will be ordered so that the corresponding items can be safely
        fetched and inserted in that order.

        :returns: An iterable producing tuples of (knit-kind, file-id,
            versions).  knit-kind is one of 'file', 'inventory', 'signatures',
            'revisions'.  file-id is None unless knit-kind is 'file'.
        """
        for result in self._find_file_keys_to_fetch(revision_ids, _files_pb):
            yield result
        del _files_pb
        for result in self._find_non_file_keys_to_fetch(revision_ids):
            yield result

    def _find_file_keys_to_fetch(self, revision_ids, pb):
        # XXX: it's a bit weird to control the inventory weave caching in this
        # generator.  Ideally the caching would be done in fetch.py I think.  Or
        # maybe this generator should explicitly have the contract that it
        # should not be iterated until the previously yielded item has been
        # processed?
        inv_w = self.inventories

        # file ids that changed
        file_ids = self.fileids_altered_by_revision_ids(revision_ids, inv_w)
        count = 0
        num_file_ids = len(file_ids)
        for file_id, altered_versions in file_ids.iteritems():
            if pb is not None:
                pb.update("Fetch texts", count, num_file_ids)
            count += 1
            yield ("file", file_id, altered_versions)

    def _find_non_file_keys_to_fetch(self, revision_ids):
        # inventory
        yield ("inventory", None, revision_ids)

        # signatures
        # XXX: Note ATM no callers actually pay attention to this return
        #      instead they just use the list of revision ids and ignore
        #      missing sigs. Consider removing this work entirely
        revisions_with_signatures = set(self.signatures.get_parent_map(
            [(r,) for r in revision_ids]))
        revisions_with_signatures = set(
            [r for (r,) in revisions_with_signatures])
        revisions_with_signatures.intersection_update(revision_ids)
        yield ("signatures", None, revisions_with_signatures)

        # revisions
        yield ("revisions", None, revision_ids)
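
A sketch of walking the fetch plan produced above, with an assumed revision id; each tuple names a knit kind, an optional file id, and the versions to copy.

for knit_kind, file_id, versions in repo.item_keys_introduced_by(['rev-a']):
    if knit_kind == 'file':
        print 'file', file_id, sorted(versions)
    else:
        # 'inventory', 'signatures' and 'revisions' entries carry file_id None.
        print knit_kind, list(versions)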

    @needs_read_lock
    def get_inventory(self, revision_id):
        """Get Inventory object by revision id."""
        return self.iter_inventories([revision_id]).next()

    def iter_inventories(self, revision_ids, ordering=None):
        """Get many inventories by revision_ids.

        This will buffer some or all of the texts used in constructing the
        inventories in memory, but will only parse a single inventory at a
        time.

        :param revision_ids: The expected revision ids of the inventories.
        :param ordering: optional ordering, e.g. 'topological'.  If not
            specified, the order of revision_ids will be preserved (by
            buffering if necessary).
        :return: An iterator of inventories.
        """
        if ((None in revision_ids)
            or (_mod_revision.NULL_REVISION in revision_ids)):
            raise ValueError('cannot get null revision inventory')
        return self._iter_inventories(revision_ids, ordering)

    def _iter_inventories(self, revision_ids, ordering):
        """single-document based inventory iteration."""
        inv_xmls = self._iter_inventory_xmls(revision_ids, ordering)
        for text, revision_id in inv_xmls:
            yield self._deserialise_inventory(revision_id, text)

    def _iter_inventory_xmls(self, revision_ids, ordering):
        if ordering is None:
            order_as_requested = True
            ordering = 'unordered'
        else:
            order_as_requested = False
        keys = [(revision_id,) for revision_id in revision_ids]
        if not keys:
            return
        if order_as_requested:
            key_iter = iter(keys)
            next_key = key_iter.next()
        stream = self.inventories.get_record_stream(keys, ordering, True)
        text_chunks = {}
        for record in stream:
            if record.storage_kind != 'absent':
                chunks = record.get_bytes_as('chunked')
                if order_as_requested:
                    text_chunks[record.key] = chunks
                else:
                    yield ''.join(chunks), record.key[-1]
            else:
                raise errors.NoSuchRevision(self, record.key)
            if order_as_requested:
                # Yield as many results as we can while preserving order.
                while next_key in text_chunks:
                    chunks = text_chunks.pop(next_key)
                    yield ''.join(chunks), next_key[-1]
                    try:
                        next_key = key_iter.next()
                    except StopIteration:
                        # We still want to fully consume the get_record_stream,
                        # just in case it is not actually finished at this point
                        next_key = None
                        break
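
A sketch of the ordering contract described above, assuming `repo` and illustrative ids: with no ordering argument the inventories come back in the requested order (buffering as needed), while an explicit ordering streams them in whatever order the record stream produces.

ids = ['rev-a', 'rev-b']
for inv in repo.iter_inventories(ids):
    print inv.revision_id            # 'rev-a' then 'rev-b'
for inv in repo.iter_inventories(ids, ordering='topological'):
    print inv.revision_id            # order chosen by the record stream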

    def _deserialise_inventory(self, revision_id, xml):
        """Transform the xml into an inventory object.

        :param revision_id: The expected revision id of the inventory.
        :param xml: A serialised inventory.
        """
        result = self._serializer.read_inventory_from_string(xml, revision_id,
                    entry_cache=self._inventory_entry_cache,
                    return_from_cache=self._safe_to_return_from_cache)
        if result.revision_id != revision_id:
            raise AssertionError('revision id mismatch %s != %s' % (
                result.revision_id, revision_id))
        return result

    def get_serializer_format(self):
        return self._serializer.format_num

    @needs_read_lock
    def _get_inventory_xml(self, revision_id):
        """Get serialized inventory as a string."""
        texts = self._iter_inventory_xmls([revision_id], 'unordered')
        try:
            text, revision_id = texts.next()
        except StopIteration:
            raise errors.HistoryMissing(self, 'inventory', revision_id)
        return text

    def get_rev_id_for_revno(self, revno, known_pair):
        """Return the revision id of a revno, given a later (revno, revid)
        pair in the same history.

        :return: if found (True, revid).  If the available history ran out
            before reaching the revno, then this returns
            (False, (closest_revno, closest_revid)).
        """
        known_revno, known_revid = known_pair
        partial_history = [known_revid]
        distance_from_known = known_revno - revno
        if distance_from_known < 0:
            raise ValueError(
                'requested revno (%d) is later than given known revno (%d)'
                % (revno, known_revno))
        try:
            _iter_for_revno(
                self, partial_history, stop_index=distance_from_known)
        except errors.RevisionNotPresent, err:
            if err.revision_id == known_revid:
                # The start revision (known_revid) wasn't found.
                raise
            # This is a stacked repository with no fallbacks, or there's a
            # left-hand ghost.  Either way, even though the revision named in
            # the error isn't in this repo, we know it's the next step in this
            # left-hand history.
            partial_history.append(err.revision_id)
        if len(partial_history) <= distance_from_known:
            # Didn't find enough history to get a revid for the revno.
            earliest_revno = known_revno - len(partial_history) + 1
            return (False, (earliest_revno, partial_history[-1]))
        if len(partial_history) - 1 > distance_from_known:
            raise AssertionError('_iter_for_revno returned too much history')
        return (True, partial_history[-1])
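
A sketch of get_rev_id_for_revno() with assumed values: we already know revno 5 is 'rev-e' and want revno 3 on the same lefthand history.

found, result = repo.get_rev_id_for_revno(3, (5, 'rev-e'))
if found:
    revid_3 = result                       # the revision id of revno 3
else:
    closest_revno, closest_revid = result  # history ran out before revno 3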

    def iter_reverse_revision_history(self, revision_id):
        """Iterate backwards through revision ids in the lefthand history

        :param revision_id: The revision id to start with.  All its lefthand
            ancestors will be traversed.
        """
        graph = self.get_graph()
        next_id = revision_id
        while True:
            if next_id in (None, _mod_revision.NULL_REVISION):
                return
            try:
                parents = graph.get_parent_map([next_id])[next_id]
            except KeyError:
                raise errors.RevisionNotPresent(next_id, self)
            yield next_id
            if len(parents) == 0:
                return
            else:
                next_id = parents[0]
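
A one-line sketch of the lefthand-history walk, with an assumed starting id:

for revid in repo.iter_reverse_revision_history('rev-e'):
    print revid   # 'rev-e', then its lefthand parent, and so on to the root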

    def is_shared(self):
        """Return True if this repository is flagged as a shared repository."""
        raise NotImplementedError(self.is_shared)

    @needs_write_lock
    def reconcile(self, other=None, thorough=False):
        """Reconcile this repository."""
        from bzrlib.reconcile import RepoReconciler
        reconciler = RepoReconciler(self, thorough=thorough)
        reconciler.reconcile()
        return reconciler

    def _refresh_data(self):
        """Helper called from lock_* to ensure coherency with disk.

        The default implementation does nothing; it is however possible
        for repositories to maintain loaded indices across multiple locks
        by checking inside their implementation of this method to see
        whether their indices are still valid. This depends of course on
        the disk format being validatable in this manner. This method is
        also called by the refresh_data() public interface to cause a refresh
        to occur while in a write lock so that data inserted by a smart server
        push operation is visible on the client's instance of the physical
        repository.
        """

    @needs_read_lock
    def revision_tree(self, revision_id):
        """Return Tree for a revision on this branch.

        `revision_id` may be NULL_REVISION for the empty tree revision.
        """
        revision_id = _mod_revision.ensure_null(revision_id)
        # TODO: refactor this to use an existing revision object
        # so we don't need to read it in twice.
        if revision_id == _mod_revision.NULL_REVISION:
            return RevisionTree(self, Inventory(root_id=None),
                                _mod_revision.NULL_REVISION)
        else:
            inv = self.get_inventory(revision_id)
            return RevisionTree(self, inv, revision_id)
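
Because revision_tree() accepts NULL_REVISION and returns an empty tree for it, diff-style code can treat rootless revisions uniformly. A rough usage sketch (the revision id is a placeholder, and ghost parents are not handled):

from bzrlib import revision as _mod_revision

def tree_and_basis(repo, rev_id):
    # Return (tree, basis_tree) for rev_id; the basis is the empty tree
    # when the revision has no real parent.
    repo.lock_read()
    try:
        tree = repo.revision_tree(rev_id)
        parent_ids = repo.get_parent_map([rev_id]).get(rev_id, ())
        if parent_ids and parent_ids[0] != _mod_revision.NULL_REVISION:
            basis = repo.revision_tree(parent_ids[0])
        else:
            basis = repo.revision_tree(_mod_revision.NULL_REVISION)
        return tree, basis
    finally:
        repo.unlock()
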
    def revision_trees(self, revision_ids):
        """Return Trees for revisions in this repository.

        :param revision_ids: a sequence of revision-ids;
          a revision-id may not be None or 'null:'
        """
        inventories = self.iter_inventories(revision_ids)
        for inv in inventories:
            yield RevisionTree(self, inv, inv.revision_id)

    def _filtered_revision_trees(self, revision_ids, file_ids):
        """Return Tree for a revision on this branch with only some files.

        :param revision_ids: a sequence of revision-ids;
          a revision-id may not be None or 'null:'
        :param file_ids: if not None, the result is filtered
          so that only those file-ids, their parents and their
          children are included.
        """
        inventories = self.iter_inventories(revision_ids)
        for inv in inventories:
            # Should we introduce a FilteredRevisionTree class rather
            # than pre-filter the inventory here?
            filtered_inv = inv.filter(file_ids)
            yield RevisionTree(self, filtered_inv, filtered_inv.revision_id)
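
Because revision_trees() drives a single iter_inventories() pass, fetching several trees at once is cheaper than calling revision_tree() in a loop. A small sketch of batching; the revision ids are placeholders:

def file_ids_per_revision(repo, revision_ids):
    # Map each revision id to the set of file ids present in its tree.
    repo.lock_read()
    try:
        result = {}
        for tree in repo.revision_trees(revision_ids):
            ids = set(entry.file_id
                      for path, entry in tree.inventory.iter_entries())
            result[tree.get_revision_id()] = ids
        return result
    finally:
        repo.unlock()
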
    @needs_read_lock
    def get_ancestry(self, revision_id, topo_sorted=True):
        """Return a list of revision-ids integrated by a revision.

        The first element of the list is always None, indicating the origin
        revision.  This might change when we have history horizons, or
        perhaps we should have a new API.

        This is topologically sorted.
        """
        if _mod_revision.is_null(revision_id):
            return [None]
        if not self.has_revision(revision_id):
            raise errors.NoSuchRevision(self, revision_id)
        graph = self.get_graph()
        keys = set()
        search = graph._make_breadth_first_searcher([revision_id])
        while True:
            try:
                found, ghosts = search.next_with_ghosts()
            except StopIteration:
                break
            keys.update(found)
        if _mod_revision.NULL_REVISION in keys:
            keys.remove(_mod_revision.NULL_REVISION)
        if topo_sorted:
            parent_map = graph.get_parent_map(keys)
            keys = tsort.topo_sort(parent_map)
        return [None] + list(keys)
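
The leading None element means callers usually slice it off before use, and passing topo_sorted=False skips the tsort and returns the ancestry in arbitrary order, which is enough for counting or membership tests. For example (placeholder revision id):

def count_ancestors(repo, rev_id):
    # Number of revisions integrated by rev_id, ignoring the leading None.
    repo.lock_read()
    try:
        ancestry = repo.get_ancestry(rev_id, topo_sorted=False)
        return len(ancestry) - 1
    finally:
        repo.unlock()
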
    def pack(self, hint=None, clean_obsolete_packs=False):
        """Compress the data within the repository.

        This operation only makes sense for some repository types. For other
        types it should be a no-op that just returns.

        This stub method does not require a lock, but subclasses should use
        @needs_write_lock as this is a long running call and it is reasonable
        to implicitly lock for the user.

        :param hint: If not supplied, the whole repository is packed.
            If supplied, the repository may use the hint parameter as a
            hint for the parts of the repository to pack. A hint can be
            obtained from the result of commit_write_group(). Out of
            date hints are simply ignored, because concurrent operations
            can obsolete them rapidly.

        :param clean_obsolete_packs: Clean obsolete packs immediately after
            the pack operation.
        """
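
The hint plumbing is what lets commit-heavy code ask for a cheap, targeted pack instead of a full one: commit_write_group() may return a hint and that value can be fed straight back to pack(). A hedged sketch of the round trip; whether a given format returns a useful hint is format-dependent, and insert_callback is a hypothetical caller-supplied function:

def insert_then_pack(repo, insert_callback):
    # Write a group of data, then pack only what the commit reported.
    repo.lock_write()
    try:
        repo.start_write_group()
        try:
            insert_callback(repo)   # caller inserts revisions/texts here
        except:
            repo.abort_write_group()
            raise
        hint = repo.commit_write_group()
        repo.pack(hint=hint)        # hint may be None; pack() then packs fully
    finally:
        repo.unlock()
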
    def get_transaction(self):
        return self.control_files.get_transaction()

    def get_parent_map(self, revision_ids):
        """See graph.StackedParentsProvider.get_parent_map"""
        # revisions index works in keys; this just works in revisions
        # therefore wrap and unwrap
        query_keys = []
        result = {}
        for revision_id in revision_ids:
            if revision_id == _mod_revision.NULL_REVISION:
                result[revision_id] = ()
            elif revision_id is None:
                raise ValueError('get_parent_map(None) is not valid')
            else:
                query_keys.append((revision_id,))
        for ((revision_id,), parent_keys) in \
                self.revisions.get_parent_map(query_keys).iteritems():
            if parent_keys:
                result[revision_id] = tuple([parent_revid
                    for (parent_revid,) in parent_keys])
            else:
                result[revision_id] = (_mod_revision.NULL_REVISION,)
        return result

    def _make_parents_provider(self):
        return self
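
The wrap/unwrap dance above is purely about key shape: the revisions index stores (revision_id,) tuples while this API speaks plain revision ids, and revisions that are absent simply drop out of the result. That makes ghost detection straightforward, as in this sketch (revision id is a placeholder):

def missing_parents(repo, rev_id):
    # Parents recorded for rev_id that are not themselves present (ghosts).
    repo.lock_read()
    try:
        parents = repo.get_parent_map([rev_id]).get(rev_id, ())
        present = repo.get_parent_map(parents)
        return [p for p in parents if p not in present]
    finally:
        repo.unlock()
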
    @needs_read_lock
    def get_known_graph_ancestry(self, revision_ids):
        """Return the known graph for a set of revision ids and their ancestors.
        """
        st = static_tuple.StaticTuple
        revision_keys = [st(r_id).intern() for r_id in revision_ids]
        known_graph = self.revisions.get_known_graph_ancestry(revision_keys)
        return graph.GraphThunkIdsToKeys(known_graph)

    def get_graph(self, other_repository=None):
        """Return the graph walker for this repository format"""
        parents_provider = self._make_parents_provider()
        if (other_repository is not None and
            not self.has_same_location(other_repository)):
            parents_provider = graph.StackedParentsProvider(
                [parents_provider, other_repository._make_parents_provider()])
        return graph.Graph(parents_provider)
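
Passing another repository to get_graph() stacks the two parents providers, which is how operations such as fetch reason about ancestry that is split across a source and a target. An illustrative use, finding a common ancestor across two repositories (the repository objects and revision ids are assumed to come from the caller):

def common_ancestor(local_repo, other_repo, rev_a, rev_b):
    # Build a graph that can see both repositories' revisions, then ask it
    # for the unique least common ancestor of the two revisions.
    local_repo.lock_read()
    other_repo.lock_read()
    try:
        g = local_repo.get_graph(other_repo)
        return g.find_unique_lca(rev_a, rev_b)
    finally:
        other_repo.unlock()
        local_repo.unlock()
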
    def _get_versioned_file_checker(self, text_key_references=None,
        ancestors=None):
        """Return an object suitable for checking versioned files.

        :param text_key_references: if non-None, an already built
            dictionary mapping text keys ((fileid, revision_id) tuples)
            to whether they were referred to by the inventory of the
            revision_id that they contain. If None, this will be
            calculated.
        :param ancestors: Optional result from
            self.get_graph().get_parent_map(self.all_revision_ids()) if already
            available.
        """
        return _VersionedFileChecker(self,
            text_key_references=text_key_references, ancestors=ancestors)

    def revision_ids_to_search_result(self, result_set):
        """Convert a set of revision ids to a graph SearchResult."""
        result_parents = set()
        for parents in self.get_graph().get_parent_map(
            result_set).itervalues():
            result_parents.update(parents)
        included_keys = result_set.intersection(result_parents)
        start_keys = result_set.difference(included_keys)
        exclude_keys = result_parents.difference(result_set)
        result = graph.SearchResult(start_keys, exclude_keys,
            len(result_set), result_set)
        return result
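
In other words, the start keys are the revisions in the set that nothing else in the set reaches, and the exclude keys are the parents that fall just outside it. A small sketch of inspecting the result; the exact layout of the recipe tuple is an assumption here, so treat the comment as approximate:

def describe_search(repo, revision_ids):
    # Summarise the SearchResult built from an arbitrary set of revisions.
    repo.lock_read()
    try:
        result = repo.revision_ids_to_search_result(set(revision_ids))
        # get_recipe() returns roughly (kind, start_keys, exclude_keys, count).
        return result.get_recipe()
    finally:
        repo.unlock()
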
| 
1185.65.27
by Robert Collins
 Tweak storage towards mergability.  | 
2687  | 
    @needs_write_lock
 | 
| 
1534.6.5
by Robert Collins
 Cloning of repos preserves shared and make-working-tree attributes.  | 
2688  | 
def set_make_working_trees(self, new_value):  | 
2689  | 
"""Set the policy flag for making working trees when creating branches.  | 
|
2690  | 
||
2691  | 
        This only applies to branches that use this repository.
 | 
|
2692  | 
||
2693  | 
        The default is 'True'.
 | 
|
2694  | 
        :param new_value: True to restore the default, False to disable making
 | 
|
2695  | 
                          working trees.
 | 
|
2696  | 
        """
 | 
|
| 
1596.2.12
by Robert Collins
 Merge and make Knit Repository use the revision store for all possible queries.  | 
2697  | 
raise NotImplementedError(self.set_make_working_trees)  | 
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
2698  | 
|
| 
1534.6.5
by Robert Collins
 Cloning of repos preserves shared and make-working-tree attributes.  | 
2699  | 
def make_working_trees(self):  | 
2700  | 
"""Returns the policy for making working trees on new branches."""  | 
|
| 
1596.2.12
by Robert Collins
 Merge and make Knit Repository use the revision store for all possible queries.  | 
2701  | 
raise NotImplementedError(self.make_working_trees)  | 
| 
1534.6.5
by Robert Collins
 Cloning of repos preserves shared and make-working-tree attributes.  | 
2702  | 
|
2703  | 
    @needs_write_lock
 | 
|
| 
1185.65.1
by Aaron Bentley
 Refactored out ControlFiles and RevisionStore from _Branch  | 
2704  | 
def sign_revision(self, revision_id, gpg_strategy):  | 
2705  | 
plaintext = Testament.from_revision(self, revision_id).as_short_text()  | 
|
2706  | 
self.store_revision_signature(gpg_strategy, plaintext, revision_id)  | 
|
| 
1534.4.40
by Robert Collins
 Add RepositoryFormats and allow bzrdir.open or create _repository to be used.  | 
2707  | 
|
| 
1563.2.29
by Robert Collins
 Remove all but fetch references to repository.revision_store.  | 
2708  | 
    @needs_read_lock
 | 
2709  | 
def has_signature_for_revision_id(self, revision_id):  | 
|
2710  | 
"""Query for a revision signature for revision_id in the repository."""  | 
|
| 
3350.6.4
by Robert Collins
 First cut at pluralised VersionedFiles. Some rather massive API incompatabilities, primarily because of the difficulty of coherence among competing stores.  | 
2711  | 
if not self.has_revision(revision_id):  | 
2712  | 
raise errors.NoSuchRevision(self, revision_id)  | 
|
2713  | 
sig_present = (1 == len(  | 
|
2714  | 
self.signatures.get_parent_map([(revision_id,)])))  | 
|
2715  | 
return sig_present  | 
|
| 
1563.2.29
by Robert Collins
 Remove all but fetch references to repository.revision_store.  | 
2716  | 
|
| 
1563.2.31
by Robert Collins
 Convert Knit repositories to use knits.  | 
2717  | 
    @needs_read_lock
 | 
2718  | 
def get_signature_text(self, revision_id):  | 
|
2719  | 
"""Return the text for a signature."""  | 
|
| 
3350.6.4
by Robert Collins
 First cut at pluralised VersionedFiles. Some rather massive API incompatabilities, primarily because of the difficulty of coherence among competing stores.  | 
2720  | 
stream = self.signatures.get_record_stream([(revision_id,)],  | 
2721  | 
'unordered', True)  | 
|
2722  | 
record = stream.next()  | 
|
2723  | 
if record.storage_kind == 'absent':  | 
|
2724  | 
raise errors.NoSuchRevision(self, revision_id)  | 
|
2725  | 
return record.get_bytes_as('fulltext')  | 
|
| 
1563.2.31
by Robert Collins
 Convert Knit repositories to use knits.  | 
2726  | 
|
| 
1732.2.4
by Martin Pool
 Split check into Branch.check and Repository.check  | 
2727  | 
    @needs_read_lock
 | 
| 
4332.3.11
by Robert Collins
 Move tree and back callbacks into the repository check core.  | 
2728  | 
def check(self, revision_ids=None, callback_refs=None, check_repo=True):  | 
| 
1732.2.4
by Martin Pool
 Split check into Branch.check and Repository.check  | 
2729  | 
"""Check consistency of all history of given revision_ids.  | 
2730  | 
||
2731  | 
        Different repository implementations should override _check().
 | 
|
2732  | 
||
2733  | 
        :param revision_ids: A non-empty list of revision_ids whose ancestry
 | 
|
2734  | 
             will be checked.  Typically the last revision_id of a branch.
 | 
|
| 
4332.3.11
by Robert Collins
 Move tree and back callbacks into the repository check core.  | 
2735  | 
        :param callback_refs: A dict of check-refs to resolve and callback
 | 
2736  | 
            the check/_check method on the items listed as wanting the ref.
 | 
|
2737  | 
            see bzrlib.check.
 | 
|
2738  | 
        :param check_repo: If False do not check the repository contents, just 
 | 
|
2739  | 
            calculate the data callback_refs requires and call them back.
 | 
|
| 
1732.2.4
by Martin Pool
 Split check into Branch.check and Repository.check  | 
2740  | 
        """
 | 
| 
4332.3.11
by Robert Collins
 Move tree and back callbacks into the repository check core.  | 
2741  | 
return self._check(revision_ids, callback_refs=callback_refs,  | 
2742  | 
check_repo=check_repo)  | 
|
| 
1732.2.4
by Martin Pool
 Split check into Branch.check and Repository.check  | 
2743  | 
|
| 
4332.3.11
by Robert Collins
 Move tree and back callbacks into the repository check core.  | 
2744  | 
def _check(self, revision_ids, callback_refs, check_repo):  | 
2745  | 
result = check.Check(self, check_repo=check_repo)  | 
|
2746  | 
result.check(callback_refs)  | 
|
| 
1732.2.4
by Martin Pool
 Split check into Branch.check and Repository.check  | 
2747  | 
return result  | 
2748  | 
||
| 
    def _warn_if_deprecated(self, branch=None):
        global _deprecation_warning_done
        if _deprecation_warning_done:
            return
        try:
            if branch is None:
                conf = config.GlobalConfig()
            else:
                conf = branch.get_config()
            if conf.suppress_warning('format_deprecation'):
                return
            warning("Format %s for %s is deprecated -"
                    " please use 'bzr upgrade' to get better performance"
                    % (self._format, self.bzrdir.transport.base))
        finally:
            _deprecation_warning_done = True
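
Because the check goes through the configuration stack, users can silence this warning instead of upgrading immediately (typically via a suppress_warnings entry in their bzr configuration; the exact option spelling is an assumption, the lookup below mirrors what _warn_if_deprecated itself does):

from bzrlib import config

def deprecation_warning_suppressed(branch=None):
    # Same lookup order as _warn_if_deprecated: branch config when a branch
    # is available, global config otherwise.
    if branch is None:
        conf = config.GlobalConfig()
    else:
        conf = branch.get_config()
    return conf.suppress_warning('format_deprecation')
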
| 
1910.2.63
by Aaron Bentley
 Add supports_rich_root member to repository  | 
2766  | 
def supports_rich_root(self):  | 
2767  | 
return self._format.rich_root_data  | 
|
2768  | 
||
| 
2150.2.2
by Robert Collins
 Change the commit builder selected-revision-id test to use a unicode revision id where possible, leading to stricter testing of the hypothetical unicode revision id support in bzr.  | 
2769  | 
def _check_ascii_revisionid(self, revision_id, method):  | 
2770  | 
"""Private helper for ascii-only repositories."""  | 
|
2771  | 
        # weave repositories refuse to store revisionids that are non-ascii.
 | 
|
2772  | 
if revision_id is not None:  | 
|
2773  | 
            # weaves require ascii revision ids.
 | 
|
2774  | 
if isinstance(revision_id, unicode):  | 
|
2775  | 
try:  | 
|
2776  | 
revision_id.encode('ascii')  | 
|
2777  | 
except UnicodeEncodeError:  | 
|
2778  | 
raise errors.NonAsciiRevisionId(method, self)  | 
|
| 
2249.5.12
by John Arbash Meinel
 Change the APIs for VersionedFile, Store, and some of Repository into utf-8  | 
2779  | 
else:  | 
2780  | 
try:  | 
|
2781  | 
revision_id.decode('ascii')  | 
|
2782  | 
except UnicodeDecodeError:  | 
|
2783  | 
raise errors.NonAsciiRevisionId(method, self)  | 
|
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
2784  | 
|
| 
2819.2.4
by Andrew Bennetts
 Add a 'revision_graph_can_have_wrong_parents' method to repository.  | 
2785  | 
def revision_graph_can_have_wrong_parents(self):  | 
2786  | 
"""Is it possible for this repository to have a revision graph with  | 
|
2787  | 
        incorrect parents?
 | 
|
| 
2150.2.2
by Robert Collins
 Change the commit builder selected-revision-id test to use a unicode revision id where possible, leading to stricter testing of the hypothetical unicode revision id support in bzr.  | 
2788  | 
|
| 
2819.2.4
by Andrew Bennetts
 Add a 'revision_graph_can_have_wrong_parents' method to repository.  | 
2789  | 
        If True, then this repository must also implement
 | 
2790  | 
        _find_inconsistent_revision_parents so that check and reconcile can
 | 
|
2791  | 
        check for inconsistencies before proceeding with other checks that may
 | 
|
2792  | 
        depend on the revision index being consistent.
 | 
|
2793  | 
        """
 | 
|
2794  | 
raise NotImplementedError(self.revision_graph_can_have_wrong_parents)  | 
|
| 
3184.1.9
by Robert Collins
 * ``Repository.get_data_stream`` is now deprecated in favour of  | 
2795  | 
|
2796  | 
||
| 
2241.1.18
by mbp at sourcefrog
 Restore use of deprecating delegator for old formats in bzrlib.repository.  | 
2797  | 
# remove these delegates a while after bzr 0.15
 | 
2798  | 
def __make_delegated(name, from_module):  | 
|
2799  | 
def _deprecated_repository_forwarder():  | 
|
2800  | 
symbol_versioning.warn('%s moved to %s in bzr 0.15'  | 
|
2801  | 
% (name, from_module),  | 
|
| 
2241.1.20
by mbp at sourcefrog
 update tests for new locations of weave repos  | 
2802  | 
DeprecationWarning,  | 
2803  | 
stacklevel=2)  | 
|
| 
2241.1.18
by mbp at sourcefrog
 Restore use of deprecating delegator for old formats in bzrlib.repository.  | 
2804  | 
m = __import__(from_module, globals(), locals(), [name])  | 
2805  | 
try:  | 
|
2806  | 
return getattr(m, name)  | 
|
2807  | 
except AttributeError:  | 
|
2808  | 
raise AttributeError('module %s has no name %s'  | 
|
2809  | 
% (m, name))  | 
|
2810  | 
globals()[name] = _deprecated_repository_forwarder  | 
|
2811  | 
||
2812  | 
for _name in [  | 
|
2813  | 
'AllInOneRepository',  | 
|
2814  | 
'WeaveMetaDirRepository',  | 
|
2815  | 
'PreSplitOutRepositoryFormat',  | 
|
2816  | 
'RepositoryFormat4',  | 
|
2817  | 
'RepositoryFormat5',  | 
|
2818  | 
'RepositoryFormat6',  | 
|
2819  | 
'RepositoryFormat7',  | 
|
2820  | 
        ]:
 | 
|
2821  | 
__make_delegated(_name, 'bzrlib.repofmt.weaverepo')  | 
|
2822  | 
||
2823  | 
for _name in [  | 
|
2824  | 
'KnitRepository',  | 
|
2825  | 
'RepositoryFormatKnit',  | 
|
2826  | 
'RepositoryFormatKnit1',  | 
|
2827  | 
        ]:
 | 
|
2828  | 
__make_delegated(_name, 'bzrlib.repofmt.knitrepo')  | 
|
2829  | 
||
2830  | 
||
| 
def install_revision(repository, rev, revision_tree):
    """Install all revision data into a repository."""
    install_revisions(repository, [(rev, revision_tree, None)])


def install_revisions(repository, iterable, num_revisions=None, pb=None):
    """Install all revision data into a repository.

    Accepts an iterable of revision, tree, signature tuples.  The signature
    may be None.
    """
    repository.start_write_group()
    try:
        inventory_cache = lru_cache.LRUCache(10)
        for n, (revision, revision_tree, signature) in enumerate(iterable):
            _install_revision(repository, revision, revision_tree, signature,
                inventory_cache)
            if pb is not None:
                pb.update('Transferring revisions', n + 1, num_revisions)
    except:
        repository.abort_write_group()
        raise
    else:
        repository.commit_write_group()
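
A typical caller pairs install_revisions() with a progress bar from the UI factory and (revision, tree, signature) tuples built from another repository. A hedged sketch, assuming both repository objects are supplied by the caller:

from bzrlib import ui
from bzrlib.repository import install_revisions

def copy_revisions(source_repo, target_repo, revision_ids):
    # Copy complete revisions (texts, inventory, revision, no signature)
    # from one repository into another inside a single write group.
    source_repo.lock_read()
    target_repo.lock_write()
    try:
        triples = ((source_repo.get_revision(rev_id),
                    source_repo.revision_tree(rev_id),
                    None)
                   for rev_id in revision_ids)
        pb = ui.ui_factory.nested_progress_bar()
        try:
            install_revisions(target_repo, triples,
                              num_revisions=len(revision_ids), pb=pb)
        finally:
            pb.finished()
    finally:
        target_repo.unlock()
        source_repo.unlock()
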
| 
3735.2.13
by Robert Collins
 Teach install_revisions to use inventory deltas when appropriate.  | 
2857  | 
def _install_revision(repository, rev, revision_tree, signature,  | 
2858  | 
inventory_cache):  | 
|
| 
2592.3.96
by Robert Collins
 Merge index improvements (includes bzr.dev).  | 
2859  | 
"""Install all revision data into a repository."""  | 
| 
1185.82.84
by Aaron Bentley
 Moved stuff around  | 
2860  | 
present_parents = []  | 
2861  | 
parent_trees = {}  | 
|
2862  | 
for p_id in rev.parent_ids:  | 
|
2863  | 
if repository.has_revision(p_id):  | 
|
2864  | 
present_parents.append(p_id)  | 
|
2865  | 
parent_trees[p_id] = repository.revision_tree(p_id)  | 
|
2866  | 
else:  | 
|
| 
3668.5.1
by Jelmer Vernooij
 Use NULL_REVISION rather than None for Repository.revision_tree().  | 
2867  | 
parent_trees[p_id] = repository.revision_tree(  | 
2868  | 
_mod_revision.NULL_REVISION)  | 
|
| 
1185.82.84
by Aaron Bentley
 Moved stuff around  | 
2869  | 
|
2870  | 
inv = revision_tree.inventory  | 
|
| 
1910.2.51
by Aaron Bentley
 Bundles now corrupt repositories  | 
2871  | 
entries = inv.iter_entries()  | 
| 
2617.6.6
by Robert Collins
 Some review feedback.  | 
2872  | 
    # backwards compatibility hack: skip the root id.
 | 
| 
1910.2.63
by Aaron Bentley
 Add supports_rich_root member to repository  | 
2873  | 
if not repository.supports_rich_root():  | 
| 
1910.2.60
by Aaron Bentley
 Ensure that new-model revisions aren't installed into old-model repos  | 
2874  | 
path, root = entries.next()  | 
2875  | 
if root.revision != rev.revision_id:  | 
|
| 
1910.2.63
by Aaron Bentley
 Add supports_rich_root member to repository  | 
2876  | 
raise errors.IncompatibleRevision(repr(repository))  | 
| 
3350.6.4
by Robert Collins
 First cut at pluralised VersionedFiles. Some rather massive API incompatabilities, primarily because of the difficulty of coherence among competing stores.  | 
2877  | 
text_keys = {}  | 
2878  | 
for path, ie in entries:  | 
|
2879  | 
text_keys[(ie.file_id, ie.revision)] = ie  | 
|
2880  | 
text_parent_map = repository.texts.get_parent_map(text_keys)  | 
|
2881  | 
missing_texts = set(text_keys) - set(text_parent_map)  | 
|
| 
1185.82.84
by Aaron Bentley
 Moved stuff around  | 
2882  | 
    # Add the texts that are not already present
 | 
| 
3350.6.4
by Robert Collins
 First cut at pluralised VersionedFiles. Some rather massive API incompatabilities, primarily because of the difficulty of coherence among competing stores.  | 
2883  | 
for text_key in missing_texts:  | 
2884  | 
ie = text_keys[text_key]  | 
|
2885  | 
text_parents = []  | 
|
2886  | 
        # FIXME: TODO: The following loop overlaps/duplicates that done by
 | 
|
2887  | 
        # commit to determine parents. There is a latent/real bug here where
 | 
|
2888  | 
        # the parents inserted are not those commit would do - in particular
 | 
|
2889  | 
        # they are not filtered by heads(). RBC, AB
 | 
|
2890  | 
for revision, tree in parent_trees.iteritems():  | 
|
2891  | 
if ie.file_id not in tree:  | 
|
2892  | 
                continue
 | 
|
2893  | 
parent_id = tree.inventory[ie.file_id].revision  | 
|
2894  | 
if parent_id in text_parents:  | 
|
2895  | 
                continue
 | 
|
2896  | 
text_parents.append((ie.file_id, parent_id))  | 
|
2897  | 
lines = revision_tree.get_file(ie.file_id).readlines()  | 
|
2898  | 
repository.texts.add_lines(text_key, text_parents, lines)  | 
|
| 
1185.82.84
by Aaron Bentley
 Moved stuff around  | 
2899  | 
try:  | 
2900  | 
        # install the inventory
 | 
|
| 
3735.2.13
by Robert Collins
 Teach install_revisions to use inventory deltas when appropriate.  | 
2901  | 
if repository._format._commit_inv_deltas and len(rev.parent_ids):  | 
2902  | 
            # Cache this inventory
 | 
|
2903  | 
inventory_cache[rev.revision_id] = inv  | 
|
2904  | 
try:  | 
|
2905  | 
basis_inv = inventory_cache[rev.parent_ids[0]]  | 
|
2906  | 
except KeyError:  | 
|
2907  | 
repository.add_inventory(rev.revision_id, inv, present_parents)  | 
|
2908  | 
else:  | 
|
| 
3735.2.47
by Robert Collins
 Move '_make_inv_delta' onto Inventory (UNTESTED).  | 
2909  | 
delta = inv._make_delta(basis_inv)  | 
| 
3735.13.4
by John Arbash Meinel
 Track down more code paths that were broken by the merge.  | 
2910  | 
repository.add_inventory_by_delta(rev.parent_ids[0], delta,  | 
| 
3735.2.13
by Robert Collins
 Teach install_revisions to use inventory deltas when appropriate.  | 
2911  | 
rev.revision_id, present_parents)  | 
2912  | 
else:  | 
|
2913  | 
repository.add_inventory(rev.revision_id, inv, present_parents)  | 
|
| 
1185.82.84
by Aaron Bentley
 Moved stuff around  | 
2914  | 
except errors.RevisionAlreadyPresent:  | 
2915  | 
        pass
 | 
|
| 
2996.2.1
by Aaron Bentley
 Add KnitRepositoryFormat4  | 
2916  | 
if signature is not None:  | 
| 
2996.2.8
by Aaron Bentley
 Fix add_signature discrepancies  | 
2917  | 
repository.add_signature_text(rev.revision_id, signature)  | 
| 
1185.82.84
by Aaron Bentley
 Moved stuff around  | 
2918  | 
repository.add_revision(rev.revision_id, rev, inv)  | 
2919  | 
||
2920  | 
||
| 
1556.1.3
by Robert Collins
 Rearrangment of Repository logic to be less type code driven, and bugfix InterRepository.missing_revision_ids  | 
2921  | 
class MetaDirRepository(Repository):  | 
| 
3407.2.13
by Martin Pool
 Remove indirection through control_files to get transports  | 
2922  | 
"""Repositories in the new meta-dir layout.  | 
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
2923  | 
|
| 
3407.2.13
by Martin Pool
 Remove indirection through control_files to get transports  | 
2924  | 
    :ivar _transport: Transport for access to repository control files,
 | 
2925  | 
        typically pointing to .bzr/repository.
 | 
|
2926  | 
    """
 | 
|
| 
1556.1.3
by Robert Collins
 Rearrangment of Repository logic to be less type code driven, and bugfix InterRepository.missing_revision_ids  | 
2927  | 
|
| 
3350.6.4
by Robert Collins
 First cut at pluralised VersionedFiles. Some rather massive API incompatabilities, primarily because of the difficulty of coherence among competing stores.  | 
2928  | 
def __init__(self, _format, a_bzrdir, control_files):  | 
2929  | 
super(MetaDirRepository, self).__init__(_format, a_bzrdir, control_files)  | 
|
| 
3407.2.3
by Martin Pool
 Branch and Repository use their own ._transport rather than going through .control_files  | 
2930  | 
self._transport = control_files._transport  | 
| 
1556.1.3
by Robert Collins
 Rearrangment of Repository logic to be less type code driven, and bugfix InterRepository.missing_revision_ids  | 
2931  | 
|
| 
1596.2.12
by Robert Collins
 Merge and make Knit Repository use the revision store for all possible queries.  | 
2932  | 
def is_shared(self):  | 
2933  | 
"""Return True if this repository is flagged as a shared repository."""  | 
|
| 
3407.2.3
by Martin Pool
 Branch and Repository use their own ._transport rather than going through .control_files  | 
2934  | 
return self._transport.has('shared-storage')  | 
| 
1596.2.12
by Robert Collins
 Merge and make Knit Repository use the revision store for all possible queries.  | 
2935  | 
|
2936  | 
    @needs_write_lock
 | 
|
2937  | 
def set_make_working_trees(self, new_value):  | 
|
2938  | 
"""Set the policy flag for making working trees when creating branches.  | 
|
2939  | 
||
2940  | 
        This only applies to branches that use this repository.
 | 
|
2941  | 
||
2942  | 
        The default is 'True'.
 | 
|
2943  | 
        :param new_value: True to restore the default, False to disable making
 | 
|
2944  | 
                          working trees.
 | 
|
2945  | 
        """
 | 
|
2946  | 
if new_value:  | 
|
2947  | 
try:  | 
|
| 
3407.2.3
by Martin Pool
 Branch and Repository use their own ._transport rather than going through .control_files  | 
2948  | 
self._transport.delete('no-working-trees')  | 
| 
1596.2.12
by Robert Collins
 Merge and make Knit Repository use the revision store for all possible queries.  | 
2949  | 
except errors.NoSuchFile:  | 
2950  | 
                pass
 | 
|
2951  | 
else:  | 
|
| 
3407.2.5
by Martin Pool
 Deprecate LockableFiles.put_utf8  | 
2952  | 
self._transport.put_bytes('no-working-trees', '',  | 
| 
3407.2.18
by Martin Pool
 BzrDir takes responsibility for default file/dir modes  | 
2953  | 
mode=self.bzrdir._get_file_mode())  | 
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
2954  | 
|
| 
1596.2.12
by Robert Collins
 Merge and make Knit Repository use the revision store for all possible queries.  | 
2955  | 
def make_working_trees(self):  | 
2956  | 
"""Returns the policy for making working trees on new branches."""  | 
|
| 
3407.2.3
by Martin Pool
 Branch and Repository use their own ._transport rather than going through .control_files  | 
2957  | 
return not self._transport.has('no-working-trees')  | 
| 
1596.2.12
by Robert Collins
 Merge and make Knit Repository use the revision store for all possible queries.  | 
2958  | 
|
| 
1556.1.3
by Robert Collins
 Rearrangment of Repository logic to be less type code driven, and bugfix InterRepository.missing_revision_ids  | 
2959  | 
|
| 
3316.2.3
by Robert Collins
 Remove manual notification of transaction finishing on versioned files.  | 
2960  | 
class MetaDirVersionedFileRepository(MetaDirRepository):  | 
2961  | 
"""Repositories in a meta-dir, that work via versioned file objects."""  | 
|
2962  | 
||
| 
3350.6.4
by Robert Collins
 First cut at pluralised VersionedFiles. Some rather massive API incompatabilities, primarily because of the difficulty of coherence among competing stores.  | 
2963  | 
def __init__(self, _format, a_bzrdir, control_files):  | 
| 
3316.2.5
by Robert Collins
 Review feedback.  | 
2964  | 
super(MetaDirVersionedFileRepository, self).__init__(_format, a_bzrdir,  | 
| 
3350.6.4
by Robert Collins
 First cut at pluralised VersionedFiles. Some rather massive API incompatabilities, primarily because of the difficulty of coherence among competing stores.  | 
2965  | 
control_files)  | 
| 
3316.2.3
by Robert Collins
 Remove manual notification of transaction finishing on versioned files.  | 
2966  | 
|
2967  | 
||
| 
4032.3.1
by Robert Collins
 Add a BranchFormat.network_name() method as preparation for creating branches via RPC calls.  | 
2968  | 
network_format_registry = registry.FormatRegistry()  | 
| 
3990.5.3
by Robert Collins
 Docs and polish on RepositoryFormat.network_name.  | 
2969  | 
"""Registry of formats indexed by their network name.
 | 
2970  | 
||
2971  | 
The network name for a repository format is an identifier that can be used when
 | 
|
2972  | 
referring to formats with smart server operations. See
 | 
|
2973  | 
RepositoryFormat.network_name() for more detail.
 | 
|
2974  | 
"""
 | 
|
| 
3990.5.1
by Andrew Bennetts
 Add network_name() to RepositoryFormat.  | 
2975  | 
|
2976  | 
||
| 
4032.3.1
by Robert Collins
 Add a BranchFormat.network_name() method as preparation for creating branches via RPC calls.  | 
2977  | 
format_registry = registry.FormatRegistry(network_format_registry)  | 
| 
3990.5.3
by Robert Collins
 Docs and polish on RepositoryFormat.network_name.  | 
2978  | 
"""Registry of formats, indexed by their BzrDirMetaFormat format string.
 | 
| 
2241.1.11
by Martin Pool
 Get rid of RepositoryFormat*_instance objects. Instead the format  | 
2979  | 
|
2980  | 
This can contain either format instances themselves, or classes/factories that
 | 
|
2981  | 
can be called to obtain one.
 | 
|
2982  | 
"""
 | 
|
| 
2241.1.2
by Martin Pool
 change to using external Repository format registry  | 
2983  | 
|
| 
2220.2.3
by Martin Pool
 Add tag: revision namespace.  | 
2984  | 
|
2985  | 
#####################################################################
 | 
|
2986  | 
# Repository Formats
 | 
|
| 
1910.2.46
by Aaron Bentley
 Whitespace fix  | 
2987  | 
|
| 
1534.4.40
by Robert Collins
 Add RepositoryFormats and allow bzrdir.open or create _repository to be used.  | 
2988  | 
class RepositoryFormat(object):  | 
2989  | 
"""A repository format.  | 
|
2990  | 
||
| 
3990.5.3
by Robert Collins
 Docs and polish on RepositoryFormat.network_name.  | 
2991  | 
    Formats provide four things:
 | 
| 
1534.4.40
by Robert Collins
 Add RepositoryFormats and allow bzrdir.open or create _repository to be used.  | 
2992  | 
     * An initialization routine to construct repository data on disk.
 | 
| 
3990.5.3
by Robert Collins
 Docs and polish on RepositoryFormat.network_name.  | 
2993  | 
     * a optional format string which is used when the BzrDir supports
 | 
2994  | 
       versioned children.
 | 
|
| 
1534.4.40
by Robert Collins
 Add RepositoryFormats and allow bzrdir.open or create _repository to be used.  | 
2995  | 
     * an open routine which returns a Repository instance.
 | 
| 
3990.5.3
by Robert Collins
 Docs and polish on RepositoryFormat.network_name.  | 
2996  | 
     * A network name for referring to the format in smart server RPC
 | 
2997  | 
       methods.
 | 
|
| 
1534.4.40
by Robert Collins
 Add RepositoryFormats and allow bzrdir.open or create _repository to be used.  | 
2998  | 
|
| 
2889.1.2
by Robert Collins
 Review feedback.  | 
2999  | 
    There is one and only one Format subclass for each on-disk format. But
 | 
3000  | 
    there can be one Repository subclass that is used for several different
 | 
|
3001  | 
    formats. The _format attribute on a Repository instance can be used to
 | 
|
3002  | 
    determine the disk format.
 | 
|
| 
2889.1.1
by Robert Collins
 * The class ``bzrlib.repofmt.knitrepo.KnitRepository3`` has been folded into  | 
3003  | 
|
| 
3990.5.3
by Robert Collins
 Docs and polish on RepositoryFormat.network_name.  | 
3004  | 
    Formats are placed in a registry by their format string for reference
 | 
3005  | 
    during opening. These should be subclasses of RepositoryFormat for
 | 
|
3006  | 
    consistency.
 | 
|
| 
1534.4.40
by Robert Collins
 Add RepositoryFormats and allow bzrdir.open or create _repository to be used.  | 
3007  | 
|
3008  | 
    Once a format is deprecated, just deprecate the initialize and open
 | 
|
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
3009  | 
    methods on the format class. Do not deprecate the object, as the
 | 
| 
4031.3.1
by Frank Aspell
 Fixing various typos  | 
3010  | 
    object may be created even when a repository instance hasn't been
 | 
| 
3990.5.3
by Robert Collins
 Docs and polish on RepositoryFormat.network_name.  | 
3011  | 
    created.
 | 
| 
1534.4.40
by Robert Collins
 Add RepositoryFormats and allow bzrdir.open or create _repository to be used.  | 
3012  | 
|
3013  | 
    Common instance attributes:
 | 
|
3014  | 
    _matchingbzrdir - the bzrdir format that the repository format was
 | 
|
3015  | 
    originally written to work with. This can be used if manually
 | 
|
3016  | 
    constructing a bzrdir and repository, or more commonly for test suite
 | 
|
| 
3128.1.3
by Vincent Ladeuil
 Since we are there s/parameteris.*/parameteriz&/.  | 
3017  | 
    parameterization.
 | 
| 
1534.4.40
by Robert Collins
 Add RepositoryFormats and allow bzrdir.open or create _repository to be used.  | 
3018  | 
    """
 | 
3019  | 
||
| 
2949.1.2
by Robert Collins
 * Fetch with pack repositories will no longer read the entire history graph.  | 
3020  | 
    # Set to True or False in derived classes. True indicates that the format
 | 
3021  | 
    # supports ghosts gracefully.
 | 
|
3022  | 
supports_ghosts = None  | 
|
| 
3221.3.1
by Robert Collins
 * Repository formats have a new supported-feature attribute  | 
3023  | 
    # Can this repository be given external locations to lookup additional
 | 
3024  | 
    # data. Set to True or False in derived classes.
 | 
|
3025  | 
supports_external_lookups = None  | 
|
| 
3735.2.1
by Robert Collins
 Add the concept of CHK lookups to Repository.  | 
3026  | 
    # Does this format support CHK bytestring lookups. Set to True or False in
 | 
3027  | 
    # derived classes.
 | 
|
3028  | 
supports_chks = None  | 
|
| 
3735.2.12
by Robert Collins
 Implement commit-via-deltas for split inventory repositories.  | 
3029  | 
    # Should commit add an inventory, or an inventory delta to the repository.
 | 
3030  | 
_commit_inv_deltas = True  | 
|
| 
4053.1.4
by Robert Collins
 Move the fetch control attributes from Repository to RepositoryFormat.  | 
3031  | 
    # What order should fetch operations request streams in?
 | 
3032  | 
    # The default is unordered as that is the cheapest for an origin to
 | 
|
3033  | 
    # provide.
 | 
|
3034  | 
_fetch_order = 'unordered'  | 
|
3035  | 
    # Does this repository format use deltas that can be fetched as-deltas ?
 | 
|
3036  | 
    # (E.g. knits, where the knit deltas can be transplanted intact.
 | 
|
3037  | 
    # We default to False, which will ensure that enough data to get
 | 
|
3038  | 
    # a full text out of any fetch stream will be grabbed.
 | 
|
3039  | 
_fetch_uses_deltas = False  | 
|
3040  | 
    # Should fetch trigger a reconcile after the fetch? Only needed for
 | 
|
3041  | 
    # some repository formats that can suffer internal inconsistencies.
 | 
|
3042  | 
_fetch_reconcile = False  | 
|
| 
4183.5.1
by Robert Collins
 Add RepositoryFormat.fast_deltas to signal fast delta creation.  | 
3043  | 
    # Does this format have < O(tree_size) delta generation. Used to hint what
 | 
3044  | 
    # code path for commit, amongst other things.
 | 
|
3045  | 
fast_deltas = None  | 
|
| 
4431.3.7
by Jonathan Lange
 Cherrypick bzr.dev 4470, resolving conflicts.  | 
3046  | 
    # Does doing a pack operation compress data? Useful for the pack UI command
 | 
3047  | 
    # (so if there is one pack, the operation can still proceed because it may
 | 
|
3048  | 
    # help), and for fetching when data won't have come from the same
 | 
|
3049  | 
    # compressor.
 | 
|
3050  | 
pack_compresses = False  | 
|
| 
4606.4.1
by Robert Collins
 Prepare test_repository's inter_repository tests for 2a.  | 
3051  | 
    # Does the repository inventory storage understand references to trees?
 | 
3052  | 
supports_tree_reference = None  | 
|
| 
4988.9.1
by Jelmer Vernooij
 Add experimental flag to RepositoryFormat.  | 
3053  | 
    # Is the format experimental ?
 | 
3054  | 
experimental = False  | 
|
| 
2949.1.2
by Robert Collins
 * Fetch with pack repositories will no longer read the entire history graph.  | 
3055  | 
|
| 
4634.144.4
by Martin Pool
 Show network name in RemoteRepositoryFormat repr  | 
3056  | 
def __repr__(self):  | 
3057  | 
return "%s()" % self.__class__.__name__  | 
|
| 
1904.2.3
by Martin Pool
 Give a warning on access to old repository formats  | 
3058  | 
|
| 
2241.1.11
by Martin Pool
 Get rid of RepositoryFormat*_instance objects. Instead the format  | 
3059  | 
def __eq__(self, other):  | 
3060  | 
        # format objects are generally stateless
 | 
|
3061  | 
return isinstance(other, self.__class__)  | 
|
3062  | 
||
| 
2100.3.35
by Aaron Bentley
 equality operations on bzrdir  | 
3063  | 
def __ne__(self, other):  | 
| 
2100.3.31
by Aaron Bentley
 Merged bzr.dev (17 tests failing)  | 
3064  | 
return not self == other  | 
3065  | 
||
| 
    @classmethod
    def find_format(klass, a_bzrdir):
        """Return the format for the repository object in a_bzrdir.

        This is used by bzr native formats that have a "format" file in
        the repository.  Other methods may be used by different types of
        control directory.
        """
        try:
            transport = a_bzrdir.get_repository_transport(None)
            format_string = transport.get_bytes("format")
            return format_registry.get(format_string)
        except errors.NoSuchFile:
            raise errors.NoRepositoryPresent(a_bzrdir)
        except KeyError:
            raise errors.UnknownFormatError(format=format_string,
                                            kind='repository')
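
find_format() is the glue between the on-disk "format" file and the registry populated by register_format(); the same lookup happens implicitly whenever a repository is opened. A hedged sketch of probing a location (the URL is a placeholder):

from bzrlib import bzrdir, errors

def describe_repository_format(url):
    # Open the containing bzrdir and report the repository format,
    # or a short message when no repository is present there.
    a_bzrdir = bzrdir.BzrDir.open(url)
    try:
        repo = a_bzrdir.open_repository()
    except errors.NoRepositoryPresent:
        return 'no repository at %s' % url
    return repo._format.get_format_description()
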
2241.1.1
by Martin Pool
 Change RepositoryFormat to use a Registry rather than ad-hoc dictionary  | 
3084  | 
    @classmethod
 | 
| 
2241.1.2
by Martin Pool
 change to using external Repository format registry  | 
3085  | 
def register_format(klass, format):  | 
3086  | 
format_registry.register(format.get_format_string(), format)  | 
|
| 
2241.1.1
by Martin Pool
 Change RepositoryFormat to use a Registry rather than ad-hoc dictionary  | 
3087  | 
|
3088  | 
    @classmethod
 | 
|
3089  | 
def unregister_format(klass, format):  | 
|
| 
2241.1.2
by Martin Pool
 change to using external Repository format registry  | 
3090  | 
format_registry.remove(format.get_format_string())  | 
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
3091  | 
|
| 
1534.4.47
by Robert Collins
 Split out repository into .bzr/repository  | 
3092  | 
    @classmethod
 | 
| 
1534.4.40
by Robert Collins
 Add RepositoryFormats and allow bzrdir.open or create _repository to be used.  | 
3093  | 
def get_default_format(klass):  | 
3094  | 
"""Return the current default format."""  | 
|
| 
2204.5.3
by Aaron Bentley
 zap old repository default handling  | 
3095  | 
from bzrlib import bzrdir  | 
3096  | 
return bzrdir.format_registry.make_bzrdir('default').repository_format  | 
|
| 
2241.1.1
by Martin Pool
 Change RepositoryFormat to use a Registry rather than ad-hoc dictionary  | 
3097  | 
|
| 
1534.4.40
by Robert Collins
 Add RepositoryFormats and allow bzrdir.open or create _repository to be used.  | 
3098  | 
def get_format_string(self):  | 
3099  | 
"""Return the ASCII format string that identifies this format.  | 
|
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
3100  | 
|
3101  | 
        Note that in pre format ?? repositories the format string is
 | 
|
| 
1534.4.40
by Robert Collins
 Add RepositoryFormats and allow bzrdir.open or create _repository to be used.  | 
3102  | 
        not permitted nor written to disk.
 | 
3103  | 
        """
 | 
|
3104  | 
raise NotImplementedError(self.get_format_string)  | 
|
3105  | 
||
| 
1624.3.19
by Olaf Conradi
 New call get_format_description to give a user-friendly description of a  | 
3106  | 
def get_format_description(self):  | 
| 
1759.2.1
by Jelmer Vernooij
 Fix some types (found using aspell).  | 
3107  | 
"""Return the short description for this format."""  | 
| 
1624.3.19
by Olaf Conradi
 New call get_format_description to give a user-friendly description of a  | 
3108  | 
raise NotImplementedError(self.get_format_description)  | 
3109  | 
||
| 
2241.1.6
by Martin Pool
 Move Knit repositories into the submodule bzrlib.repofmt.knitrepo and  | 
3110  | 
    # TODO: this shouldn't be in the base class, it's specific to things that
 | 
3111  | 
    # use weaves or knits -- mbp 20070207
 | 
|
| 
1563.2.17
by Robert Collins
 Change knits repositories to use a knit versioned file store for file texts.  | 
3112  | 
def _get_versioned_file_store(self,  | 
3113  | 
name,  | 
|
3114  | 
transport,  | 
|
3115  | 
control_files,  | 
|
3116  | 
prefixed=True,  | 
|
| 
2241.1.10
by Martin Pool
 Remove more references to weaves from the repository.py file  | 
3117  | 
versionedfile_class=None,  | 
| 
1946.2.5
by John Arbash Meinel
 Make knit stores delay creation, but not control stores  | 
3118  | 
versionedfile_kwargs={},  | 
| 
1608.2.12
by Martin Pool
 Store-escaping must quote uppercase characters too, so that they're safely  | 
3119  | 
escaped=False):  | 
| 
2241.1.10
by Martin Pool
 Remove more references to weaves from the repository.py file  | 
3120  | 
if versionedfile_class is None:  | 
3121  | 
versionedfile_class = self._versionedfile_class  | 
|
| 
1563.2.17
by Robert Collins
 Change knits repositories to use a knit versioned file store for file texts.  | 
3122  | 
weave_transport = control_files._transport.clone(name)  | 
3123  | 
dir_mode = control_files._dir_mode  | 
|
3124  | 
file_mode = control_files._file_mode  | 
|
3125  | 
return VersionedFileStore(weave_transport, prefixed=prefixed,  | 
|
| 
1608.2.12
by Martin Pool
 Store-escaping must quote uppercase characters too, so that they're safely  | 
3126  | 
dir_mode=dir_mode,  | 
3127  | 
file_mode=file_mode,  | 
|
3128  | 
versionedfile_class=versionedfile_class,  | 
|
| 
1946.2.5
by John Arbash Meinel
 Make knit stores delay creation, but not control stores  | 
3129  | 
versionedfile_kwargs=versionedfile_kwargs,  | 
| 
1608.2.12
by Martin Pool
 Store-escaping must quote uppercase characters too, so that they're safely  | 
3130  | 
escaped=escaped)  | 
| 
1563.2.17
by Robert Collins
 Change knits repositories to use a knit versioned file store for file texts.  | 
3131  | 
|
| 
    def initialize(self, a_bzrdir, shared=False):
        """Initialize a repository of this format in a_bzrdir.

        :param a_bzrdir: The bzrdir to put the new repository in.
        :param shared: The repository should be initialized as a sharable one.
        :returns: The new repository object.

        This may raise UninitializableFormat if shared repositories are not
        compatible with the a_bzrdir.
        """
        raise NotImplementedError(self.initialize)

    def is_supported(self):
        """Is this format supported?

        Supported formats must be initializable and openable.
        Unsupported formats may not support initialization or committing or
        some other features depending on the reason for not being supported.
        """
        return True
| 
3990.5.3
by Robert Collins
 Docs and polish on RepositoryFormat.network_name.  | 
3153  | 
def network_name(self):  | 
3154  | 
"""A simple byte string uniquely identifying this format for RPC calls.  | 
|
3155  | 
||
3156  | 
        MetaDir repository formats use their disk format string to identify the
 | 
|
3157  | 
        repository over the wire. All-in-one formats such as bzr < 0.8, and
 | 
|
3158  | 
        foreign formats like svn/git and hg should use some marker which is
 | 
|
3159  | 
        unique and immutable.
 | 
|
3160  | 
        """
 | 
|
3161  | 
raise NotImplementedError(self.network_name)  | 
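To make the two cases above concrete: a metadir format simply reuses its on-disk marker (see MetaDirRepositoryFormat.network_name further down), while a format with no bzr disk marker must pick a fixed, unique byte string. The foreign-format class below is hypothetical.

# Metadir formats: network name == disk format string, e.g. (per the
# registration later in this file) RepositoryFormat2a().network_name()
# is 'Bazaar repository format 2a (needs bzr 1.16 or later)\n'.
class ExampleForeignRepositoryFormat(RepositoryFormat):

    def network_name(self):
        # A fixed, unique, immutable marker; never derived from disk state.
        return "example-foreign-repository-format-1"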
|
3162  | 
||
| 
1910.2.12
by Aaron Bentley
 Implement knit repo format 2  | 
3163  | 
def check_conversion_target(self, target_format):  | 
| 
4608.1.4
by Martin Pool
 Move copy\&pasted check_conversion_target into RepositoryFormat base class  | 
3164  | 
if self.rich_root_data and not target_format.rich_root_data:  | 
3165  | 
raise errors.BadConversionTarget(  | 
|
3166  | 
'Does not support rich root data.', target_format,  | 
|
3167  | 
from_format=self)  | 
|
3168  | 
if (self.supports_tree_reference and  | 
|
3169  | 
not getattr(target_format, 'supports_tree_reference', False)):  | 
|
3170  | 
raise errors.BadConversionTarget(  | 
|
3171  | 
'Does not support nested trees', target_format,  | 
|
3172  | 
from_format=self)  | 
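A caller such as upgrade code can turn the exception above into a simple predicate; a minimal sketch (the helper name is made up):

def can_convert(source_format, target_format):
    try:
        source_format.check_conversion_target(target_format)
    except errors.BadConversionTarget:
        return False
    return True

# For example, a rich-root source paired with a plain-root target fails
# the first check above, so can_convert() returns False for that pair.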
|
| 
1910.2.12
by Aaron Bentley
 Implement knit repo format 2  | 
3173  | 
|
| 
1534.4.40
by Robert Collins
 Add RepositoryFormats and allow bzrdir.open or create _repository to be used.  | 
3174  | 
def open(self, a_bzrdir, _found=False):  | 
3175  | 
"""Return an instance of this format for the bzrdir a_bzrdir.  | 
|
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
3176  | 
|
| 
1534.4.40
by Robert Collins
 Add RepositoryFormats and allow bzrdir.open or create _repository to be used.  | 
3177  | 
        _found is a private parameter, do not use it.
 | 
3178  | 
        """
 | 
|
| 
1556.1.3
by Robert Collins
 Rearrangment of Repository logic to be less type code driven, and bugfix InterRepository.missing_revision_ids  | 
3179  | 
raise NotImplementedError(self.open)  | 
| 
1534.4.40
by Robert Collins
 Add RepositoryFormats and allow bzrdir.open or create _repository to be used.  | 
3180  | 
|
| 
5107.3.1
by Marco Pantaleoni
 Added the new hooks 'post_branch', 'post_switch' and 'post_repo_init',  | 
3181  | 
def _run_post_repo_init_hooks(self, repository, a_bzrdir, shared):  | 
3182  | 
from bzrlib.bzrdir import BzrDir, RepoInitHookParams  | 
|
3183  | 
hooks = BzrDir.hooks['post_repo_init']  | 
|
3184  | 
if not hooks:  | 
|
3185  | 
            return
 | 
|
3186  | 
params = RepoInitHookParams(repository, self, a_bzrdir, shared)  | 
|
3187  | 
for hook in hooks:  | 
|
3188  | 
hook(params)  | 
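A minimal sketch of registering such a hook, assuming bzrlib's usual Hooks.install_named_hook API; the callback name is made up, and params is the RepoInitHookParams constructed above:

from bzrlib import trace
from bzrlib.bzrdir import BzrDir

def report_new_repository(params):
    # params carries the arguments passed to RepoInitHookParams above.
    trace.note('new repository created: %r', params.repository)

BzrDir.hooks.install_named_hook(
    'post_repo_init', report_new_repository, 'report new repositories')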
|
3189  | 
||
| 
1556.1.3
by Robert Collins
 Rearrangment of Repository logic to be less type code driven, and bugfix InterRepository.missing_revision_ids  | 
3190  | 
|
3191  | 
class MetaDirRepositoryFormat(RepositoryFormat):  | 
|
| 
1759.2.1
by Jelmer Vernooij
 Fix some types (found using aspell).  | 
3192  | 
"""Common base class for the new repositories using the metadir layout."""  | 
| 
1556.1.3
by Robert Collins
 Rearrangment of Repository logic to be less type code driven, and bugfix InterRepository.missing_revision_ids  | 
3193  | 
|
| 
1910.2.14
by Aaron Bentley
 Fail when trying to use interrepository on Knit2 and Knit1  | 
3194  | 
rich_root_data = False  | 
| 
2323.5.17
by Martin Pool
 Add supports_tree_reference to all repo formats (robert)  | 
3195  | 
supports_tree_reference = False  | 
| 
3221.3.1
by Robert Collins
 * Repository formats have a new supported-feature attribute  | 
3196  | 
supports_external_lookups = False  | 
| 
3845.1.1
by John Arbash Meinel
 Ensure that RepositoryFormat._matchingbzrdir.repository_format matches.  | 
3197  | 
|
3198  | 
    @property
 | 
|
3199  | 
def _matchingbzrdir(self):  | 
|
3200  | 
matching = bzrdir.BzrDirMetaFormat1()  | 
|
3201  | 
matching.repository_format = self  | 
|
3202  | 
return matching  | 
|
| 
1910.2.14
by Aaron Bentley
 Fail when trying to use interrepository on Knit2 and Knit1  | 
3203  | 
|
| 
1556.1.4
by Robert Collins
 Add a new format for what will become knit, and the surrounding logic to upgrade repositories within metadirs, and tests for the same.  | 
3204  | 
def __init__(self):  | 
3205  | 
super(MetaDirRepositoryFormat, self).__init__()  | 
|
3206  | 
||
| 
1556.1.3
by Robert Collins
 Rearrangment of Repository logic to be less type code driven, and bugfix InterRepository.missing_revision_ids  | 
3207  | 
def _create_control_files(self, a_bzrdir):  | 
3208  | 
"""Create the required files and the initial control_files object."""  | 
|
| 
1759.2.2
by Jelmer Vernooij
 Revert some of my spelling fixes and fix some typos after review by Aaron.  | 
3209  | 
        # FIXME: RBC 20060125 don't peek under the covers
 | 
| 
1534.4.47
by Robert Collins
 Split out repository into .bzr/repository  | 
3210  | 
        # NB: no need to escape relative paths that are url safe.
 | 
3211  | 
repository_transport = a_bzrdir.get_repository_transport(self)  | 
|
| 
1996.3.4
by John Arbash Meinel
 lazy_import bzrlib/repository.py  | 
3212  | 
control_files = lockable_files.LockableFiles(repository_transport,  | 
3213  | 
'lock', lockdir.LockDir)  | 
|
| 
1553.5.61
by Martin Pool
 Locks protecting LockableFiles must now be explicitly created before use.  | 
3214  | 
control_files.create_lock()  | 
| 
1556.1.3
by Robert Collins
 Rearrangment of Repository logic to be less type code driven, and bugfix InterRepository.missing_revision_ids  | 
3215  | 
return control_files  | 
3216  | 
||
3217  | 
def _upload_blank_content(self, a_bzrdir, dirs, files, utf8_files, shared):  | 
|
3218  | 
"""Upload the initial blank content."""  | 
|
3219  | 
control_files = self._create_control_files(a_bzrdir)  | 
|
| 
1534.4.47
by Robert Collins
 Split out repository into .bzr/repository  | 
3220  | 
control_files.lock_write()  | 
| 
3407.2.4
by Martin Pool
 Small cleanups to initial creation of repository files  | 
3221  | 
transport = control_files._transport  | 
3222  | 
if shared == True:  | 
|
3223  | 
utf8_files += [('shared-storage', '')]  | 
|
| 
1534.4.47
by Robert Collins
 Split out repository into .bzr/repository  | 
3224  | 
try:  | 
| 
3407.2.18
by Martin Pool
 BzrDir takes responsibility for default file/dir modes  | 
3225  | 
transport.mkdir_multi(dirs, mode=a_bzrdir._get_dir_mode())  | 
| 
3407.2.4
by Martin Pool
 Small cleanups to initial creation of repository files  | 
3226  | 
for (filename, content_stream) in files:  | 
3227  | 
transport.put_file(filename, content_stream,  | 
|
| 
3407.2.18
by Martin Pool
 BzrDir takes responsibility for default file/dir modes  | 
3228  | 
mode=a_bzrdir._get_file_mode())  | 
| 
3407.2.4
by Martin Pool
 Small cleanups to initial creation of repository files  | 
3229  | 
for (filename, content_bytes) in utf8_files:  | 
3230  | 
transport.put_bytes_non_atomic(filename, content_bytes,  | 
|
| 
3407.2.18
by Martin Pool
 BzrDir takes responsibility for default file/dir modes  | 
3231  | 
mode=a_bzrdir._get_file_mode())  | 
| 
1534.4.47
by Robert Collins
 Split out repository into .bzr/repository  | 
3232  | 
finally:  | 
3233  | 
control_files.unlock()  | 
|
| 
1556.1.3
by Robert Collins
 Rearrangment of Repository logic to be less type code driven, and bugfix InterRepository.missing_revision_ids  | 
3234  | 
|
| 
3990.5.1
by Andrew Bennetts
 Add network_name() to RepositoryFormat.  | 
3235  | 
def network_name(self):  | 
3236  | 
"""Metadir formats have matching disk and network format strings."""  | 
|
3237  | 
return self.get_format_string()  | 
|
3238  | 
||
3239  | 
||
| 
3990.5.3
by Robert Collins
 Docs and polish on RepositoryFormat.network_name.  | 
3240  | 
# Pre-0.8 formats that don't have a disk format string (because they are
 | 
3241  | 
# versioned by the matching control directory). We use the control directories
 | 
|
3242  | 
# disk format string as a key for the network_name because they meet the
 | 
|
| 
4031.3.1
by Frank Aspell
 Fixing various typos  | 
3243  | 
# constraints (simple string, unique, immutable).
 | 
| 
3990.5.1
by Andrew Bennetts
 Add network_name() to RepositoryFormat.  | 
3244  | 
network_format_registry.register_lazy(  | 
3245  | 
"Bazaar-NG branch, format 5\n",  | 
|
3246  | 
'bzrlib.repofmt.weaverepo',  | 
|
3247  | 
'RepositoryFormat5',  | 
|
3248  | 
)
 | 
|
3249  | 
network_format_registry.register_lazy(  | 
|
3250  | 
"Bazaar-NG branch, format 6\n",  | 
|
3251  | 
'bzrlib.repofmt.weaverepo',  | 
|
3252  | 
'RepositoryFormat6',  | 
|
3253  | 
)
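A sketch of how these registrations are consumed on the receiving end of an RPC, assuming Registry.get() resolves the lazy registration on first use:

# Resolve a wire-format name back to its repository format.
format_5 = network_format_registry.get("Bazaar-NG branch, format 5\n")
# format_5 now refers to bzrlib.repofmt.weaverepo.RepositoryFormat5 as
# registered above (whether a class or an instance comes back depends on
# how the registry entry was populated).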
 | 
|
3254  | 
||
3255  | 
# formats which have no format string are not discoverable or independently
 | 
|
| 
4032.1.1
by John Arbash Meinel
 Merge the removal of all trailing whitespace, and resolve conflicts.  | 
3256  | 
# creatable on disk, so are not registered in format_registry.  They're
 | 
| 
2241.1.11
by Martin Pool
 Get rid of RepositoryFormat*_instance objects. Instead the format  | 
3257  | 
# all in bzrlib.repofmt.weaverepo now.  When an instance of one of these is
 | 
3258  | 
# needed, it's constructed directly by the BzrDir.  Non-native formats where
 | 
|
3259  | 
# the repository is not separately opened are similar.
 | 
|
3260  | 
||
| 
2241.1.4
by Martin Pool
 Moved old weave-based repository formats into bzrlib.repofmt.weaverepo.  | 
3261  | 
format_registry.register_lazy(  | 
3262  | 
'Bazaar-NG Repository format 7',  | 
|
3263  | 
'bzrlib.repofmt.weaverepo',  | 
|
| 
2241.1.11
by Martin Pool
 Get rid of RepositoryFormat*_instance objects. Instead the format  | 
3264  | 
    'RepositoryFormat7'
 | 
| 
2241.1.4
by Martin Pool
 Moved old weave-based repository formats into bzrlib.repofmt.weaverepo.  | 
3265  | 
    )
 | 
| 
2592.3.22
by Robert Collins
 Add new experimental repository formats.  | 
3266  | 
|
| 
2241.1.6
by Martin Pool
 Move Knit repositories into the submodule bzrlib.repofmt.knitrepo and  | 
3267  | 
format_registry.register_lazy(  | 
3268  | 
'Bazaar-NG Knit Repository Format 1',  | 
|
3269  | 
'bzrlib.repofmt.knitrepo',  | 
|
| 
2241.1.11
by Martin Pool
 Get rid of RepositoryFormat*_instance objects. Instead the format  | 
3270  | 
'RepositoryFormatKnit1',  | 
| 
2241.1.6
by Martin Pool
 Move Knit repositories into the submodule bzrlib.repofmt.knitrepo and  | 
3271  | 
    )
 | 
3272  | 
||
| 
2241.1.5
by Martin Pool
 Move KnitFormat2 into repofmt  | 
3273  | 
format_registry.register_lazy(  | 
| 
2255.2.230
by Robert Collins
 Update tree format signatures to mention introducing bzr version.  | 
3274  | 
'Bazaar Knit Repository Format 3 (bzr 0.15)\n',  | 
| 
2100.3.31
by Aaron Bentley
 Merged bzr.dev (17 tests failing)  | 
3275  | 
'bzrlib.repofmt.knitrepo',  | 
3276  | 
'RepositoryFormatKnit3',  | 
|
3277  | 
    )
 | 
|
| 
1534.4.40
by Robert Collins
 Add RepositoryFormats and allow bzrdir.open or create _repository to be used.  | 
3278  | 
|
| 
2996.2.1
by Aaron Bentley
 Add KnitRepositoryFormat4  | 
3279  | 
format_registry.register_lazy(  | 
3280  | 
'Bazaar Knit Repository Format 4 (bzr 1.0)\n',  | 
|
3281  | 
'bzrlib.repofmt.knitrepo',  | 
|
3282  | 
'RepositoryFormatKnit4',  | 
|
3283  | 
    )
 | 
|
3284  | 
||
| 
2939.2.1
by Ian Clatworthy
 use 'knitpack' naming instead of 'experimental' for pack formats  | 
3285  | 
# Pack-based formats. There is one format for pre-subtrees, and one for
 | 
3286  | 
# post-subtrees to allow ease of testing.
 | 
|
| 
3152.2.1
by Robert Collins
 * A new repository format 'development' has been added. This format will  | 
3287  | 
# NOTE: These are experimental in 0.92. Stable in 1.0 and above
 | 
| 
2592.3.22
by Robert Collins
 Add new experimental repository formats.  | 
3288  | 
format_registry.register_lazy(  | 
| 
2939.2.6
by Ian Clatworthy
 more review feedback from lifeless and poolie  | 
3289  | 
'Bazaar pack repository format 1 (needs bzr 0.92)\n',  | 
| 
2592.3.88
by Robert Collins
 Move Pack repository logic to bzrlib.repofmt.pack_repo.  | 
3290  | 
'bzrlib.repofmt.pack_repo',  | 
| 
2592.3.224
by Martin Pool
 Rename GraphKnitRepository etc to KnitPackRepository  | 
3291  | 
'RepositoryFormatKnitPack1',  | 
| 
2592.3.22
by Robert Collins
 Add new experimental repository formats.  | 
3292  | 
    )
 | 
3293  | 
format_registry.register_lazy(  | 
|
| 
2939.2.6
by Ian Clatworthy
 more review feedback from lifeless and poolie  | 
3294  | 
'Bazaar pack repository format 1 with subtree support (needs bzr 0.92)\n',  | 
| 
2592.3.88
by Robert Collins
 Move Pack repository logic to bzrlib.repofmt.pack_repo.  | 
3295  | 
'bzrlib.repofmt.pack_repo',  | 
| 
2592.3.224
by Martin Pool
 Rename GraphKnitRepository etc to KnitPackRepository  | 
3296  | 
'RepositoryFormatKnitPack3',  | 
| 
2592.3.22
by Robert Collins
 Add new experimental repository formats.  | 
3297  | 
    )
 | 
| 
2996.2.11
by Aaron Bentley
 Implement rich-root-pack format ( #164639)  | 
3298  | 
format_registry.register_lazy(  | 
3299  | 
'Bazaar pack repository format 1 with rich root (needs bzr 1.0)\n',  | 
|
3300  | 
'bzrlib.repofmt.pack_repo',  | 
|
3301  | 
'RepositoryFormatKnitPack4',  | 
|
3302  | 
    )
 | 
|
| 
3549.1.5
by Martin Pool
 Add stable format names for stacked branches  | 
3303  | 
format_registry.register_lazy(  | 
3304  | 
'Bazaar RepositoryFormatKnitPack5 (bzr 1.6)\n',  | 
|
3305  | 
'bzrlib.repofmt.pack_repo',  | 
|
3306  | 
'RepositoryFormatKnitPack5',  | 
|
3307  | 
    )
 | 
|
3308  | 
format_registry.register_lazy(  | 
|
| 
3606.10.1
by John Arbash Meinel
 Create a new --1.6-rich-root, deprecate the old one.  | 
3309  | 
'Bazaar RepositoryFormatKnitPack5RichRoot (bzr 1.6.1)\n',  | 
3310  | 
'bzrlib.repofmt.pack_repo',  | 
|
3311  | 
'RepositoryFormatKnitPack5RichRoot',  | 
|
3312  | 
    )
 | 
|
3313  | 
format_registry.register_lazy(  | 
|
| 
3549.1.6
by Martin Pool
 Change stacked-subtree to stacked-rich-root  | 
3314  | 
'Bazaar RepositoryFormatKnitPack5RichRoot (bzr 1.6)\n',  | 
| 
3549.1.5
by Martin Pool
 Add stable format names for stacked branches  | 
3315  | 
'bzrlib.repofmt.pack_repo',  | 
| 
3606.10.1
by John Arbash Meinel
 Create a new --1.6-rich-root, deprecate the old one.  | 
3316  | 
'RepositoryFormatKnitPack5RichRootBroken',  | 
| 
3549.1.5
by Martin Pool
 Add stable format names for stacked branches  | 
3317  | 
    )
 | 
| 
3805.3.1
by John Arbash Meinel
 Add repository 1.9 format, and update the documentation.  | 
3318  | 
format_registry.register_lazy(  | 
3319  | 
'Bazaar RepositoryFormatKnitPack6 (bzr 1.9)\n',  | 
|
3320  | 
'bzrlib.repofmt.pack_repo',  | 
|
3321  | 
'RepositoryFormatKnitPack6',  | 
|
3322  | 
    )
 | 
|
3323  | 
format_registry.register_lazy(  | 
|
3324  | 
'Bazaar RepositoryFormatKnitPack6RichRoot (bzr 1.9)\n',  | 
|
3325  | 
'bzrlib.repofmt.pack_repo',  | 
|
3326  | 
'RepositoryFormatKnitPack6RichRoot',  | 
|
3327  | 
    )
 | 
|
| 
3549.1.5
by Martin Pool
 Add stable format names for stacked branches  | 
3328  | 
|
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
3329  | 
# Development formats.
 | 
| 
4241.6.8
by Robert Collins, John Arbash Meinel, Ian Clatworthy, Vincent Ladeuil
 Add --development6-rich-root, disabling the legacy and unneeded development2 format, and activating the tests for CHK features disabled pending this format. (Robert Collins, John Arbash Meinel, Ian Clatworthy, Vincent Ladeuil)  | 
3330  | 
# Obsolete but kept pending a CHK based subtree format.
 | 
| 
3735.1.1
by Robert Collins
 Add development2 formats using BTree indices.  | 
3331  | 
format_registry.register_lazy(  | 
3332  | 
("Bazaar development format 2 with subtree support "  | 
|
3333  | 
"(needs bzr.dev from before 1.8)\n"),  | 
|
3334  | 
'bzrlib.repofmt.pack_repo',  | 
|
3335  | 
'RepositoryFormatPackDevelopment2Subtree',  | 
|
3336  | 
    )
 | 
|
| 
2592.3.22
by Robert Collins
 Add new experimental repository formats.  | 
3337  | 
|
| 
4241.6.8
by Robert Collins, John Arbash Meinel, Ian Clatworthy, Vincent Ladeuil
 Add --development6-rich-root, disabling the legacy and unneeded development2 format, and activating the tests for CHK features disabled pending this format. (Robert Collins, John Arbash Meinel, Ian Clatworthy, Vincent Ladeuil)  | 
3338  | 
# 1.14->1.16 go below here
 | 
3339  | 
format_registry.register_lazy(  | 
|
3340  | 
    'Bazaar development format - group compression and chk inventory'
 | 
|
3341  | 
' (needs bzr.dev from 1.14)\n',  | 
|
3342  | 
'bzrlib.repofmt.groupcompress_repo',  | 
|
3343  | 
'RepositoryFormatCHK1',  | 
|
| 
3735.31.1
by John Arbash Meinel
 Bring the groupcompress plugin into the brisbane-core branch.  | 
3344  | 
    )
 | 
| 
4241.6.8
by Robert Collins, John Arbash Meinel, Ian Clatworthy, Vincent Ladeuil
 Add --development6-rich-root, disabling the legacy and unneeded development2 format, and activating the tests for CHK features disabled pending this format. (Robert Collins, John Arbash Meinel, Ian Clatworthy, Vincent Ladeuil)  | 
3345  | 
|
| 
4290.1.7
by Jelmer Vernooij
 Add development7-rich-root format that uses the RIO Serializer.  | 
3346  | 
format_registry.register_lazy(  | 
| 
4290.1.12
by Jelmer Vernooij
 Use bencode rather than rio in the new revision serialiszer.  | 
3347  | 
    'Bazaar development format - chk repository with bencode revision '
 | 
| 
4413.3.1
by Jelmer Vernooij
 Mention bzr 1.16 in the dev7 format description.  | 
3348  | 
'serialization (needs bzr.dev from 1.16)\n',  | 
| 
4290.1.7
by Jelmer Vernooij
 Add development7-rich-root format that uses the RIO Serializer.  | 
3349  | 
'bzrlib.repofmt.groupcompress_repo',  | 
3350  | 
'RepositoryFormatCHK2',  | 
|
3351  | 
    )
 | 
|
| 
4428.2.1
by Martin Pool
 Add 2a format  | 
3352  | 
format_registry.register_lazy(  | 
3353  | 
'Bazaar repository format 2a (needs bzr 1.16 or later)\n',  | 
|
3354  | 
'bzrlib.repofmt.groupcompress_repo',  | 
|
3355  | 
'RepositoryFormat2a',  | 
|
3356  | 
    )
 | 
|
| 
4290.1.7
by Jelmer Vernooij
 Add development7-rich-root format that uses the RIO Serializer.  | 
3357  | 
|
| 
1534.4.40
by Robert Collins
 Add RepositoryFormats and allow bzrdir.open or create _repository to be used.  | 
3358  | 
|
| 
1563.2.12
by Robert Collins
 Checkpointing: created InterObject to factor out common inter object worker code, added InterVersionedFile and tests to allow making join work between any versionedfile.  | 
3359  | 
class InterRepository(InterObject):  | 
| 
1534.1.27
by Robert Collins
 Start InterRepository with InterRepository.get.  | 
3360  | 
"""This class represents operations taking place between two repositories.  | 
3361  | 
||
| 
1534.1.33
by Robert Collins
 Move copy_content_into into InterRepository and InterWeaveRepo, and disable the default codepath test as we have optimised paths for all current combinations.  | 
3362  | 
    Its instances have methods like copy_content and fetch, and contain
 | 
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
3363  | 
    references to the source and target repositories these operations can be
 | 
| 
1534.1.27
by Robert Collins
 Start InterRepository with InterRepository.get.  | 
3364  | 
    carried out on.
 | 
3365  | 
||
3366  | 
    Often we will provide convenience methods on 'repository' which carry out
 | 
|
3367  | 
    operations with another repository - they will always forward to
 | 
|
3368  | 
    InterRepository.get(other).method_name(parameters).
 | 
|
3369  | 
    """
 | 
|
3370  | 
||
| 
4144.2.1
by Andrew Bennetts
 Always batch revisions to ask of target when doing _walk_to_common_revisions, rather than special-casing in Inter*Remote*.  | 
3371  | 
_walk_to_common_revisions_batch_size = 50  | 
| 
1910.2.15
by Aaron Bentley
 Back out inter.get changes, make optimizers an ordered list  | 
3372  | 
_optimisers = []  | 
| 
1534.1.28
by Robert Collins
 Allow for optimised InterRepository selection.  | 
3373  | 
"""The available optimised InterRepository types."""  | 
3374  | 
||
| 
4060.1.3
by Robert Collins
 Implement the separate source component for fetch - repository.StreamSource.  | 
3375  | 
    @needs_write_lock
 | 
| 
2387.1.1
by Robert Collins
 Remove the --basis parameter to clone etc. (Robert Collins)  | 
3376  | 
def copy_content(self, revision_id=None):  | 
| 
4060.1.3
by Robert Collins
 Implement the separate source component for fetch - repository.StreamSource.  | 
3377  | 
"""Make a complete copy of the content in self into destination.  | 
3378  | 
||
3379  | 
        This is a destructive operation! Do not use it on existing
 | 
|
3380  | 
        repositories.
 | 
|
3381  | 
||
3382  | 
        :param revision_id: Only copy the content needed to construct
 | 
|
3383  | 
                            revision_id and its parents.
 | 
|
3384  | 
        """
 | 
|
3385  | 
try:  | 
|
3386  | 
self.target.set_make_working_trees(self.source.make_working_trees())  | 
|
3387  | 
except NotImplementedError:  | 
|
3388  | 
            pass
 | 
|
3389  | 
self.target.fetch(self.source, revision_id=revision_id)  | 
|
| 
1910.2.15
by Aaron Bentley
 Back out inter.get changes, make optimizers an ordered list  | 
3390  | 
|
| 
4110.2.23
by Martin Pool
 blackbox hpss test should check repository was remotely locked  | 
3391  | 
    @needs_write_lock
 | 
| 
4070.9.2
by Andrew Bennetts
 Rough prototype of allowing a SearchResult to be passed to fetch, and using that to improve network conversations.  | 
3392  | 
def fetch(self, revision_id=None, pb=None, find_ghosts=False,  | 
3393  | 
fetch_spec=None):  | 
|
| 
1534.1.31
by Robert Collins
 Deprecated fetch.fetch and fetch.greedy_fetch for branch.fetch, and move the Repository.fetch internals to InterRepo and InterWeaveRepo.  | 
3394  | 
"""Fetch the content required to construct revision_id.  | 
3395  | 
||
| 
1910.7.17
by Andrew Bennetts
 Various cosmetic changes.  | 
3396  | 
        The content is copied from self.source to self.target.
 | 
| 
1534.1.31
by Robert Collins
 Deprecated fetch.fetch and fetch.greedy_fetch for branch.fetch, and move the Repository.fetch internals to InterRepo and InterWeaveRepo.  | 
3397  | 
|
3398  | 
        :param revision_id: if None all content is copied, if NULL_REVISION no
 | 
|
3399  | 
                            content is copied.
 | 
|
| 
4961.2.8
by Martin Pool
 RepoFetcher no longer takes a pb  | 
3400  | 
        :param pb: ignored.
 | 
| 
4065.1.1
by Robert Collins
 Change the return value of fetch() to None.  | 
3401  | 
        :return: None.
 | 
| 
1534.1.31
by Robert Collins
 Deprecated fetch.fetch and fetch.greedy_fetch for branch.fetch, and move the Repository.fetch internals to InterRepo and InterWeaveRepo.  | 
3402  | 
        """
 | 
| 
4988.9.3
by Jelmer Vernooij
 Review feedback from Rob.  | 
3403  | 
ui.ui_factory.warn_experimental_format_fetch(self)  | 
| 
4060.1.3
by Robert Collins
 Implement the separate source component for fetch - repository.StreamSource.  | 
3404  | 
from bzrlib.fetch import RepoFetcher  | 
| 
4634.144.1
by Martin Pool
 Give the warning about cross-format fetches earlier on in fetch  | 
3405  | 
        # See <https://launchpad.net/bugs/456077> asking for a warning here
 | 
| 
4634.144.3
by Martin Pool
 Only give cross-format fetch warning when they're actually different  | 
3406  | 
if self.source._format.network_name() != self.target._format.network_name():  | 
| 
4634.144.8
by Martin Pool
 Generalize to ui_factory.show_user_warning  | 
3407  | 
ui.ui_factory.show_user_warning('cross_format_fetch',  | 
3408  | 
from_format=self.source._format,  | 
|
3409  | 
to_format=self.target._format)  | 
|
| 
4060.1.3
by Robert Collins
 Implement the separate source component for fetch - repository.StreamSource.  | 
3410  | 
f = RepoFetcher(to_repository=self.target,  | 
3411  | 
from_repository=self.source,  | 
|
3412  | 
last_revision=revision_id,  | 
|
| 
4070.9.2
by Andrew Bennetts
 Rough prototype of allowing a SearchResult to be passed to fetch, and using that to improve network conversations.  | 
3413  | 
fetch_spec=fetch_spec,  | 
| 
4961.2.8
by Martin Pool
 RepoFetcher no longer takes a pb  | 
3414  | 
find_ghosts=find_ghosts)  | 
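For comparison, the usual caller-side spelling goes through Repository.fetch(), which resolves an InterRepository and lands in the method above. A sketch, with illustrative variable names and an explicit write lock on the target:

target_repo.lock_write()
try:
    target_repo.fetch(source_repo, revision_id=tip_revision_id,
                      find_ghosts=False)
finally:
    target_repo.unlock()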
| 
3172.4.4
by Robert Collins
 Review feedback.  | 
3415  | 
|
3416  | 
def _walk_to_common_revisions(self, revision_ids):  | 
|
3417  | 
"""Walk out from revision_ids in source to revisions target has.  | 
|
3418  | 
||
3419  | 
        :param revision_ids: The start point for the search.
 | 
|
3420  | 
        :return: A set of revision ids.
 | 
|
3421  | 
        """
 | 
|
| 
4144.3.12
by Andrew Bennetts
 Remove target_get_graph and target_get_parent_map attributes from InterRepository; nothing overrides them anymore.  | 
3422  | 
target_graph = self.target.get_graph()  | 
| 
1551.19.41
by Aaron Bentley
 Accelerate no-op pull  | 
3423  | 
revision_ids = frozenset(revision_ids)  | 
| 
3172.4.4
by Robert Collins
 Review feedback.  | 
3424  | 
missing_revs = set()  | 
| 
1551.19.41
by Aaron Bentley
 Accelerate no-op pull  | 
3425  | 
source_graph = self.source.get_graph()  | 
| 
3172.4.4
by Robert Collins
 Review feedback.  | 
3426  | 
        # ensure we don't pay silly lookup costs.
 | 
| 
1551.19.41
by Aaron Bentley
 Accelerate no-op pull  | 
3427  | 
searcher = source_graph._make_breadth_first_searcher(revision_ids)  | 
| 
3172.4.4
by Robert Collins
 Review feedback.  | 
3428  | 
null_set = frozenset([_mod_revision.NULL_REVISION])  | 
| 
3731.4.2
by Andrew Bennetts
 Move ghost check out of the inner loop.  | 
3429  | 
searcher_exhausted = False  | 
| 
3172.4.4
by Robert Collins
 Review feedback.  | 
3430  | 
while True:  | 
| 
3452.2.6
by Andrew Bennetts
 Batch get_parent_map calls in InterPackToRemotePack._walk_to_common_revisions to  | 
3431  | 
next_revs = set()  | 
| 
3731.4.2
by Andrew Bennetts
 Move ghost check out of the inner loop.  | 
3432  | 
ghosts = set()  | 
3433  | 
            # Iterate the searcher until we have enough next_revs
 | 
|
| 
3452.2.6
by Andrew Bennetts
 Batch get_parent_map calls in InterPackToRemotePack._walk_to_common_revisions to  | 
3434  | 
while len(next_revs) < self._walk_to_common_revisions_batch_size:  | 
3435  | 
try:  | 
|
| 
3731.4.2
by Andrew Bennetts
 Move ghost check out of the inner loop.  | 
3436  | 
next_revs_part, ghosts_part = searcher.next_with_ghosts()  | 
| 
3452.2.6
by Andrew Bennetts
 Batch get_parent_map calls in InterPackToRemotePack._walk_to_common_revisions to  | 
3437  | 
next_revs.update(next_revs_part)  | 
| 
3731.4.2
by Andrew Bennetts
 Move ghost check out of the inner loop.  | 
3438  | 
ghosts.update(ghosts_part)  | 
| 
3452.2.6
by Andrew Bennetts
 Batch get_parent_map calls in InterPackToRemotePack._walk_to_common_revisions to  | 
3439  | 
except StopIteration:  | 
| 
3731.4.2
by Andrew Bennetts
 Move ghost check out of the inner loop.  | 
3440  | 
searcher_exhausted = True  | 
| 
3452.2.6
by Andrew Bennetts
 Batch get_parent_map calls in InterPackToRemotePack._walk_to_common_revisions to  | 
3441  | 
                    break
 | 
| 
3731.4.3
by Andrew Bennetts
 Rework ghost checking in _walk_to_common_revisions.  | 
3442  | 
            # If there are ghosts in the source graph, and the caller asked for
 | 
3443  | 
            # them, make sure that they are present in the target.
 | 
|
| 
3731.4.5
by Andrew Bennetts
 Clarify the code slightly.  | 
3444  | 
            # We don't care about other ghosts as we can't fetch them and
 | 
3445  | 
            # haven't been asked to.
 | 
|
3446  | 
ghosts_to_check = set(revision_ids.intersection(ghosts))  | 
|
3447  | 
revs_to_get = set(next_revs).union(ghosts_to_check)  | 
|
3448  | 
if revs_to_get:  | 
|
3449  | 
have_revs = set(target_graph.get_parent_map(revs_to_get))  | 
|
| 
3731.4.2
by Andrew Bennetts
 Move ghost check out of the inner loop.  | 
3450  | 
                # we always have NULL_REVISION present.
 | 
| 
3731.4.5
by Andrew Bennetts
 Clarify the code slightly.  | 
3451  | 
have_revs = have_revs.union(null_set)  | 
3452  | 
                # Check if the target is missing any ghosts we need.
 | 
|
| 
3731.4.3
by Andrew Bennetts
 Rework ghost checking in _walk_to_common_revisions.  | 
3453  | 
ghosts_to_check.difference_update(have_revs)  | 
3454  | 
if ghosts_to_check:  | 
|
3455  | 
                    # One of the caller's revision_ids is a ghost in both the
 | 
|
3456  | 
                    # source and the target.
 | 
|
3457  | 
raise errors.NoSuchRevision(  | 
|
3458  | 
self.source, ghosts_to_check.pop())  | 
|
| 
3731.4.2
by Andrew Bennetts
 Move ghost check out of the inner loop.  | 
3459  | 
missing_revs.update(next_revs - have_revs)  | 
| 
3808.1.4
by John Arbash Meinel
 make _walk_to_common responsible for stopping ancestors  | 
3460  | 
                # Because we may have walked past the original stop point, make
 | 
3461  | 
                # sure everything is stopped
 | 
|
3462  | 
stop_revs = searcher.find_seen_ancestors(have_revs)  | 
|
3463  | 
searcher.stop_searching_any(stop_revs)  | 
|
| 
3731.4.2
by Andrew Bennetts
 Move ghost check out of the inner loop.  | 
3464  | 
if searcher_exhausted:  | 
| 
3172.4.4
by Robert Collins
 Review feedback.  | 
3465  | 
                break
 | 
| 
3184.1.8
by Robert Collins
 * ``InterRepository.missing_revision_ids`` is now deprecated in favour of  | 
3466  | 
return searcher.get_result()  | 
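A much-simplified sketch of the same idea, using plain dicts and sets in place of the graph and searcher objects (illustrative only; the real code also handles ghosts and returns a SearchResult):

def walk_to_common(source_parents, target_revs, start_ids, batch_size=50):
    """Return the revisions reachable from start_ids that target lacks."""
    missing = set()
    pending = set(start_ids)
    while pending:
        # Gather up to batch_size candidates before querying the target,
        # mirroring _walk_to_common_revisions_batch_size above.
        batch = set()
        while pending and len(batch) < batch_size:
            batch.add(pending.pop())
        have = batch & target_revs       # revisions the target already has
        new_missing = batch - have
        missing.update(new_missing)
        for rev_id in new_missing:       # stop walking below common revisions
            pending.update(source_parents.get(rev_id, ()))
    return missing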
| 
3808.1.4
by John Arbash Meinel
 make _walk_to_common responsible for stopping ancestors  | 
3467  | 
|
| 
3184.1.8
by Robert Collins
 * ``InterRepository.missing_revision_ids`` is now deprecated in favour of  | 
3468  | 
    @needs_read_lock
 | 
3469  | 
def search_missing_revision_ids(self, revision_id=None, find_ghosts=True):  | 
|
3470  | 
"""Return the revision ids that source has that target does not.  | 
|
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
3471  | 
|
| 
3184.1.8
by Robert Collins
 * ``InterRepository.missing_revision_ids`` is now deprecated in favour of  | 
3472  | 
        :param revision_id: only return revision ids included by this
 | 
3473  | 
                            revision_id.
 | 
|
3474  | 
        :param find_ghosts: If True find missing revisions in deep history
 | 
|
3475  | 
            rather than just finding the surface difference.
 | 
|
3476  | 
        :return: A bzrlib.graph.SearchResult.
 | 
|
3477  | 
        """
 | 
|
| 
3172.4.1
by Robert Collins
 * Fetching via bzr+ssh will no longer fill ghosts by default (this is  | 
3478  | 
        # stop searching at found target revisions.
 | 
3479  | 
if not find_ghosts and revision_id is not None:  | 
|
| 
3172.4.4
by Robert Collins
 Review feedback.  | 
3480  | 
return self._walk_to_common_revisions([revision_id])  | 
| 
1910.2.15
by Aaron Bentley
 Back out inter.get changes, make optimizers an ordered list  | 
3481  | 
        # generic, possibly worst case, slow code path.
 | 
3482  | 
target_ids = set(self.target.all_revision_ids())  | 
|
3483  | 
if revision_id is not None:  | 
|
3484  | 
source_ids = self.source.get_ancestry(revision_id)  | 
|
| 
3376.2.4
by Martin Pool
 Remove every assert statement from bzrlib!  | 
3485  | 
if source_ids[0] is not None:  | 
3486  | 
raise AssertionError()  | 
|
| 
1910.2.15
by Aaron Bentley
 Back out inter.get changes, make optimizers an ordered list  | 
3487  | 
source_ids.pop(0)  | 
3488  | 
else:  | 
|
3489  | 
source_ids = self.source.all_revision_ids()  | 
|
3490  | 
result_set = set(source_ids).difference(target_ids)  | 
|
| 
3184.1.9
by Robert Collins
 * ``Repository.get_data_stream`` is now deprecated in favour of  | 
3491  | 
return self.source.revision_ids_to_search_result(result_set)  | 
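A sketch of a typical call (variable names are illustrative; the SearchResult is assumed to expose the found revision ids via get_keys()):

inter = InterRepository.get(source_repo, target_repo)
result = inter.search_missing_revision_ids(revision_id=tip_revision_id,
                                           find_ghosts=False)
missing_revision_ids = result.get_keys()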
| 
1910.2.15
by Aaron Bentley
 Back out inter.get changes, make optimizers an ordered list  | 
3492  | 
|
| 
2592.3.28
by Robert Collins
 Make InterKnitOptimiser be used between any same-model knit repository.  | 
3493  | 
    @staticmethod
 | 
3494  | 
def _same_model(source, target):  | 
|
| 
3582.1.2
by Martin Pool
 Default InterRepository.fetch raises IncompatibleRepositories  | 
3495  | 
"""True if source and target have the same data representation.  | 
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
3496  | 
|
| 
3582.1.2
by Martin Pool
 Default InterRepository.fetch raises IncompatibleRepositories  | 
3497  | 
        Note: this is always called on the base class; overriding it in a
 | 
3498  | 
        subclass will have no effect.
 | 
|
3499  | 
        """
 | 
|
3500  | 
try:  | 
|
3501  | 
InterRepository._assert_same_model(source, target)  | 
|
3502  | 
return True  | 
|
3503  | 
except errors.IncompatibleRepositories, e:  | 
|
3504  | 
return False  | 
|
3505  | 
||
3506  | 
    @staticmethod
 | 
|
3507  | 
def _assert_same_model(source, target):  | 
|
3508  | 
"""Raise an exception if two repositories do not use the same model.  | 
|
3509  | 
        """
 | 
|
| 
2592.3.28
by Robert Collins
 Make InterKnitOptimiser be used between any same-model knit repository.  | 
3510  | 
if source.supports_rich_root() != target.supports_rich_root():  | 
| 
3582.1.2
by Martin Pool
 Default InterRepository.fetch raises IncompatibleRepositories  | 
3511  | 
raise errors.IncompatibleRepositories(source, target,  | 
3512  | 
"different rich-root support")  | 
|
| 
2592.3.28
by Robert Collins
 Make InterKnitOptimiser be used between any same-model knit repository.  | 
3513  | 
if source._serializer != target._serializer:  | 
| 
3582.1.2
by Martin Pool
 Default InterRepository.fetch raises IncompatibleRepositories  | 
3514  | 
raise errors.IncompatibleRepositories(source, target,  | 
3515  | 
"different serializers")  | 
|
| 
2592.3.28
by Robert Collins
 Make InterKnitOptimiser be used between any same-model knit repository.  | 
3516  | 
|
| 
1910.2.15
by Aaron Bentley
 Back out inter.get changes, make optimizers an ordered list  | 
3517  | 
|
3518  | 
class InterSameDataRepository(InterRepository):  | 
|
3519  | 
"""Code for converting between repositories that represent the same data.  | 
|
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
3520  | 
|
| 
1910.2.15
by Aaron Bentley
 Back out inter.get changes, make optimizers an ordered list  | 
3521  | 
    Data format and model must match for this to work.
 | 
3522  | 
    """
 | 
|
3523  | 
||
| 
2241.1.6
by Martin Pool
 Move Knit repositories into the submodule bzrlib.repofmt.knitrepo and  | 
3524  | 
    @classmethod
 | 
| 
2241.1.7
by Martin Pool
 rename method  | 
3525  | 
def _get_repo_format_to_test(self):  | 
| 
2814.1.1
by Robert Collins
 * Pushing, pulling and branching branches with subtree references was not  | 
3526  | 
"""Repository format for testing with.  | 
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
3527  | 
|
| 
2814.1.1
by Robert Collins
 * Pushing, pulling and branching branches with subtree references was not  | 
3528  | 
        InterSameData can pull from subtree to subtree and from non-subtree to
 | 
3529  | 
        non-subtree, so we test this with the richest repository format.
 | 
|
3530  | 
        """
 | 
|
3531  | 
from bzrlib.repofmt import knitrepo  | 
|
3532  | 
return knitrepo.RepositoryFormatKnit3()  | 
|
| 
1910.2.15
by Aaron Bentley
 Back out inter.get changes, make optimizers an ordered list  | 
3533  | 
|
| 
1910.2.14
by Aaron Bentley
 Fail when trying to use interrepository on Knit2 and Knit1  | 
3534  | 
    @staticmethod
 | 
3535  | 
def is_compatible(source, target):  | 
|
| 
2592.3.28
by Robert Collins
 Make InterKnitOptimiser be used between any same-model knit repository.  | 
3536  | 
return InterRepository._same_model(source, target)  | 
| 
1910.2.14
by Aaron Bentley
 Fail when trying to use interrepository on Knit2 and Knit1  | 
3537  | 
|
| 
1910.2.15
by Aaron Bentley
 Back out inter.get changes, make optimizers an ordered list  | 
3538  | 
|
| 
2241.1.12
by Martin Pool
 Restore InterWeaveRepo  | 
3539  | 
class InterWeaveRepo(InterSameDataRepository):  | 
| 
2850.3.1
by Robert Collins
 Move various weave specific code out of the base Repository class to weaverepo.py.  | 
3540  | 
"""Optimised code paths between Weave based repositories.  | 
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
3541  | 
|
| 
2850.3.1
by Robert Collins
 Move various weave specific code out of the base Repository class to weaverepo.py.  | 
3542  | 
    This should be in bzrlib/repofmt/weaverepo.py but we have not yet
 | 
3543  | 
    implemented lazy inter-object optimisation.
 | 
|
3544  | 
    """
 | 
|
| 
2241.1.12
by Martin Pool
 Restore InterWeaveRepo  | 
3545  | 
|
| 
2241.1.13
by Martin Pool
 Re-register InterWeaveRepo, fix test integration, add test for it  | 
3546  | 
    @classmethod
 | 
| 
2241.1.12
by Martin Pool
 Restore InterWeaveRepo  | 
3547  | 
def _get_repo_format_to_test(self):  | 
3548  | 
from bzrlib.repofmt import weaverepo  | 
|
3549  | 
return weaverepo.RepositoryFormat7()  | 
|
3550  | 
||
3551  | 
    @staticmethod
 | 
|
3552  | 
def is_compatible(source, target):  | 
|
3553  | 
"""Be compatible with known Weave formats.  | 
|
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
3554  | 
|
| 
2241.1.12
by Martin Pool
 Restore InterWeaveRepo  | 
3555  | 
        We don't test for the stores being of specific types because that
 | 
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
3556  | 
        could lead to confusing results, and there is no need to be
 | 
| 
2241.1.12
by Martin Pool
 Restore InterWeaveRepo  | 
3557  | 
        overly general.
 | 
3558  | 
        """
 | 
|
3559  | 
from bzrlib.repofmt.weaverepo import (  | 
|
3560  | 
RepositoryFormat5,  | 
|
3561  | 
RepositoryFormat6,  | 
|
3562  | 
RepositoryFormat7,  | 
|
3563  | 
                )
 | 
|
3564  | 
try:  | 
|
3565  | 
return (isinstance(source._format, (RepositoryFormat5,  | 
|
3566  | 
RepositoryFormat6,  | 
|
3567  | 
RepositoryFormat7)) and  | 
|
3568  | 
isinstance(target._format, (RepositoryFormat5,  | 
|
3569  | 
RepositoryFormat6,  | 
|
3570  | 
RepositoryFormat7)))  | 
|
3571  | 
except AttributeError:  | 
|
3572  | 
return False  | 
|
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
3573  | 
|
| 
2241.1.12
by Martin Pool
 Restore InterWeaveRepo  | 
3574  | 
    @needs_write_lock
 | 
| 
2387.1.1
by Robert Collins
 Remove the --basis parameter to clone etc. (Robert Collins)  | 
3575  | 
def copy_content(self, revision_id=None):  | 
| 
2241.1.12
by Martin Pool
 Restore InterWeaveRepo  | 
3576  | 
"""See InterRepository.copy_content()."""  | 
3577  | 
        # weave specific optimised path:
 | 
|
| 
2387.1.1
by Robert Collins
 Remove the --basis parameter to clone etc. (Robert Collins)  | 
3578  | 
try:  | 
3579  | 
self.target.set_make_working_trees(self.source.make_working_trees())  | 
|
| 
3349.1.2
by Aaron Bentley
 Change ValueError to RepositoryUpgradeRequired  | 
3580  | 
except (errors.RepositoryUpgradeRequired, NotImplemented):  | 
| 
2387.1.1
by Robert Collins
 Remove the --basis parameter to clone etc. (Robert Collins)  | 
3581  | 
            pass
 | 
3582  | 
        # FIXME do not peek!
 | 
|
| 
3407.2.14
by Martin Pool
 Remove more cases of getting transport via control_files  | 
3583  | 
if self.source._transport.listable():  | 
| 
2387.1.1
by Robert Collins
 Remove the --basis parameter to clone etc. (Robert Collins)  | 
3584  | 
pb = ui.ui_factory.nested_progress_bar()  | 
| 
2241.1.12
by Martin Pool
 Restore InterWeaveRepo  | 
3585  | 
try:  | 
| 
3350.6.4
by Robert Collins
 First cut at pluralised VersionedFiles. Some rather massive API incompatabilities, primarily because of the difficulty of coherence among competing stores.  | 
3586  | 
self.target.texts.insert_record_stream(  | 
3587  | 
self.source.texts.get_record_stream(  | 
|
3588  | 
self.source.texts.keys(), 'topological', False))  | 
|
| 
4665.2.1
by Martin Pool
 Update some progress messages to the standard style  | 
3589  | 
pb.update('Copying inventory', 0, 1)  | 
| 
3350.6.4
by Robert Collins
 First cut at pluralised VersionedFiles. Some rather massive API incompatabilities, primarily because of the difficulty of coherence among competing stores.  | 
3590  | 
self.target.inventories.insert_record_stream(  | 
3591  | 
self.source.inventories.get_record_stream(  | 
|
3592  | 
self.source.inventories.keys(), 'topological', False))  | 
|
3593  | 
self.target.signatures.insert_record_stream(  | 
|
3594  | 
self.source.signatures.get_record_stream(  | 
|
3595  | 
self.source.signatures.keys(),  | 
|
3596  | 
'unordered', True))  | 
|
3597  | 
self.target.revisions.insert_record_stream(  | 
|
3598  | 
self.source.revisions.get_record_stream(  | 
|
3599  | 
self.source.revisions.keys(),  | 
|
3600  | 
'topological', True))  | 
|
| 
2387.1.1
by Robert Collins
 Remove the --basis parameter to clone etc. (Robert Collins)  | 
3601  | 
finally:  | 
3602  | 
pb.finished()  | 
|
3603  | 
else:  | 
|
| 
2241.1.12
by Martin Pool
 Restore InterWeaveRepo  | 
3604  | 
self.target.fetch(self.source, revision_id=revision_id)  | 
3605  | 
||
3606  | 
    @needs_read_lock
 | 
|
| 
3184.1.8
by Robert Collins
 * ``InterRepository.missing_revision_ids`` is now deprecated in favour of  | 
3607  | 
def search_missing_revision_ids(self, revision_id=None, find_ghosts=True):  | 
| 
2241.1.12
by Martin Pool
 Restore InterWeaveRepo  | 
3608  | 
"""See InterRepository.missing_revision_ids()."""  | 
3609  | 
        # we want all revisions to satisfy revision_id in source.
 | 
|
3610  | 
        # but we don't want to stat every file here and there.
 | 
|
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
3611  | 
        # we want, then, all revisions the other repository needs to satisfy revision_id
 | 
| 
2241.1.12
by Martin Pool
 Restore InterWeaveRepo  | 
3612  | 
        # checked, but not those that we have locally.
 | 
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
3613  | 
        # so the first thing is to get a subset of the revisions to
 | 
| 
2241.1.12
by Martin Pool
 Restore InterWeaveRepo  | 
3614  | 
        # satisfy revision_id in source, and then eliminate those that
 | 
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
3615  | 
        # we do already have.
 | 
| 
4031.3.1
by Frank Aspell
 Fixing various typos  | 
3616  | 
        # this is slow on high latency connection to self, but as this
 | 
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
3617  | 
        # disk format scales terribly for push anyway due to rewriting
 | 
| 
2241.1.12
by Martin Pool
 Restore InterWeaveRepo  | 
3618  | 
        # inventory.weave, this is considered acceptable.
 | 
3619  | 
        # - RBC 20060209
 | 
|
3620  | 
if revision_id is not None:  | 
|
3621  | 
source_ids = self.source.get_ancestry(revision_id)  | 
|
| 
3376.2.4
by Martin Pool
 Remove every assert statement from bzrlib!  | 
3622  | 
if source_ids[0] is not None:  | 
3623  | 
raise AssertionError()  | 
|
| 
2241.1.12
by Martin Pool
 Restore InterWeaveRepo  | 
3624  | 
source_ids.pop(0)  | 
3625  | 
else:  | 
|
3626  | 
source_ids = self.source._all_possible_ids()  | 
|
3627  | 
source_ids_set = set(source_ids)  | 
|
3628  | 
        # source_ids is the worst possible case we may need to pull.
 | 
|
3629  | 
        # now we want to filter source_ids against what we actually
 | 
|
3630  | 
        # have in target, but don't try to check for existence where we know
 | 
|
3631  | 
        # we do not have a revision as that would be pointless.
 | 
|
3632  | 
target_ids = set(self.target._all_possible_ids())  | 
|
3633  | 
possibly_present_revisions = target_ids.intersection(source_ids_set)  | 
|
| 
3184.1.8
by Robert Collins
 * ``InterRepository.missing_revision_ids`` is now deprecated in favour of  | 
3634  | 
actually_present_revisions = set(  | 
3635  | 
self.target._eliminate_revisions_not_present(possibly_present_revisions))  | 
|
| 
2241.1.12
by Martin Pool
 Restore InterWeaveRepo  | 
3636  | 
required_revisions = source_ids_set.difference(actually_present_revisions)  | 
3637  | 
if revision_id is not None:  | 
|
3638  | 
            # we used get_ancestry to determine source_ids, so we are assured all
 | 
|
3639  | 
            # revisions referenced are present as they are installed in topological order.
 | 
|
3640  | 
            # and the tip revision was validated by get_ancestry.
 | 
|
| 
3184.1.8
by Robert Collins
 * ``InterRepository.missing_revision_ids`` is now deprecated in favour of  | 
3641  | 
result_set = required_revisions  | 
| 
2241.1.12
by Martin Pool
 Restore InterWeaveRepo  | 
3642  | 
else:  | 
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
3643  | 
            # if we just grabbed the possibly available ids, then
 | 
| 
2241.1.12
by Martin Pool
 Restore InterWeaveRepo  | 
3644  | 
            # we only have an estimate of what's available and need to validate
 | 
3645  | 
            # that against the revision records.
 | 
|
| 
3184.1.8
by Robert Collins
 * ``InterRepository.missing_revision_ids`` is now deprecated in favour of  | 
3646  | 
result_set = set(  | 
3647  | 
self.source._eliminate_revisions_not_present(required_revisions))  | 
|
| 
3184.1.9
by Robert Collins
 * ``Repository.get_data_stream`` is now deprecated in favour of  | 
3648  | 
return self.source.revision_ids_to_search_result(result_set)  | 
| 
2241.1.12
by Martin Pool
 Restore InterWeaveRepo  | 
3649  | 
|
3650  | 
||
| 
1910.2.15
by Aaron Bentley
 Back out inter.get changes, make optimizers an ordered list  | 
3651  | 
class InterKnitRepo(InterSameDataRepository):  | 
| 
1563.2.31
by Robert Collins
 Convert Knit repositories to use knits.  | 
3652  | 
"""Optimised code paths between Knit based repositories."""  | 
3653  | 
||
| 
2241.1.6
by Martin Pool
 Move Knit repositories into the submodule bzrlib.repofmt.knitrepo and  | 
3654  | 
    @classmethod
 | 
| 
2241.1.7
by Martin Pool
 rename method  | 
3655  | 
def _get_repo_format_to_test(self):  | 
| 
2241.1.6
by Martin Pool
 Move Knit repositories into the submodule bzrlib.repofmt.knitrepo and  | 
3656  | 
from bzrlib.repofmt import knitrepo  | 
3657  | 
return knitrepo.RepositoryFormatKnit1()  | 
|
| 
1563.2.31
by Robert Collins
 Convert Knit repositories to use knits.  | 
3658  | 
|
3659  | 
    @staticmethod
 | 
|
3660  | 
def is_compatible(source, target):  | 
|
3661  | 
"""Be compatible with known Knit formats.  | 
|
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
3662  | 
|
| 
1759.2.2
by Jelmer Vernooij
 Revert some of my spelling fixes and fix some typos after review by Aaron.  | 
3663  | 
        We don't test for the stores being of specific types because that
 | 
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
3664  | 
        could lead to confusing results, and there is no need to be
 | 
| 
1563.2.31
by Robert Collins
 Convert Knit repositories to use knits.  | 
3665  | 
        overly general.
 | 
3666  | 
        """
 | 
|
| 
2592.3.28
by Robert Collins
 Make InterKnitOptimiser be used between any same-model knit repository.  | 
3667  | 
from bzrlib.repofmt.knitrepo import RepositoryFormatKnit  | 
| 
1563.2.31
by Robert Collins
 Convert Knit repositories to use knits.  | 
3668  | 
try:  | 
| 
2592.3.28
by Robert Collins
 Make InterKnitOptimiser be used between any same-model knit repository.  | 
3669  | 
are_knits = (isinstance(source._format, RepositoryFormatKnit) and  | 
3670  | 
isinstance(target._format, RepositoryFormatKnit))  | 
|
| 
1563.2.31
by Robert Collins
 Convert Knit repositories to use knits.  | 
3671  | 
except AttributeError:  | 
3672  | 
return False  | 
|
| 
2592.3.28
by Robert Collins
 Make InterKnitOptimiser be used between any same-model knit repository.  | 
3673  | 
return are_knits and InterRepository._same_model(source, target)  | 
| 
1563.2.31
by Robert Collins
 Convert Knit repositories to use knits.  | 
3674  | 
|
3675  | 
    @needs_read_lock
 | 
|
| 
3184.1.8
by Robert Collins
 * ``InterRepository.missing_revision_ids`` is now deprecated in favour of  | 
3676  | 
def search_missing_revision_ids(self, revision_id=None, find_ghosts=True):  | 
| 
1563.2.31
by Robert Collins
 Convert Knit repositories to use knits.  | 
3677  | 
"""See InterRepository.missing_revision_ids()."""  | 
3678  | 
if revision_id is not None:  | 
|
3679  | 
source_ids = self.source.get_ancestry(revision_id)  | 
|
| 
3376.2.4
by Martin Pool
 Remove every assert statement from bzrlib!  | 
3680  | 
if source_ids[0] is not None:  | 
3681  | 
raise AssertionError()  | 
|
| 
1668.1.14
by Martin Pool
 merge olaf - InvalidRevisionId fixes  | 
3682  | 
source_ids.pop(0)  | 
| 
1563.2.31
by Robert Collins
 Convert Knit repositories to use knits.  | 
3683  | 
else:  | 
| 
2850.3.1
by Robert Collins
 Move various weave specific code out of the base Repository class to weaverepo.py.  | 
3684  | 
source_ids = self.source.all_revision_ids()  | 
| 
1563.2.31
by Robert Collins
 Convert Knit repositories to use knits.  | 
3685  | 
source_ids_set = set(source_ids)  | 
3686  | 
        # source_ids is the worst possible case we may need to pull.
 | 
|
3687  | 
        # now we want to filter source_ids against what we actually
 | 
|
| 
1759.2.2
by Jelmer Vernooij
 Revert some of my spelling fixes and fix some typos after review by Aaron.  | 
3688  | 
        # have in target, but don't try to check for existence where we know
 | 
| 
1563.2.31
by Robert Collins
 Convert Knit repositories to use knits.  | 
3689  | 
        # we do not have a revision as that would be pointless.
 | 
| 
2850.3.1
by Robert Collins
 Move various weave specific code out of the base Repository class to weaverepo.py.  | 
3690  | 
target_ids = set(self.target.all_revision_ids())  | 
| 
1563.2.31
by Robert Collins
 Convert Knit repositories to use knits.  | 
3691  | 
possibly_present_revisions = target_ids.intersection(source_ids_set)  | 
| 
3184.1.8
by Robert Collins
 * ``InterRepository.missing_revision_ids`` is now deprecated in favour of  | 
3692  | 
actually_present_revisions = set(  | 
3693  | 
self.target._eliminate_revisions_not_present(possibly_present_revisions))  | 
|
| 
1563.2.31
by Robert Collins
 Convert Knit repositories to use knits.  | 
3694  | 
required_revisions = source_ids_set.difference(actually_present_revisions)  | 
3695  | 
if revision_id is not None:  | 
|
3696  | 
            # we used get_ancestry to determine source_ids, so we are assured all
 | 
|
3697  | 
            # revisions referenced are present as they are installed in topological order.
 | 
|
3698  | 
            # and the tip revision was validated by get_ancestry.
 | 
|
| 
3184.1.8
by Robert Collins
 * ``InterRepository.missing_revision_ids`` is now deprecated in favour of  | 
3699  | 
result_set = required_revisions  | 
| 
1563.2.31
by Robert Collins
 Convert Knit repositories to use knits.  | 
3700  | 
else:  | 
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
3701  | 
            # if we just grabbed the possibly available ids, then
 | 
| 
1563.2.31
by Robert Collins
 Convert Knit repositories to use knits.  | 
3702  | 
            # we only have an estimate of what's available and need to validate
 | 
3703  | 
            # that against the revision records.
 | 
|
| 
3184.1.8
by Robert Collins
 * ``InterRepository.missing_revision_ids`` is now deprecated in favour of  | 
3704  | 
result_set = set(  | 
3705  | 
self.source._eliminate_revisions_not_present(required_revisions))  | 
|
| 
3184.1.9
by Robert Collins
 * ``Repository.get_data_stream`` is now deprecated in favour of  | 
3706  | 
return self.source.revision_ids_to_search_result(result_set)  | 
| 
1563.2.31
by Robert Collins
 Convert Knit repositories to use knits.  | 
3707  | 
|
| 
1910.2.17
by Aaron Bentley
 Get fetching from 1 to 2 under test  | 
3708  | 
|
| 
4476.3.42
by Andrew Bennetts
 Restore InterDifferingSerializer, but only for local source & target.  | 
3709  | 
class InterDifferingSerializer(InterRepository):  | 
3710  | 
||
3711  | 
    @classmethod
 | 
|
3712  | 
def _get_repo_format_to_test(self):  | 
|
3713  | 
return None  | 
|
3714  | 
||
3715  | 
    @staticmethod
 | 
|
3716  | 
def is_compatible(source, target):  | 
|
3717  | 
"""Be compatible with Knit2 source and Knit3 target"""  | 
|
3718  | 
        # This is redundant with format.check_conversion_target(), however that
 | 
|
3719  | 
        # raises an exception, and we just want to say "False" as in we won't
 | 
|
3720  | 
        # support converting between these formats.
 | 
|
| 
4476.3.82
by Andrew Bennetts
 Mention another bug fix in NEWS, and update verb name, comments, and NEWS additions for landing on 1.19 rather than 1.18.  | 
3721  | 
if 'IDS_never' in debug.debug_flags:  | 
| 
4476.3.55
by Andrew Bennetts
 Remove irrelevant XXX, reinstate InterDifferingSerializer, add some debug flags.  | 
3722  | 
return False  | 
| 
4476.3.42
by Andrew Bennetts
 Restore InterDifferingSerializer, but only for local source & target.  | 
3723  | 
if source.supports_rich_root() and not target.supports_rich_root():  | 
3724  | 
return False  | 
|
3725  | 
if (source._format.supports_tree_reference  | 
|
3726  | 
and not target._format.supports_tree_reference):  | 
|
3727  | 
return False  | 
|
| 
4476.3.63
by Andrew Bennetts
 Disable InterDifferingSerializer when the target is a stacked 2a repo, because it fails to copy the necessary chk_bytes when it copies the parent inventories.  | 
3728  | 
if target._fallback_repositories and target._format.supports_chks:  | 
3729  | 
            # IDS doesn't know how to copy CHKs for the parent inventories it
 | 
|
3730  | 
            # adds to stacked repos.
 | 
|
3731  | 
return False  | 
|
| 
4476.3.82
by Andrew Bennetts
 Mention another bug fix in NEWS, and update verb name, comments, and NEWS additions for landing on 1.19 rather than 1.18.  | 
3732  | 
if 'IDS_always' in debug.debug_flags:  | 
| 
4476.3.55
by Andrew Bennetts
 Remove irrelevant XXX, reinstate InterDifferingSerializer, add some debug flags.  | 
3733  | 
return True  | 
| 
4476.3.42
by Andrew Bennetts
 Restore InterDifferingSerializer, but only for local source & target.  | 
3734  | 
        # Only use this code path for local source and target.  IDS does far
 | 
3735  | 
        # too much IO (both bandwidth and roundtrips) over a network.
 | 
|
| 
4476.3.43
by Andrew Bennetts
 Just use transport.base rather than .external_url() to check for local transports.  | 
3736  | 
if not source.bzrdir.transport.base.startswith('file:///'):  | 
| 
4476.3.42
by Andrew Bennetts
 Restore InterDifferingSerializer, but only for local source & target.  | 
3737  | 
return False  | 
| 
4476.3.43
by Andrew Bennetts
 Just use transport.base rather than .external_url() to check for local transports.  | 
3738  | 
if not target.bzrdir.transport.base.startswith('file:///'):  | 
| 
4476.3.42
by Andrew Bennetts
 Restore InterDifferingSerializer, but only for local source & target.  | 
3739  | 
return False  | 
3740  | 
return True  | 
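The IDS_never / IDS_always switches checked above are ordinary bzr debug flags, so they can be toggled per command or programmatically; a sketch (the command-line spelling assumes the usual -D option):

#   bzr push -DIDS_never bzr+ssh://example.org/some-branch
from bzrlib import debug
debug.debug_flags.add('IDS_always')   # force the IDS path, e.g. in tests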
|
3741  | 
||
| 
4627.3.3
by Andrew Bennetts
 Simplify and tidy.  | 
3742  | 
def _get_trees(self, revision_ids, cache):  | 
| 
4627.3.2
by Andrew Bennetts
 Hackish fix for bug #399140. Can probably be faster and cleaner. Needs test.  | 
3743  | 
possible_trees = []  | 
| 
4627.3.3
by Andrew Bennetts
 Simplify and tidy.  | 
3744  | 
for rev_id in revision_ids:  | 
3745  | 
if rev_id in cache:  | 
|
3746  | 
possible_trees.append((rev_id, cache[rev_id]))  | 
|
| 
4627.3.2
by Andrew Bennetts
 Hackish fix for bug #399140. Can probably be faster and cleaner. Needs test.  | 
3747  | 
else:  | 
3748  | 
                # Not cached, but inventory might be present anyway.
 | 
|
3749  | 
try:  | 
|
| 
4627.3.3
by Andrew Bennetts
 Simplify and tidy.  | 
3750  | 
tree = self.source.revision_tree(rev_id)  | 
| 
4627.3.2
by Andrew Bennetts
 Hackish fix for bug #399140. Can probably be faster and cleaner. Needs test.  | 
3751  | 
except errors.NoSuchRevision:  | 
3752  | 
                    # Nope, parent is ghost.
 | 
|
3753  | 
                    pass
 | 
|
3754  | 
else:  | 
|
| 
4627.3.3
by Andrew Bennetts
 Simplify and tidy.  | 
3755  | 
cache[rev_id] = tree  | 
3756  | 
possible_trees.append((rev_id, tree))  | 
|
3757  | 
return possible_trees  | 
|
3758  | 
||
3759  | 
def _get_delta_for_revision(self, tree, parent_ids, possible_trees):  | 
|
3760  | 
"""Get the best delta and base for this revision.  | 
|
3761  | 
||
3762  | 
        :return: (basis_id, delta)
 | 
|
3763  | 
        """
 | 
|
| 
4476.3.42
by Andrew Bennetts
 Restore InterDifferingSerializer, but only for local source & target.  | 
3764  | 
deltas = []  | 
| 
4627.3.2
by Andrew Bennetts
 Hackish fix for bug #399140. Can probably be faster and cleaner. Needs test.  | 
3765  | 
        # Generate deltas against each tree, to find the shortest.
 | 
3766  | 
texts_possibly_new_in_tree = set()  | 
|
| 
4476.3.42
by Andrew Bennetts
 Restore InterDifferingSerializer, but only for local source & target.  | 
3767  | 
for basis_id, basis_tree in possible_trees:  | 
3768  | 
delta = tree.inventory._make_delta(basis_tree.inventory)  | 
|
| 
4627.3.2
by Andrew Bennetts
 Hackish fix for bug #399140. Can probably be faster and cleaner. Needs test.  | 
3769  | 
for old_path, new_path, file_id, new_entry in delta:  | 
3770  | 
if new_path is None:  | 
|
3771  | 
                    # This file_id isn't present in the new rev, so we don't
 | 
|
3772  | 
                    # care about it.
 | 
|
3773  | 
                    continue
 | 
|
3774  | 
if not new_path:  | 
|
3775  | 
                    # Rich roots are handled elsewhere...
 | 
|
3776  | 
                    continue
 | 
|
3777  | 
kind = new_entry.kind  | 
|
3778  | 
if kind != 'directory' and kind != 'file':  | 
|
3779  | 
                    # No text record associated with this inventory entry.
 | 
|
3780  | 
                    continue
 | 
|
3781  | 
                # This is a directory or file that has changed somehow.
 | 
|
3782  | 
texts_possibly_new_in_tree.add((file_id, new_entry.revision))  | 
|
| 
4476.3.42
by Andrew Bennetts
 Restore InterDifferingSerializer, but only for local source & target.  | 
3783  | 
deltas.append((len(delta), basis_id, delta))  | 
3784  | 
deltas.sort()  | 
|
| 
4627.3.3
by Andrew Bennetts
 Simplify and tidy.  | 
3785  | 
return deltas[0][1:]  | 
3786  | 
||
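A short, self-contained illustration of the selection step in _get_delta_for_revision: generate a delta against each candidate basis, sort (length, basis_id, delta) tuples, and keep the smallest. The candidate data here is made up; real deltas are inventory delta tuples rather than strings:

candidate_deltas = {
    'basis-a': ['change-1', 'change-2', 'change-3'],
    'basis-b': ['change-1'],
}
deltas = sorted((len(delta), basis_id, delta)
                for basis_id, delta in candidate_deltas.items())
best_basis, best_delta = deltas[0][1:]
assert best_basis == 'basis-b' and best_delta == ['change-1']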
3787  | 
def _fetch_parent_invs_for_stacking(self, parent_map, cache):  | 
|
3788  | 
"""Find all parent revisions that are absent, but for which the  | 
|
3789  | 
        inventory is present, and copy those inventories.
 | 
|
3790  | 
||
3791  | 
        This is necessary to preserve correctness when the source is stacked
 | 
|
3792  | 
        without fallbacks configured.  (Note that in cases like upgrade the
 | 
|
3793  | 
        source may not have _fallback_repositories even though it is
 | 
|
3794  | 
        stacked.)
 | 
|
| 
4634.7.1
by Robert Collins
 Merge and cherrypick outstanding 2.0 relevant patches from bzr.dev: Up to rev  | 
3795  | 
        """
 | 
| 
4627.3.1
by Andrew Bennetts
 Make IDS fetch present parent invs even when the corresponding revs are absent.  | 
3796  | 
parent_revs = set()  | 
3797  | 
for parents in parent_map.values():  | 
|
3798  | 
parent_revs.update(parents)  | 
|
3799  | 
present_parents = self.source.get_parent_map(parent_revs)  | 
|
3800  | 
absent_parents = set(parent_revs).difference(present_parents)  | 
|
3801  | 
parent_invs_keys_for_stacking = self.source.inventories.get_parent_map(  | 
|
3802  | 
(rev_id,) for rev_id in absent_parents)  | 
|
3803  | 
parent_inv_ids = [key[-1] for key in parent_invs_keys_for_stacking]  | 
|
3804  | 
for parent_tree in self.source.revision_trees(parent_inv_ids):  | 
|
3805  | 
current_revision_id = parent_tree.get_revision_id()  | 
|
3806  | 
parents_parents_keys = parent_invs_keys_for_stacking[  | 
|
3807  | 
(current_revision_id,)]  | 
|
3808  | 
parents_parents = [key[-1] for key in parents_parents_keys]  | 
|
| 
4627.3.2
by Andrew Bennetts
 Hackish fix for bug #399140. Can probably be faster and cleaner. Needs test.  | 
3809  | 
basis_id = _mod_revision.NULL_REVISION  | 
3810  | 
basis_tree = self.source.revision_tree(basis_id)  | 
|
3811  | 
delta = parent_tree.inventory._make_delta(basis_tree.inventory)  | 
|
| 
4627.3.1
by Andrew Bennetts
 Make IDS fetch present parent invs even when the corresponding revs are absent.  | 
3812  | 
self.target.add_inventory_by_delta(  | 
3813  | 
basis_id, delta, current_revision_id, parents_parents)  | 
|
3814  | 
cache[current_revision_id] = parent_tree  | 
|
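The first half of _fetch_parent_invs_for_stacking is plain set arithmetic: collect every parent named in parent_map, then subtract the ones whose revisions are actually present. A hedged sketch with sets standing in for the get_parent_map() results:

parent_map = {'rev-2': ('rev-1',), 'rev-3': ('rev-2', 'rev-x')}
present_parents = {'rev-1', 'rev-2'}   # stand-in for source.get_parent_map()
parent_revs = set()
for parents in parent_map.values():
    parent_revs.update(parents)
absent_parents = parent_revs.difference(present_parents)
assert absent_parents == {'rev-x'}     # only rev-x needs its inventory copied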
| 
4627.3.3
by Andrew Bennetts
 Simplify and tidy.  | 
3815  | 
|
| 
4819.2.4
by John Arbash Meinel
 Factor out the common code into a helper so that smart streaming also benefits.  | 
3816  | 
def _fetch_batch(self, revision_ids, basis_id, cache, a_graph=None):  | 
| 
4627.3.3
by Andrew Bennetts
 Simplify and tidy.  | 
3817  | 
"""Fetch across a few revisions.  | 
3818  | 
||
3819  | 
        :param revision_ids: The revisions to copy
 | 
|
3820  | 
        :param basis_id: The revision_id of a tree that must be in cache, used
 | 
|
3821  | 
            as a basis for delta when no other base is available
 | 
|
3822  | 
        :param cache: A cache of RevisionTrees that we can use.
 | 
|
| 
4819.2.4
by John Arbash Meinel
 Factor out the common code into a helper so that smart streaming also benefits.  | 
3823  | 
        :param a_graph: A Graph object to determine the heads() of the
 | 
3824  | 
            rich-root data stream.
 | 
|
| 
4627.3.3
by Andrew Bennetts
 Simplify and tidy.  | 
3825  | 
        :return: The revision_id of the last converted tree. The RevisionTree
 | 
3826  | 
            for it will be in cache
 | 
|
3827  | 
        """
 | 
|
3828  | 
        # Walk through all revisions; get inventory deltas, copy referenced
 | 
|
3829  | 
        # texts that delta references, insert the delta, revision and
 | 
|
3830  | 
        # signature.
 | 
|
3831  | 
root_keys_to_create = set()  | 
|
3832  | 
text_keys = set()  | 
|
3833  | 
pending_deltas = []  | 
|
3834  | 
pending_revisions = []  | 
|
3835  | 
parent_map = self.source.get_parent_map(revision_ids)  | 
|
3836  | 
self._fetch_parent_invs_for_stacking(parent_map, cache)  | 
|
| 
4849.4.2
by John Arbash Meinel
 Change from being a per-serializer attribute to being a per-repo attribute.  | 
3837  | 
self.source._safe_to_return_from_cache = True  | 
| 
4476.3.42
by Andrew Bennetts
 Restore InterDifferingSerializer, but only for local source & target.  | 
3838  | 
for tree in self.source.revision_trees(revision_ids):  | 
| 
4627.3.2
by Andrew Bennetts
 Hackish fix for bug #399140. Can probably be faster and cleaner. Needs test.  | 
3839  | 
            # Find an inventory delta for this revision.
 | 
3840  | 
            # Find text entries that need to be copied, too.
 | 
|
| 
4476.3.42
by Andrew Bennetts
 Restore InterDifferingSerializer, but only for local source & target.  | 
3841  | 
current_revision_id = tree.get_revision_id()  | 
3842  | 
parent_ids = parent_map.get(current_revision_id, ())  | 
|
| 
4627.3.3
by Andrew Bennetts
 Simplify and tidy.  | 
3843  | 
parent_trees = self._get_trees(parent_ids, cache)  | 
3844  | 
possible_trees = list(parent_trees)  | 
|
3845  | 
if len(possible_trees) == 0:  | 
|
3846  | 
                # There either aren't any parents, or the parents are ghosts,
 | 
|
3847  | 
                # so just use the last converted tree.
 | 
|
3848  | 
possible_trees.append((basis_id, cache[basis_id]))  | 
|
3849  | 
basis_id, delta = self._get_delta_for_revision(tree, parent_ids,  | 
|
3850  | 
possible_trees)  | 
|
| 
4634.17.1
by Robert Collins
 revno 4639 in bzr.dev introduced a bug in the conversion logic for 'IDS'.  | 
3851  | 
revision = self.source.get_revision(current_revision_id)  | 
3852  | 
pending_deltas.append((basis_id, delta,  | 
|
3853  | 
current_revision_id, revision.parent_ids))  | 
|
| 
4476.3.42
by Andrew Bennetts
 Restore InterDifferingSerializer, but only for local source & target.  | 
3854  | 
if self._converting_to_rich_root:  | 
3855  | 
self._revision_id_to_root_id[current_revision_id] = \  | 
|
3856  | 
tree.get_root_id()  | 
|
| 
4627.3.3
by Andrew Bennetts
 Simplify and tidy.  | 
3857  | 
            # Determine which texts are present in this revision but not in
 | 
3858  | 
            # any of the available parents.
 | 
|
3859  | 
texts_possibly_new_in_tree = set()  | 
|
| 
4476.3.42
by Andrew Bennetts
 Restore InterDifferingSerializer, but only for local source & target.  | 
3860  | 
for old_path, new_path, file_id, entry in delta:  | 
| 
4627.3.3
by Andrew Bennetts
 Simplify and tidy.  | 
3861  | 
if new_path is None:  | 
3862  | 
                    # This file_id isn't present in the new rev
 | 
|
3863  | 
                    continue
 | 
|
3864  | 
if not new_path:  | 
|
3865  | 
                    # This is the root
 | 
|
3866  | 
if not self.target.supports_rich_root():  | 
|
3867  | 
                        # The target doesn't support rich root, so we don't
 | 
|
3868  | 
                        # copy
 | 
|
3869  | 
                        continue
 | 
|
3870  | 
if self._converting_to_rich_root:  | 
|
3871  | 
                        # This can't be copied normally, we have to insert
 | 
|
3872  | 
                        # it specially
 | 
|
3873  | 
root_keys_to_create.add((file_id, entry.revision))  | 
|
3874  | 
                        continue
 | 
|
3875  | 
kind = entry.kind  | 
|
3876  | 
texts_possibly_new_in_tree.add((file_id, entry.revision))  | 
|
3877  | 
for basis_id, basis_tree in possible_trees:  | 
|
3878  | 
basis_inv = basis_tree.inventory  | 
|
3879  | 
for file_key in list(texts_possibly_new_in_tree):  | 
|
3880  | 
file_id, file_revision = file_key  | 
|
3881  | 
try:  | 
|
3882  | 
entry = basis_inv[file_id]  | 
|
3883  | 
except errors.NoSuchId:  | 
|
3884  | 
                        continue
 | 
|
3885  | 
if entry.revision == file_revision:  | 
|
3886  | 
texts_possibly_new_in_tree.remove(file_key)  | 
|
3887  | 
text_keys.update(texts_possibly_new_in_tree)  | 
|
| 
4476.3.42
by Andrew Bennetts
 Restore InterDifferingSerializer, but only for local source & target.  | 
3888  | 
pending_revisions.append(revision)  | 
3889  | 
cache[current_revision_id] = tree  | 
|
3890  | 
basis_id = current_revision_id  | 
|
| 
4849.4.2
by John Arbash Meinel
 Change from being a per-serializer attribute to being a per-repo attribute.  | 
3891  | 
self.source._safe_to_return_from_cache = False  | 
| 
4476.3.42
by Andrew Bennetts
 Restore InterDifferingSerializer, but only for local source & target.  | 
3892  | 
        # Copy file texts
 | 
3893  | 
from_texts = self.source.texts  | 
|
3894  | 
to_texts = self.target.texts  | 
|
3895  | 
if root_keys_to_create:  | 
|
| 
4819.2.4
by John Arbash Meinel
 Factor out the common code into a helper so that smart streaming also benefits.  | 
3896  | 
root_stream = _mod_fetch._new_root_data_stream(  | 
| 
4476.3.42
by Andrew Bennetts
 Restore InterDifferingSerializer, but only for local source & target.  | 
3897  | 
root_keys_to_create, self._revision_id_to_root_id, parent_map,  | 
| 
4819.2.4
by John Arbash Meinel
 Factor out the common code into a helper so that smart streaming also benefits.  | 
3898  | 
self.source, graph=a_graph)  | 
| 
4476.3.42
by Andrew Bennetts
 Restore InterDifferingSerializer, but only for local source & target.  | 
3899  | 
to_texts.insert_record_stream(root_stream)  | 
3900  | 
to_texts.insert_record_stream(from_texts.get_record_stream(  | 
|
3901  | 
text_keys, self.target._format._fetch_order,  | 
|
3902  | 
not self.target._format._fetch_uses_deltas))  | 
|
3903  | 
        # insert inventory deltas
 | 
|
3904  | 
for delta in pending_deltas:  | 
|
3905  | 
self.target.add_inventory_by_delta(*delta)  | 
|
3906  | 
if self.target._fallback_repositories:  | 
|
3907  | 
            # Make sure this stacked repository has all the parent inventories
 | 
|
3908  | 
            # for the new revisions that we are about to insert.  We do this
 | 
|
3909  | 
            # before adding the revisions so that no revision is added until
 | 
|
3910  | 
            # all the inventories it may depend on are added.
 | 
|
| 
4597.1.2
by John Arbash Meinel
 Fix the second half of bug #402778  | 
3911  | 
            # Note that this is overzealous, as we may have fetched these in an
 | 
3912  | 
            # earlier batch.
 | 
|
| 
4476.3.42
by Andrew Bennetts
 Restore InterDifferingSerializer, but only for local source & target.  | 
3913  | 
parent_ids = set()  | 
3914  | 
revision_ids = set()  | 
|
3915  | 
for revision in pending_revisions:  | 
|
3916  | 
revision_ids.add(revision.revision_id)  | 
|
3917  | 
parent_ids.update(revision.parent_ids)  | 
|
3918  | 
parent_ids.difference_update(revision_ids)  | 
|
3919  | 
parent_ids.discard(_mod_revision.NULL_REVISION)  | 
|
3920  | 
parent_map = self.source.get_parent_map(parent_ids)  | 
|
| 
4597.1.2
by John Arbash Meinel
 Fix the second half of bug #402778  | 
3921  | 
            # we iterate over parent_map and not parent_ids because we don't
 | 
3922  | 
            # want to try copying any revision which is a ghost
 | 
|
| 
4597.1.9
by John Arbash Meinel
 remove the .keys() call that Robert remarked about.  | 
3923  | 
for parent_tree in self.source.revision_trees(parent_map):  | 
| 
4476.3.42
by Andrew Bennetts
 Restore InterDifferingSerializer, but only for local source & target.  | 
3924  | 
current_revision_id = parent_tree.get_revision_id()  | 
3925  | 
parents_parents = parent_map[current_revision_id]  | 
|
| 
4627.3.3
by Andrew Bennetts
 Simplify and tidy.  | 
3926  | 
possible_trees = self._get_trees(parents_parents, cache)  | 
3927  | 
if len(possible_trees) == 0:  | 
|
3928  | 
                    # There either aren't any parents, or the parents are
 | 
|
3929  | 
                    # ghosts, so just use the last converted tree.
 | 
|
3930  | 
possible_trees.append((basis_id, cache[basis_id]))  | 
|
3931  | 
basis_id, delta = self._get_delta_for_revision(parent_tree,  | 
|
3932  | 
parents_parents, possible_trees)  | 
|
| 
4476.3.42
by Andrew Bennetts
 Restore InterDifferingSerializer, but only for local source & target.  | 
3933  | 
self.target.add_inventory_by_delta(  | 
3934  | 
basis_id, delta, current_revision_id, parents_parents)  | 
|
3935  | 
        # insert signatures and revisions
 | 
|
3936  | 
for revision in pending_revisions:  | 
|
3937  | 
try:  | 
|
3938  | 
signature = self.source.get_signature_text(  | 
|
3939  | 
revision.revision_id)  | 
|
3940  | 
self.target.add_signature_text(revision.revision_id,  | 
|
3941  | 
signature)  | 
|
3942  | 
except errors.NoSuchRevision:  | 
|
3943  | 
                pass
 | 
|
3944  | 
self.target.add_revision(revision.revision_id, revision)  | 
|
3945  | 
return basis_id  | 
|
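One step inside _fetch_batch that is easy to miss is how the stacking branch finds parents lying outside the batch: union all parent ids, remove the batch's own revisions, and drop the null revision. A small stand-alone sketch (the revision ids and the 'null:' marker are illustrative):

pending_revisions = ['rev-2', 'rev-3']
parents_of = {'rev-2': ['rev-1'], 'rev-3': ['rev-2', 'null:']}
parent_ids, revision_ids = set(), set()
for rev in pending_revisions:
    revision_ids.add(rev)
    parent_ids.update(parents_of[rev])
parent_ids.difference_update(revision_ids)   # parents satisfied by the batch
parent_ids.discard('null:')                  # null revision never needs copying
assert parent_ids == {'rev-1'}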
3946  | 
||
3947  | 
def _fetch_all_revisions(self, revision_ids, pb):  | 
|
3948  | 
"""Fetch everything for the list of revisions.  | 
|
3949  | 
||
3950  | 
        :param revision_ids: The list of revisions to fetch. Must be in
 | 
|
3951  | 
            topological order.
 | 
|
| 
4463.1.1
by Martin Pool
 Update docstrings for recent progress changes  | 
3952  | 
        :param pb: A ProgressTask
 | 
| 
4476.3.42
by Andrew Bennetts
 Restore InterDifferingSerializer, but only for local source & target.  | 
3953  | 
        :return: None
 | 
3954  | 
        """
 | 
|
3955  | 
basis_id, basis_tree = self._get_basis(revision_ids[0])  | 
|
3956  | 
batch_size = 100  | 
|
3957  | 
cache = lru_cache.LRUCache(100)  | 
|
3958  | 
cache[basis_id] = basis_tree  | 
|
3959  | 
del basis_tree # We don't want to hang on to it here  | 
|
3960  | 
hints = []  | 
|
| 
4819.2.4
by John Arbash Meinel
 Factor out the common code into a helper so that smart streaming also benefits.  | 
3961  | 
if self._converting_to_rich_root and len(revision_ids) > 100:  | 
3962  | 
a_graph = _mod_fetch._get_rich_root_heads_graph(self.source,  | 
|
3963  | 
revision_ids)  | 
|
| 
4819.2.2
by John Arbash Meinel
 Use a KnownGraph to implement the heads searches.  | 
3964  | 
else:  | 
| 
4819.2.4
by John Arbash Meinel
 Factor out the common code into a helper so that smart streaming also benefits.  | 
3965  | 
a_graph = None  | 
| 
4819.2.2
by John Arbash Meinel
 Use a KnownGraph to implement the heads searches.  | 
3966  | 
|
| 
4476.3.42
by Andrew Bennetts
 Restore InterDifferingSerializer, but only for local source & target.  | 
3967  | 
for offset in range(0, len(revision_ids), batch_size):  | 
3968  | 
self.target.start_write_group()  | 
|
3969  | 
try:  | 
|
3970  | 
pb.update('Transferring revisions', offset,  | 
|
3971  | 
len(revision_ids))  | 
|
3972  | 
batch = revision_ids[offset:offset+batch_size]  | 
|
| 
4819.2.2
by John Arbash Meinel
 Use a KnownGraph to implement the heads searches.  | 
3973  | 
basis_id = self._fetch_batch(batch, basis_id, cache,  | 
| 
4819.2.5
by John Arbash Meinel
 Some small typo fixes.  | 
3974  | 
a_graph=a_graph)  | 
| 
4476.3.42
by Andrew Bennetts
 Restore InterDifferingSerializer, but only for local source & target.  | 
3975  | 
except:  | 
| 
4849.4.2
by John Arbash Meinel
 Change from being a per-serializer attribute to being a per-repo attribute.  | 
3976  | 
self.source._safe_to_return_from_cache = False  | 
| 
4476.3.42
by Andrew Bennetts
 Restore InterDifferingSerializer, but only for local source & target.  | 
3977  | 
self.target.abort_write_group()  | 
3978  | 
                raise
 | 
|
3979  | 
else:  | 
|
3980  | 
hint = self.target.commit_write_group()  | 
|
3981  | 
if hint:  | 
|
3982  | 
hints.extend(hint)  | 
|
3983  | 
if hints and self.target._format.pack_compresses:  | 
|
3984  | 
self.target.pack(hint=hints)  | 
|
3985  | 
pb.update('Transferring revisions', len(revision_ids),  | 
|
3986  | 
len(revision_ids))  | 
|
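_fetch_all_revisions wraps each batch in its own write group, so a failure aborts only the current batch. A generic skeleton of that control flow, with the repository calls abstracted into callables (this sketches the loop shape, not bzrlib's API):

def copy_in_batches(revision_ids, copy_batch, start_group, commit_group,
                    abort_group, batch_size=100):
    hints = []
    for offset in range(0, len(revision_ids), batch_size):
        batch = revision_ids[offset:offset + batch_size]
        start_group()
        try:
            copy_batch(batch)
        except:
            # Mirror the code above: abort the write group, then re-raise.
            abort_group()
            raise
        else:
            hint = commit_group()
            if hint:
                hints.extend(hint)
    return hints

seen = []
copy_in_batches(list('abcde'), seen.append, lambda: None, lambda: [],
                lambda: None, batch_size=2)
assert seen == [['a', 'b'], ['c', 'd'], ['e']]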
3987  | 
||
3988  | 
    @needs_write_lock
 | 
|
3989  | 
def fetch(self, revision_id=None, pb=None, find_ghosts=False,  | 
|
3990  | 
fetch_spec=None):  | 
|
3991  | 
"""See InterRepository.fetch()."""  | 
|
3992  | 
if fetch_spec is not None:  | 
|
3993  | 
raise AssertionError("Not implemented yet...")  | 
|
| 
5117.1.1
by Martin Pool
 merge 2.1.1, including fetch format warning, back to trunk  | 
3994  | 
ui.ui_factory.warn_experimental_format_fetch(self)  | 
| 
4476.3.42
by Andrew Bennetts
 Restore InterDifferingSerializer, but only for local source & target.  | 
3995  | 
if (not self.source.supports_rich_root()  | 
3996  | 
and self.target.supports_rich_root()):  | 
|
3997  | 
self._converting_to_rich_root = True  | 
|
3998  | 
self._revision_id_to_root_id = {}  | 
|
3999  | 
else:  | 
|
4000  | 
self._converting_to_rich_root = False  | 
|
| 
4634.144.7
by Martin Pool
 Also show conversion warning for InterDifferingSerializer  | 
4001  | 
        # See <https://launchpad.net/bugs/456077> asking for a warning here
 | 
4002  | 
if self.source._format.network_name() != self.target._format.network_name():  | 
|
| 
4634.144.8
by Martin Pool
 Generalize to ui_factory.show_user_warning  | 
4003  | 
ui.ui_factory.show_user_warning('cross_format_fetch',  | 
4004  | 
from_format=self.source._format,  | 
|
4005  | 
to_format=self.target._format)  | 
|
| 
4476.3.42
by Andrew Bennetts
 Restore InterDifferingSerializer, but only for local source & target.  | 
4006  | 
revision_ids = self.target.search_missing_revision_ids(self.source,  | 
4007  | 
revision_id, find_ghosts=find_ghosts).get_keys()  | 
|
4008  | 
if not revision_ids:  | 
|
4009  | 
return 0, 0  | 
|
4010  | 
revision_ids = tsort.topo_sort(  | 
|
4011  | 
self.source.get_graph().get_parent_map(revision_ids))  | 
|
4012  | 
if not revision_ids:  | 
|
4013  | 
return 0, 0  | 
|
4014  | 
        # Walk through all revisions; get inventory deltas, copy referenced
 | 
|
4015  | 
        # texts that delta references, insert the delta, revision and
 | 
|
4016  | 
        # signature.
 | 
|
4017  | 
if pb is None:  | 
|
4018  | 
my_pb = ui.ui_factory.nested_progress_bar()  | 
|
4019  | 
pb = my_pb  | 
|
4020  | 
else:  | 
|
4021  | 
symbol_versioning.warn(  | 
|
4022  | 
symbol_versioning.deprecated_in((1, 14, 0))  | 
|
4023  | 
% "pb parameter to fetch()")  | 
|
4024  | 
my_pb = None  | 
|
4025  | 
try:  | 
|
4026  | 
self._fetch_all_revisions(revision_ids, pb)  | 
|
4027  | 
finally:  | 
|
4028  | 
if my_pb is not None:  | 
|
4029  | 
my_pb.finished()  | 
|
4030  | 
return len(revision_ids), 0  | 
|
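fetch() topologically sorts the missing revisions before batching, so every parent is converted before its children. Below is a tiny topological sort over a {revision: parents} map, treating ghosts (parents not in the map) as already satisfied; this is only an illustration, not bzrlib's tsort implementation:

def topo_order(parent_map):
    order, done = [], set()
    pending = sorted(parent_map)
    while pending:
        remaining = []
        for rev in pending:
            if all(p in done or p not in parent_map for p in parent_map[rev]):
                order.append(rev)
                done.add(rev)
            else:
                remaining.append(rev)
        if len(remaining) == len(pending):
            raise ValueError('cycle in parent graph')
        pending = remaining
    return order

assert topo_order({'rev-3': ('rev-2',), 'rev-2': ('rev-1',), 'rev-1': ()}) == \
    ['rev-1', 'rev-2', 'rev-3']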
4031  | 
||
4032  | 
def _get_basis(self, first_revision_id):  | 
|
4033  | 
"""Get a revision and tree which exists in the target.  | 
|
4034  | 
||
4035  | 
        This assumes that first_revision_id is selected for transmission
 | 
|
4036  | 
        because all other ancestors are already present. If we can't find an
 | 
|
4037  | 
        ancestor we fall back to NULL_REVISION since we know that is safe.
 | 
|
4038  | 
||
4039  | 
        :return: (basis_id, basis_tree)
 | 
|
4040  | 
        """
 | 
|
4041  | 
first_rev = self.source.get_revision(first_revision_id)  | 
|
4042  | 
try:  | 
|
4043  | 
basis_id = first_rev.parent_ids[0]  | 
|
| 
4627.3.3
by Andrew Bennetts
 Simplify and tidy.  | 
4044  | 
            # only valid as a basis if the target has it
 | 
4045  | 
self.target.get_revision(basis_id)  | 
|
| 
4476.3.42
by Andrew Bennetts
 Restore InterDifferingSerializer, but only for local source & target.  | 
4046  | 
            # Try to get a basis tree - if its a ghost it will hit the
 | 
4047  | 
            # NoSuchRevision case.
 | 
|
4048  | 
basis_tree = self.source.revision_tree(basis_id)  | 
|
4049  | 
except (IndexError, errors.NoSuchRevision):  | 
|
4050  | 
basis_id = _mod_revision.NULL_REVISION  | 
|
4051  | 
basis_tree = self.source.revision_tree(basis_id)  | 
|
4052  | 
return basis_id, basis_tree  | 
|
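The fallback logic in _get_basis reduces to: use the first parent if the target already has it, otherwise fall back to the null revision. A compact sketch with sets standing in for repositories (choose_basis and the sample ids are hypothetical; 'null:' mirrors the null revision marker):

NULL_REVISION = 'null:'

def choose_basis(first_rev_parent_ids, target_revisions):
    try:
        basis_id = first_rev_parent_ids[0]
        if basis_id not in target_revisions:
            raise LookupError(basis_id)
    except (IndexError, LookupError):
        basis_id = NULL_REVISION
    return basis_id

assert choose_basis(['rev-1'], {'rev-1'}) == 'rev-1'
assert choose_basis(['rev-9'], set()) == NULL_REVISION   # parent not in target
assert choose_basis([], {'rev-1'}) == NULL_REVISION      # no parents at all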
4053  | 
||
4054  | 
||
4055  | 
InterRepository.register_optimiser(InterDifferingSerializer)  | 
|
| 
1910.2.15
by Aaron Bentley
 Back out inter.get changes, make optimizers an ordered list  | 
4056  | 
InterRepository.register_optimiser(InterSameDataRepository)  | 
| 
2241.1.13
by Martin Pool
 Re-register InterWeaveRepo, fix test integration, add test for it  | 
4057  | 
InterRepository.register_optimiser(InterWeaveRepo)  | 
| 
1563.2.31
by Robert Collins
 Convert Knit repositories to use knits.  | 
4058  | 
InterRepository.register_optimiser(InterKnitRepo)  | 
| 
1534.1.31
by Robert Collins
 Deprecated fetch.fetch and fetch.greedy_fetch for branch.fetch, and move the Repository.fetch internals to InterRepo and InterWeaveRepo.  | 
4059  | 
|
4060  | 
||
| 
1556.1.4
by Robert Collins
 Add a new format for what will become knit, and the surrounding logic to upgrade repositories within metadirs, and tests for the same.  | 
4061  | 
class CopyConverter(object):  | 
4062  | 
"""A repository conversion tool which just performs a copy of the content.  | 
|
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
4063  | 
|
| 
1556.1.4
by Robert Collins
 Add a new format for what will become knit, and the surrounding logic to upgrade repositories within metadirs, and tests for the same.  | 
4064  | 
    This is slow but quite reliable.
 | 
4065  | 
    """
 | 
|
4066  | 
||
4067  | 
def __init__(self, target_format):  | 
|
4068  | 
"""Create a CopyConverter.  | 
|
4069  | 
||
4070  | 
        :param target_format: The format the resulting repository should be.
 | 
|
4071  | 
        """
 | 
|
4072  | 
self.target_format = target_format  | 
|
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
4073  | 
|
| 
1556.1.4
by Robert Collins
 Add a new format for what will become knit, and the surrounding logic to upgrade repositories within metadirs, and tests for the same.  | 
4074  | 
def convert(self, repo, pb):  | 
4075  | 
"""Perform the conversion of to_convert, giving feedback via pb.  | 
|
4076  | 
||
4077  | 
        :param repo: The repository to convert.
 | 
|
4078  | 
        :param pb: a progress bar to use for progress information.
 | 
|
4079  | 
        """
 | 
|
| 
4961.2.14
by Martin Pool
 Further pb cleanups  | 
4080  | 
pb = ui.ui_factory.nested_progress_bar()  | 
| 
1556.1.4
by Robert Collins
 Add a new format for what will become knit, and the surrounding logic to upgrade repositories within metadirs, and tests for the same.  | 
4081  | 
self.count = 0  | 
| 
1596.2.22
by Robert Collins
 Fetch changes to use new pb.  | 
4082  | 
self.total = 4  | 
| 
1556.1.4
by Robert Collins
 Add a new format for what will become knit, and the surrounding logic to upgrade repositories within metadirs, and tests for the same.  | 
4083  | 
        # this is only useful with metadir layouts - separated repo content.
 | 
4084  | 
        # trigger an assertion if not such
 | 
|
4085  | 
repo._format.get_format_string()  | 
|
4086  | 
self.repo_dir = repo.bzrdir  | 
|
| 
4961.2.14
by Martin Pool
 Further pb cleanups  | 
4087  | 
pb.update('Moving repository to repository.backup')  | 
| 
1556.1.4
by Robert Collins
 Add a new format for what will become knit, and the surrounding logic to upgrade repositories within metadirs, and tests for the same.  | 
4088  | 
self.repo_dir.transport.move('repository', 'repository.backup')  | 
4089  | 
backup_transport = self.repo_dir.transport.clone('repository.backup')  | 
|
| 
1910.2.12
by Aaron Bentley
 Implement knit repo format 2  | 
4090  | 
repo._format.check_conversion_target(self.target_format)  | 
| 
1556.1.4
by Robert Collins
 Add a new format for what will become knit, and the surrounding logic to upgrade repositories within metadirs, and tests for the same.  | 
4091  | 
self.source_repo = repo._format.open(self.repo_dir,  | 
4092  | 
_found=True,  | 
|
4093  | 
_override_transport=backup_transport)  | 
|
| 
4961.2.14
by Martin Pool
 Further pb cleanups  | 
4094  | 
pb.update('Creating new repository')  | 
| 
1556.1.4
by Robert Collins
 Add a new format for what will become knit, and the surrounding logic to upgrade repositories within metadirs, and tests for the same.  | 
4095  | 
converted = self.target_format.initialize(self.repo_dir,  | 
4096  | 
self.source_repo.is_shared())  | 
|
4097  | 
converted.lock_write()  | 
|
4098  | 
try:  | 
|
| 
4961.2.14
by Martin Pool
 Further pb cleanups  | 
4099  | 
pb.update('Copying content')  | 
| 
1556.1.4
by Robert Collins
 Add a new format for what will become knit, and the surrounding logic to upgrade repositories within metadirs, and tests for the same.  | 
4100  | 
self.source_repo.copy_content_into(converted)  | 
4101  | 
finally:  | 
|
4102  | 
converted.unlock()  | 
|
| 
4961.2.14
by Martin Pool
 Further pb cleanups  | 
4103  | 
pb.update('Deleting old repository content')  | 
| 
1556.1.4
by Robert Collins
 Add a new format for what will become knit, and the surrounding logic to upgrade repositories within metadirs, and tests for the same.  | 
4104  | 
self.repo_dir.transport.delete_tree('repository.backup')  | 
| 
4471.2.2
by Martin Pool
 Deprecate ProgressTask.note  | 
4105  | 
ui.ui_factory.note('repository converted')  | 
| 
4961.2.14
by Martin Pool
 Further pb cleanups  | 
4106  | 
pb.finished()  | 
| 
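CopyConverter's strategy is the classic move-aside-and-copy upgrade: rename the old store to a backup, initialise the new format in place, copy the content across, then delete the backup. A generic sketch of that shape (the paths and the two callables are hypothetical, and locking and progress reporting are omitted):

import os
import shutil

def convert_by_copy(control_dir, initialize_new_repo, copy_content_into):
    backup = os.path.join(control_dir, 'repository.backup')
    # Move the existing store aside so the new format can be created in place.
    os.rename(os.path.join(control_dir, 'repository'), backup)
    new_repo = initialize_new_repo(control_dir)
    copy_content_into(backup, new_repo)
    shutil.rmtree(backup)
    return new_repo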
1596.1.1
by Martin Pool
 Use simple xml unescaping rather than importing xml.sax  | 
4107  | 
|
4108  | 
||
| 
1843.2.4
by Aaron Bentley
 Switch to John Meinel's _unescape_xml implementation  | 
4109  | 
_unescape_map = {  | 
4110  | 
'apos':"'",  | 
|
4111  | 
'quot':'"',  | 
|
4112  | 
'amp':'&',  | 
|
4113  | 
'lt':'<',  | 
|
4114  | 
'gt':'>'  | 
|
4115  | 
}
 | 
|
4116  | 
||
4117  | 
||
4118  | 
def _unescaper(match, _map=_unescape_map):  | 
|
| 
2294.1.2
by John Arbash Meinel
 Track down and add tests that all tree.commit() can handle  | 
4119  | 
code = match.group(1)  | 
4120  | 
try:  | 
|
4121  | 
return _map[code]  | 
|
4122  | 
except KeyError:  | 
|
4123  | 
if not code.startswith('#'):  | 
|
4124  | 
            raise
 | 
|
| 
2294.1.10
by John Arbash Meinel
 Switch all apis over to utf8 file ids. All tests pass  | 
4125  | 
return unichr(int(code[1:])).encode('utf8')  | 
| 
1843.2.4
by Aaron Bentley
 Switch to John Meinel's _unescape_xml implementation  | 
4126  | 
|
4127  | 
||
4128  | 
_unescape_re = None  | 
|
4129  | 
||
4130  | 
||
| 
1596.1.1
by Martin Pool
 Use simple xml unescaping rather than importing xml.sax  | 
4131  | 
def _unescape_xml(data):  | 
| 
1843.2.4
by Aaron Bentley
 Switch to John Meinel's _unescape_xml implementation  | 
4132  | 
"""Unescape predefined XML entities in a string of data."""  | 
4133  | 
global _unescape_re  | 
|
4134  | 
if _unescape_re is None:  | 
|
| 
2120.2.1
by John Arbash Meinel
 Remove tabs from source files, and add a test to keep it that way.  | 
4135  | 
_unescape_re = re.compile('\&([^;]*);')  | 
| 
1843.2.4
by Aaron Bentley
 Switch to John Meinel's _unescape_xml implementation  | 
4136  | 
return _unescape_re.sub(_unescaper, data)  | 
| 
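A usage sketch for the unescaping helpers above. The first line exercises the named entities; the second relies on the numeric-reference branch, which under this module's Python 2 semantics returns UTF-8 encoded bytes:

assert _unescape_xml('a &lt; b &amp;&amp; c &gt; d') == 'a < b && c > d'
assert _unescape_xml('caf&#233;') == 'caf\xc3\xa9'   # unichr(233) encoded as UTF-8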
2745.6.3
by Aaron Bentley
 Implement versionedfile checking for bzr check  | 
4137  | 
|
4138  | 
||
| 
3036.1.3
by Robert Collins
 Privatise VersionedFileChecker.  | 
4139  | 
class _VersionedFileChecker(object):  | 
| 
2745.6.47
by Andrew Bennetts
 Move check_parents out of VersionedFile.  | 
4140  | 
|
| 
4332.3.15
by Robert Collins
 Keep an ancestors dict in check rather than recreating one multiple times.  | 
4141  | 
def __init__(self, repository, text_key_references=None, ancestors=None):  | 
| 
2745.6.47
by Andrew Bennetts
 Move check_parents out of VersionedFile.  | 
4142  | 
self.repository = repository  | 
| 
4145.2.1
by Ian Clatworthy
 faster check  | 
4143  | 
self.text_index = self.repository._generate_text_key_index(  | 
| 
4332.3.15
by Robert Collins
 Keep an ancestors dict in check rather than recreating one multiple times.  | 
4144  | 
text_key_references=text_key_references, ancestors=ancestors)  | 
| 
3943.8.1
by Marius Kruger
 remove all trailing whitespace from bzr source  | 
4145  | 
|
| 
3350.6.4
by Robert Collins
 First cut at pluralised VersionedFiles. Some rather massive API incompatabilities, primarily because of the difficulty of coherence among competing stores.  | 
4146  | 
def calculate_file_version_parents(self, text_key):  | 
| 
2927.2.10
by Andrew Bennetts
 More docstrings, elaborate a comment with an XXX, and remove a little bit of cruft.  | 
4147  | 
"""Calculate the correct parents for a file version according to  | 
4148  | 
        the inventories.
 | 
|
4149  | 
        """
 | 
|
| 
3350.6.4
by Robert Collins
 First cut at pluralised VersionedFiles. Some rather massive API incompatabilities, primarily because of the difficulty of coherence among competing stores.  | 
4150  | 
parent_keys = self.text_index[text_key]  | 
| 
2988.1.8
by Robert Collins
 Change check and reconcile to use the new _generate_text_key_index rather  | 
4151  | 
if parent_keys == [_mod_revision.NULL_REVISION]:  | 
4152  | 
return ()  | 
|
| 
3350.6.4
by Robert Collins
 First cut at pluralised VersionedFiles. Some rather massive API incompatabilities, primarily because of the difficulty of coherence among competing stores.  | 
4153  | 
return tuple(parent_keys)  | 
| 
2745.6.47
by Andrew Bennetts
 Move check_parents out of VersionedFile.  | 
4154  | 
|
| 
3350.6.4
by Robert Collins
 First cut at pluralised VersionedFiles. Some rather massive API incompatabilities, primarily because of the difficulty of coherence among competing stores.  | 
4155  | 
def check_file_version_parents(self, texts, progress_bar=None):  | 
| 
2927.2.10
by Andrew Bennetts
 More docstrings, elaborate a comment with an XXX, and remove a little bit of cruft.  | 
4156  | 
"""Check the parents stored in a versioned file are correct.  | 
4157  | 
||
4158  | 
        It also detects file versions that are not referenced by their
 | 
|
4159  | 
        corresponding revision's inventory.
 | 
|
4160  | 
||
| 
2927.2.14
by Andrew Bennetts
 Tweaks suggested by review.  | 
4161  | 
        :returns: A tuple of (wrong_parents, dangling_file_versions).
 | 
| 
2927.2.10
by Andrew Bennetts
 More docstrings, elaborate a comment with an XXX, and remove a little bit of cruft.  | 
4162  | 
            wrong_parents is a dict mapping {revision_id: (stored_parents,
 | 
4163  | 
            correct_parents)} for each revision_id where the stored parents
 | 
|
| 
2927.2.14
by Andrew Bennetts
 Tweaks suggested by review.  | 
4164  | 
            are not correct.  dangling_file_versions is a set of (file_id,
 | 
4165  | 
            revision_id) tuples for versions that are present in this versioned
 | 
|
4166  | 
            file, but not used by the corresponding inventory.
 | 
|
| 
2927.2.10
by Andrew Bennetts
 More docstrings, elaborate a comment with an XXX, and remove a little bit of cruft.  | 
4167  | 
        """
 | 
| 
4332.3.19
by Robert Collins
 Fix the versioned files checker check_file_version_parents to handle no progress bar being supplied.  | 
4168  | 
local_progress = None  | 
4169  | 
if progress_bar is None:  | 
|
4170  | 
local_progress = ui.ui_factory.nested_progress_bar()  | 
|
4171  | 
progress_bar = local_progress  | 
|
4172  | 
try:  | 
|
4173  | 
return self._check_file_version_parents(texts, progress_bar)  | 
|
4174  | 
finally:  | 
|
4175  | 
if local_progress:  | 
|
4176  | 
local_progress.finished()  | 
|
4177  | 
||
4178  | 
def _check_file_version_parents(self, texts, progress_bar):  | 
|
4179  | 
"""See check_file_version_parents."""  | 
|
| 
2927.2.3
by Andrew Bennetts
 Add fulltexts to avoid bug 155730.  | 
4180  | 
wrong_parents = {}  | 
| 
3350.6.4
by Robert Collins
 First cut at pluralised VersionedFiles. Some rather massive API incompatabilities, primarily because of the difficulty of coherence among competing stores.  | 
4181  | 
self.file_ids = set([file_id for file_id, _ in  | 
4182  | 
self.text_index.iterkeys()])  | 
|
4183  | 
        # text keys is now grouped by file_id
 | 
|
4184  | 
n_versions = len(self.text_index)  | 
|
4185  | 
progress_bar.update('loading text store', 0, n_versions)  | 
|
4186  | 
parent_map = self.repository.texts.get_parent_map(self.text_index)  | 
|
4187  | 
        # On unlistable transports this could well be empty/error...
 | 
|
4188  | 
text_keys = self.repository.texts.keys()  | 
|
4189  | 
unused_keys = frozenset(text_keys) - set(self.text_index)  | 
|
4190  | 
for num, key in enumerate(self.text_index.iterkeys()):  | 
|
| 
4332.3.19
by Robert Collins
 Fix the versioned files checker check_file_version_parents to handle no progress bar being supplied.  | 
4191  | 
progress_bar.update('checking text graph', num, n_versions)  | 
| 
3350.6.4
by Robert Collins
 First cut at pluralised VersionedFiles. Some rather massive API incompatabilities, primarily because of the difficulty of coherence among competing stores.  | 
4192  | 
correct_parents = self.calculate_file_version_parents(key)  | 
| 
2927.2.6
by Andrew Bennetts
 Make some more check tests pass.  | 
4193  | 
try:  | 
| 
3350.6.4
by Robert Collins
 First cut at pluralised VersionedFiles. Some rather massive API incompatabilities, primarily because of the difficulty of coherence among competing stores.  | 
4194  | 
knit_parents = parent_map[key]  | 
4195  | 
except errors.RevisionNotPresent:  | 
|
4196  | 
                # Missing text!
 | 
|
4197  | 
knit_parents = None  | 
|
4198  | 
if correct_parents != knit_parents:  | 
|
4199  | 
wrong_parents[key] = (knit_parents, correct_parents)  | 
|
4200  | 
return wrong_parents, unused_keys  | 
|
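Stripped of progress reporting, the check above compares the parents recorded in the text store with the parents computed from the inventories, and also flags texts nothing references. A plain-dict sketch of that comparison (the keys and parent tuples are invented):

stored_parents = {('f1', 'rev-2'): ('rev-1',), ('f1', 'rev-9'): ()}
correct_parents = {('f1', 'rev-2'): ()}   # per the inventories, no parent
wrong_parents = {}
for key, correct in correct_parents.items():
    if stored_parents.get(key) != correct:
        wrong_parents[key] = (stored_parents.get(key), correct)
unused_keys = set(stored_parents) - set(correct_parents)
assert wrong_parents == {('f1', 'rev-2'): (('rev-1',), ())}
assert unused_keys == {('f1', 'rev-9')}   # a dangling file version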
| 
3287.6.8
by Robert Collins
 Reduce code duplication as per review.  | 
4201  | 
|
4202  | 
||
4203  | 
def _old_get_graph(repository, revision_id):  | 
|
4204  | 
"""DO NOT USE. That is all. I'm serious."""  | 
|
4205  | 
graph = repository.get_graph()  | 
|
4206  | 
revision_graph = dict(((key, value) for key, value in  | 
|
4207  | 
graph.iter_ancestry([revision_id]) if value is not None))  | 
|
4208  | 
return _strip_NULL_ghosts(revision_graph)  | 
|
4209  | 
||
4210  | 
||
4211  | 
def _strip_NULL_ghosts(revision_graph):  | 
|
4212  | 
"""Also don't use this. more compatibility code for unmigrated clients."""  | 
|
4213  | 
    # Filter ghosts, and null:
 | 
|
4214  | 
if _mod_revision.NULL_REVISION in revision_graph:  | 
|
4215  | 
del revision_graph[_mod_revision.NULL_REVISION]  | 
|
4216  | 
for key, parents in revision_graph.items():  | 
|
4217  | 
revision_graph[key] = tuple(parent for parent in parents if parent  | 
|
4218  | 
in revision_graph)  | 
|
4219  | 
return revision_graph  | 
|
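A quick usage sketch for _strip_NULL_ghosts with a hand-built graph, assuming _mod_revision.NULL_REVISION is the 'null:' string: the null entry is dropped and ghost parents disappear from the parent tuples.

graph = {'null:': (), 'rev-1': ('null:',), 'rev-2': ('rev-1', 'ghost-1')}
cleaned = _strip_NULL_ghosts(dict(graph))
assert cleaned == {'rev-1': (), 'rev-2': ('rev-1',)}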
| 
4022.1.1
by Robert Collins
 Refactoring of fetch to have a sender and sink component enabling splitting the logic over a network stream. (Robert Collins, Andrew Bennetts)  | 
4220  | 
|
4221  | 
||
4222  | 
class StreamSink(object):  | 
|
4223  | 
"""An object that can insert a stream into a repository.  | 
|
4224  | 
||
4225  | 
    This interface handles the complexity of reserialising inventories and
 | 
|
4226  | 
    revisions from different formats, and allows unidirectional insertion into
 | 
|
4227  | 
    stacked repositories without looking for the missing basis parents
 | 
|
4228  | 
    beforehand.
 | 
|
4229  | 
    """
 | 
|
4230  | 
||
4231  | 
def __init__(self, target_repo):  | 
|
4232  | 
self.target_repo = target_repo  | 
|
4233  | 
||
| 
4032.3.7
by Robert Collins
 Move write locking and write group responsibilities into the Sink objects themselves, allowing complete avoidance of unnecessary calls when the sink is a RemoteSink.  | 
4234  | 
def insert_stream(self, stream, src_format, resume_tokens):  | 
| 
4022.1.1
by Robert Collins
 Refactoring of fetch to have a sender and sink component enabling splitting the logic over a network stream. (Robert Collins, Andrew Bennetts)  | 
4235  | 
"""Insert a stream's content into the target repository.  | 
4236  | 
||
4237  | 
        :param src_format: a bzr repository format.
 | 
|
4238  | 
||
| 
4032.3.7
by Robert Collins
 Move write locking and write group responsibilities into the Sink objects themselves, allowing complete avoidance of unnecessary calls when the sink is a RemoteSink.  | 
4239  | 
        :return: a list of resume tokens, and an iterable of keys for additional
 | 
4240  | 
            items required before the insertion can be completed.
 | 
|
| 
4022.1.1
by Robert Collins
 Refactoring of fetch to have a sender and sink component enabling splitting the logic over a network stream. (Robert Collins, Andrew Bennetts)  | 
4241  | 
        """
 | 
| 
4032.3.7
by Robert Collins
 Move write locking and write group responsibilities into the Sink objects themselves, allowing complete avoidance of unnecessary calls when the sink is a RemoteSink.  | 
4242  | 
self.target_repo.lock_write()  | 
4243  | 
try:  | 
|
4244  | 
if resume_tokens:  | 
|
4245  | 
self.target_repo.resume_write_group(resume_tokens)  | 
|
| 
4343.3.30
by John Arbash Meinel
 Add tests that when resuming a write group, we start checking if  | 
4246  | 
is_resume = True  | 
| 
4032.3.7
by Robert Collins
 Move write locking and write group responsibilities into the Sink objects themselves, allowing complete avoidance of unnecessary calls when the sink is a RemoteSink.  | 
4247  | 
else:  | 
4248  | 
self.target_repo.start_write_group()  | 
|
| 
4343.3.30
by John Arbash Meinel
 Add tests that when resuming a write group, we start checking if  | 
4249  | 
is_resume = False  | 
| 
4032.3.7
by Robert Collins
 Move write locking and write group responsibilities into the Sink objects themselves, allowing complete avoidance of unnecessary calls when the sink is a RemoteSink.  | 
4250  | 
try:  | 
4251  | 
                # locked_insert_stream performs a commit|suspend.
 | 
|
| 
4343.3.30
by John Arbash Meinel
 Add tests that when resuming a write group, we start checking if  | 
4252  | 
return self._locked_insert_stream(stream, src_format, is_resume)  | 
| 
4032.3.7
by Robert Collins
 Move write locking and write group responsibilities into the Sink objects themselves, allowing complete avoidance of unnecessary calls when the sink is a RemoteSink.  | 
4253  | 
except:  | 
4254  | 
self.target_repo.abort_write_group(suppress_errors=True)  | 
|
4255  | 
                raise
 | 
|
4256  | 
finally:  | 
|
4257  | 
self.target_repo.unlock()  | 
|
4258  | 
||
| 
4343.3.30
by John Arbash Meinel
 Add tests that when resuming a write group, we start checking if  | 
4259  | 
def _locked_insert_stream(self, stream, src_format, is_resume):  | 
| 
4022.1.1
by Robert Collins
 Refactoring of fetch to have a sender and sink component enabling splitting the logic over a network stream. (Robert Collins, Andrew Bennetts)  | 
4260  | 
to_serializer = self.target_repo._format._serializer  | 
4261  | 
src_serializer = src_format._serializer  | 
|
| 
4309.1.7
by Andrew Bennetts
 Fix bug found by acceptance test: we need to flush writes (if we are buffering them) before trying to determine the missing_keys in _locked_insert_stream.  | 
4262  | 
new_pack = None  | 
| 
4187.3.2
by Andrew Bennetts
 Only enable the hack when the serializers match, otherwise we cause ShortReadvErrors.  | 
4263  | 
if to_serializer == src_serializer:  | 
4264  | 
            # If serializers match and the target is a pack repository, set the
 | 
|
4265  | 
            # write cache size on the new pack.  This avoids poor performance
 | 
|
4266  | 
            # on transports where append is unbuffered (such as
 | 
|
| 
4187.3.4
by Andrew Bennetts
 Better docstrings and comments.  | 
4267  | 
            # RemoteTransport).  This is safe to do because nothing should read
 | 
| 
4187.3.2
by Andrew Bennetts
 Only enable the hack when the serializers match, otherwise we cause ShortReadvErrors.  | 
4268  | 
            # back from the target repository while a stream with matching
 | 
4269  | 
            # serialization is being inserted.
 | 
|
| 
4187.3.4
by Andrew Bennetts
 Better docstrings and comments.  | 
4270  | 
            # The exception is that a delta record from the source that should
 | 
4271  | 
            # be a fulltext may need to be expanded by the target (see
 | 
|
4272  | 
            # test_fetch_revisions_with_deltas_into_pack); but we take care to
 | 
|
4273  | 
            # explicitly flush any buffered writes first in that rare case.
 | 
|
| 
4187.3.2
by Andrew Bennetts
 Only enable the hack when the serializers match, otherwise we cause ShortReadvErrors.  | 
4274  | 
try:  | 
4275  | 
new_pack = self.target_repo._pack_collection._new_pack  | 
|
4276  | 
except AttributeError:  | 
|
4277  | 
                # Not a pack repository
 | 
|
4278  | 
                pass
 | 
|
4279  | 
else:  | 
|
4280  | 
new_pack.set_write_cache_size(1024*1024)  | 
|
| 
4022.1.1
by Robert Collins
 Refactoring of fetch to have a sender and sink component enabling splitting the logic over a network stream. (Robert Collins, Andrew Bennetts)  | 
4281  | 
for substream_type, substream in stream:  | 
| 
4476.3.58
by Andrew Bennetts
 Create an LRUCache of basis inventories in _extract_and_insert_inventory_deltas. Speeds up a 1.9->2a fetch of ~7000 bzr.dev revisions by >10%.  | 
4282  | 
if 'stream' in debug.debug_flags:  | 
4283  | 
mutter('inserting substream: %s', substream_type)  | 
|
| 
4022.1.1
by Robert Collins
 Refactoring of fetch to have a sender and sink component enabling splitting the logic over a network stream. (Robert Collins, Andrew Bennetts)  | 
4284  | 
if substream_type == 'texts':  | 
4285  | 
self.target_repo.texts.insert_record_stream(substream)  | 
|
4286  | 
elif substream_type == 'inventories':  | 
|
4287  | 
if src_serializer == to_serializer:  | 
|
4288  | 
self.target_repo.inventories.insert_record_stream(  | 
|
| 
4257.4.4
by Andrew Bennetts
 Remove some cruft.  | 
4289  | 
substream)  | 
| 
4022.1.1
by Robert Collins
 Refactoring of fetch to have a sender and sink component enabling splitting the logic over a network stream. (Robert Collins, Andrew Bennetts)  | 
4290  | 
else:  | 
4291  | 
self._extract_and_insert_inventories(  | 
|
| 
4476.3.49
by Andrew Bennetts
 Start reworking inventory-delta streaming to use a separate substream.  | 
4292  | 
substream, src_serializer)  | 
4293  | 
elif substream_type == 'inventory-deltas':  | 
|
4294  | 
self._extract_and_insert_inventory_deltas(  | 
|
4295  | 
substream, src_serializer)  | 
|
| 
3735.2.98
by John Arbash Meinel
 Merge bzr.dev 4032. Resolve the new streaming fetch.  | 
4296  | 
elif substream_type == 'chk_bytes':  | 
4297  | 
                # XXX: This doesn't support conversions, as it assumes the
 | 
|
4298  | 
                #      conversion was done in the fetch code.
 | 
|
4299  | 
self.target_repo.chk_bytes.insert_record_stream(substream)  | 
|
| 
4022.1.1
by Robert Collins
 Refactoring of fetch to have a sender and sink component enabling splitting the logic over a network stream. (Robert Collins, Andrew Bennetts)  | 
4300  | 
elif substream_type == 'revisions':  | 
4301  | 
                # This may fallback to extract-and-insert more often than
 | 
|
4302  | 
                # required if the serializers are different only in terms of
 | 
|
4303  | 
                # the inventory.
 | 
|
4304  | 
if src_serializer == to_serializer:  | 
|
4305  | 
self.target_repo.revisions.insert_record_stream(  | 
|
4306  | 
substream)  | 
|
4307  | 
else:  | 
|
4308  | 
self._extract_and_insert_revisions(substream,  | 
|
4309  | 
src_serializer)  | 
|
4310  | 
elif substream_type == 'signatures':  | 
|
4311  | 
self.target_repo.signatures.insert_record_stream(substream)  | 
|
4312  | 
else:  | 
|
4313  | 
raise AssertionError('kaboom! %s' % (substream_type,))  | 
|
| 
4309.1.7
by Andrew Bennetts
 Fix bug found by acceptance test: we need to flush writes (if we are buffering them) before trying to determine the missing_keys in _locked_insert_stream.  | 
4314  | 
        # Done inserting data, and the missing_keys calculations will try to
 | 
4315  | 
        # read back from the inserted data, so flush the writes to the new pack
 | 
|
4316  | 
        # (if this is pack format).
 | 
|
4317  | 
if new_pack is not None:  | 
|
4318  | 
new_pack._write_data('', flush=True)  | 
|
| 
4257.4.3
by Andrew Bennetts
 SinkStream.insert_stream checks for missing parent inventories, and reports them as missing_keys.  | 
4319  | 
        # Find all the new revisions (including ones from resume_tokens)
 | 
| 
4343.3.30
by John Arbash Meinel
 Add tests that when resuming a write group, we start checking if  | 
4320  | 
missing_keys = self.target_repo.get_missing_parent_inventories(  | 
4321  | 
check_for_missing_texts=is_resume)  | 
|
| 
4032.3.7
by Robert Collins
 Move write locking and write group responsibilities into the Sink objects themselves, allowing complete avoidance of unnecessary calls when the sink is a RemoteSink.  | 
4322  | 
try:  | 
4323  | 
for prefix, versioned_file in (  | 
|
4324  | 
('texts', self.target_repo.texts),  | 
|
4325  | 
('inventories', self.target_repo.inventories),  | 
|
4326  | 
('revisions', self.target_repo.revisions),  | 
|
4327  | 
('signatures', self.target_repo.signatures),  | 
|
| 
4343.3.3
by John Arbash Meinel
 Be sure to check for missing compression parents in chk_bytes  | 
4328  | 
('chk_bytes', self.target_repo.chk_bytes),  | 
| 
4032.3.7
by Robert Collins
 Move write locking and write group responsibilities into the Sink objects themselves, allowing complete avoidance of unnecessary calls when the sink is a RemoteSink.  | 
4329  | 
                ):
 | 
| 
4343.3.3
by John Arbash Meinel
 Be sure to check for missing compression parents in chk_bytes  | 
4330  | 
if versioned_file is None:  | 
4331  | 
                    continue
 | 
|
| 
4679.8.8
by John Arbash Meinel
 I think I know where things are going wrong, at least with tuple concatenation.  | 
4332  | 
                # TODO: key is often going to be a StaticTuple object
 | 
4333  | 
                #       I don't believe we can define a method by which
 | 
|
4334  | 
                #       (prefix,) + StaticTuple will work, though we could
 | 
|
4335  | 
                #       define a StaticTuple.sq_concat that would allow you to
 | 
|
4336  | 
                #       pass in either a tuple or a StaticTuple as the second
 | 
|
4337  | 
                #       object, so instead we could have:
 | 
|
4338  | 
                #       StaticTuple(prefix) + key here...
 | 
|
| 
4032.3.7
by Robert Collins
 Move write locking and write group responsibilities into the Sink objects themselves, allowing complete avoidance of unnecessary calls when the sink is a RemoteSink.  | 
4339  | 
missing_keys.update((prefix,) + key for key in  | 
4340  | 
versioned_file.get_missing_compression_parent_keys())  | 
|
4341  | 
except NotImplementedError:  | 
|
4342  | 
            # cannot even attempt suspending, and missing would have failed
 | 
|
4343  | 
            # during stream insertion.
 | 
|
4344  | 
missing_keys = set()  | 
|
4345  | 
else:  | 
|
4346  | 
if missing_keys:  | 
|
4347  | 
                # suspend the write group and tell the caller what is
 | 
|
4348  | 
                # missing. We know we can suspend or else we would not have
 | 
|
4349  | 
                # entered this code path. (All repositories that can handle
 | 
|
4350  | 
                # missing keys can handle suspending a write group).
 | 
|
4351  | 
write_group_tokens = self.target_repo.suspend_write_group()  | 
|
4352  | 
return write_group_tokens, missing_keys  | 
|
| 
4431.3.7
by Jonathan Lange
 Cherrypick bzr.dev 4470, resolving conflicts.  | 
4353  | 
hint = self.target_repo.commit_write_group()  | 
4354  | 
if (to_serializer != src_serializer and  | 
|
4355  | 
self.target_repo._format.pack_compresses):  | 
|
4356  | 
self.target_repo.pack(hint=hint)  | 
|
| 
4032.3.7
by Robert Collins
 Move write locking and write group responsibilities into the Sink objects themselves, allowing complete avoidance of unnecessary calls when the sink is a RemoteSink.  | 
4357  | 
return [], set()  | 
| 
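The core of _locked_insert_stream is a dispatch over (substream_type, records) pairs. A minimal, generic sketch of that dispatch, with a handlers dict standing in for the per-type branches above (all names here are hypothetical):

def insert_stream_sketch(stream, handlers):
    for substream_type, substream in stream:
        try:
            handler = handlers[substream_type]
        except KeyError:
            raise AssertionError('unknown substream: %s' % (substream_type,))
        handler(list(substream))

inserted = {}
handlers = {
    'texts': lambda records: inserted.setdefault('texts', records),
    'revisions': lambda records: inserted.setdefault('revisions', records),
}
insert_stream_sketch([('texts', iter(['t1'])), ('revisions', iter(['r1']))],
                     handlers)
assert inserted == {'texts': ['t1'], 'revisions': ['r1']}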
4022.1.1
by Robert Collins
 Refactoring of fetch to have a sender and sink component enabling splitting the logic over a network stream. (Robert Collins, Andrew Bennetts)  | 
4358  | 
|
| 
4476.3.49
by Andrew Bennetts
 Start reworking inventory-delta streaming to use a separate substream.  | 
4359  | 
def _extract_and_insert_inventory_deltas(self, substream, serializer):  | 
4360  | 
target_rich_root = self.target_repo._format.rich_root_data  | 
|
4361  | 
target_tree_refs = self.target_repo._format.supports_tree_reference  | 
|
4362  | 
for record in substream:  | 
|
4363  | 
            # Insert the delta directly
 | 
|
4364  | 
inventory_delta_bytes = record.get_bytes_as('fulltext')  | 
|
| 
4476.3.76
by Andrew Bennetts
 Split out InventoryDeltaDeserializer from InventoryDeltaSerializer.  | 
4365  | 
deserialiser = inventory_delta.InventoryDeltaDeserializer()  | 
| 
4476.3.77
by Andrew Bennetts
 Replace require_flags method with allow_versioned_root and allow_tree_references flags on InventoryDeltaSerializer.__init__, and shift some checking of delta compatibility from StreamSink to InventoryDeltaSerializer.  | 
4366  | 
try:  | 
4367  | 
parse_result = deserialiser.parse_text_bytes(  | 
|
4368  | 
inventory_delta_bytes)  | 
|
| 
4476.3.78
by Andrew Bennetts
 Raise InventoryDeltaErrors, not generic BzrErrors, from inventory_delta.py.  | 
4369  | 
except inventory_delta.IncompatibleInventoryDelta, err:  | 
| 
4476.3.77
by Andrew Bennetts
 Replace require_flags method with allow_versioned_root and allow_tree_references flags on InventoryDeltaSerializer.__init__, and shift some checking of delta compatibility from StreamSink to InventoryDeltaSerializer.  | 
4370  | 
trace.mutter("Incompatible delta: %s", err.msg)  | 
4371  | 
raise errors.IncompatibleRevision(self.target_repo._format)  | 
|
| 
4476.3.49
by Andrew Bennetts
 Start reworking inventory-delta streaming to use a separate substream.  | 
4372  | 
basis_id, new_id, rich_root, tree_refs, inv_delta = parse_result  | 
4373  | 
revision_id = new_id  | 
|
4374  | 
parents = [key[0] for key in record.parents]  | 
|
| 
4476.3.64
by Andrew Bennetts
 Remove inventory_cache from _extract_and_insert_inventory_deltas; it doesn't work with CHKInventories, and I don't trust my earlier measurement that it made a difference.  | 
4375  | 
self.target_repo.add_inventory_by_delta(  | 
4376  | 
basis_id, inv_delta, revision_id, parents)  | 
|
| 
4476.3.49
by Andrew Bennetts
 Start reworking inventory-delta streaming to use a separate substream.  | 
4377  | 
|
| 
4476.3.1
by Andrew Bennetts
 Initial hacking to use inventory deltas for cross-format fetch.  | 
4378  | 
def _extract_and_insert_inventories(self, substream, serializer,  | 
4379  | 
parse_delta=None):  | 
|
| 
4022.1.1
by Robert Collins
 Refactoring of fetch to have a sender and sink component enabling splitting the logic over a network stream. (Robert Collins, Andrew Bennetts)  | 
4380  | 
"""Generate a new inventory versionedfile in target, converting data.  | 
| 
4032.1.1
by John Arbash Meinel
 Merge the removal of all trailing whitespace, and resolve conflicts.  | 
4381  | 
|
| 
4022.1.1
by Robert Collins
 Refactoring of fetch to have a sender and sink component enabling splitting the logic over a network stream. (Robert Collins, Andrew Bennetts)  | 
4382  | 
        The inventory is retrieved from the source, (deserializing it), and
 | 
4383  | 
        stored in the target (reserializing it in a different format).
 | 
|
4384  | 
        """
 | 
|
| 
4476.3.2
by Andrew Bennetts
 Make it possible for a StreamSink for a rich-root/tree-refs repo format to consume inventories without those features.  | 
4385  | 
target_rich_root = self.target_repo._format.rich_root_data  | 
4386  | 
target_tree_refs = self.target_repo._format.supports_tree_reference  | 
|
| 
4022.1.1
by Robert Collins
 Refactoring of fetch to have a sender and sink component enabling splitting the logic over a network stream. (Robert Collins, Andrew Bennetts)  | 
4387  | 
for record in substream:  | 
| 
4476.3.3
by Andrew Bennetts
 Add some comments.  | 
4388  | 
            # It's not a delta, so it must be a fulltext in the source
 | 
4389  | 
            # serializer's format.
 | 
|
| 
4022.1.1
by Robert Collins
 Refactoring of fetch to have a sender and sink component enabling splitting the logic over a network stream. (Robert Collins, Andrew Bennetts)  | 
4390  | 
bytes = record.get_bytes_as('fulltext')  | 
4391  | 
revision_id = record.key[0]  | 
|
4392  | 
inv = serializer.read_inventory_from_string(bytes, revision_id)  | 
|
4393  | 
parents = [key[0] for key in record.parents]  | 
|
4394  | 
self.target_repo.add_inventory(revision_id, inv, parents)  | 
|
| 
4476.3.2
by Andrew Bennetts
 Make it possible for a StreamSink for a rich-root/tree-refs repo format to consume inventories without those features.  | 
4395  | 
            # No need to keep holding this full inv in memory when the rest of
 | 
4396  | 
            # the substream is likely to be all deltas.
 | 
|
| 
4476.3.1
by Andrew Bennetts
 Initial hacking to use inventory deltas for cross-format fetch.  | 
4397  | 
del inv  | 
| 
4022.1.1
by Robert Collins
 Refactoring of fetch to have a sender and sink component enabling splitting the logic over a network stream. (Robert Collins, Andrew Bennetts)  | 
4398  | 
|
4399  | 
def _extract_and_insert_revisions(self, substream, serializer):  | 
|
4400  | 
for record in substream:  | 
|
4401  | 
bytes = record.get_bytes_as('fulltext')  | 
|
4402  | 
revision_id = record.key[0]  | 
|
4403  | 
rev = serializer.read_revision_from_string(bytes)  | 
|
4404  | 
if rev.revision_id != revision_id:  | 
|
4405  | 
raise AssertionError('wtf: %s != %s' % (rev, revision_id))  | 
|
4406  | 
self.target_repo.add_revision(revision_id, rev)  | 
|
4407  | 
||
4408  | 
def finished(self):  | 
|
| 
4053.1.4
by Robert Collins
 Move the fetch control attributes from Repository to RepositoryFormat.  | 
4409  | 
if self.target_repo._format._fetch_reconcile:  | 
| 
4022.1.1
by Robert Collins
 Refactoring of fetch to have a sender and sink component enabling splitting the logic over a network stream. (Robert Collins, Andrew Bennetts)  | 
4410  | 
self.target_repo.reconcile()  | 
4411  | 
||
| 
4060.1.3
by Robert Collins
 Implement the separate source component for fetch - repository.StreamSource.  | 
4412  | 
|
4413  | 
class StreamSource(object):  | 
|
| 
4065.1.2
by Robert Collins
 Merge bzr.dev [fix conflicts with fetch refactoring].  | 
4414  | 
"""A source of a stream for fetching between repositories."""  | 
| 
4060.1.3
by Robert Collins
 Implement the separate source component for fetch - repository.StreamSource.  | 
4415  | 
|
4416  | 
def __init__(self, from_repository, to_format):  | 
|
4417  | 
"""Create a StreamSource streaming from from_repository."""  | 
|
4418  | 
self.from_repository = from_repository  | 
|
4419  | 
self.to_format = to_format  | 
|
4420  | 
||
4421  | 
def delta_on_metadata(self):  | 
|
4422  | 
"""Return True if delta's are permitted on metadata streams.  | 
|
4423  | 
||
4424  | 
        That is on revisions and signatures.
 | 
|
4425  | 
        """
 | 
|
4426  | 
src_serializer = self.from_repository._format._serializer  | 
|
4427  | 
target_serializer = self.to_format._serializer  | 
|
4428  | 
return (self.to_format._fetch_uses_deltas and  | 
|
4429  | 
src_serializer == target_serializer)  | 
|
4430  | 
||
4431  | 
def _fetch_revision_texts(self, revs):  | 
|
4432  | 
        # fetch signatures first and then the revision texts
 | 
|
4433  | 
        # may need to be an InterRevisionStore call here.
 | 
|
4434  | 
from_sf = self.from_repository.signatures  | 
|
4435  | 
        # A missing signature is just skipped.
 | 
|
4436  | 
keys = [(rev_id,) for rev_id in revs]  | 
|
| 
4060.1.4
by Robert Collins
 Streaming fetch from remote servers.  | 
4437  | 
signatures = versionedfile.filter_absent(from_sf.get_record_stream(  | 
| 
4060.1.3
by Robert Collins
 Implement the separate source component for fetch - repository.StreamSource.  | 
4438  | 
keys,  | 
4439  | 
self.to_format._fetch_order,  | 
|
4440  | 
not self.to_format._fetch_uses_deltas))  | 
|
4441  | 
        # If a revision has a delta, this is actually expanded inside the
 | 
|
4442  | 
        # insert_record_stream code now, which is an alternate fix for
 | 
|
4443  | 
        # bug #261339
 | 
|
4444  | 
from_rf = self.from_repository.revisions  | 
|
4445  | 
revisions = from_rf.get_record_stream(  | 
|
4446  | 
keys,  | 
|
4447  | 
self.to_format._fetch_order,  | 
|
4448  | 
not self.delta_on_metadata())  | 
|
4449  | 
return [('signatures', signatures), ('revisions', revisions)]  | 
|
4450  | 
||
4451  | 
def _generate_root_texts(self, revs):  | 
|
| 
4476.3.10
by Andrew Bennetts
 Fix streaming of inventory records in get_stream_for_missing_keys, plus other tweaks.  | 
4452  | 
"""This will be called by get_stream between fetching weave texts and  | 
| 
4060.1.3
by Robert Collins
 Implement the separate source component for fetch - repository.StreamSource.  | 
4453  | 
        fetching the inventory weave.
 | 
4454  | 
        """
 | 
|
4455  | 
if self._rich_root_upgrade():  | 
|
| 
4819.2.4
by John Arbash Meinel
 Factor out the common code into a helper so that smart streaming also benefits.  | 
4456  | 
return _mod_fetch.Inter1and2Helper(  | 
| 
4060.1.3
by Robert Collins
 Implement the separate source component for fetch - repository.StreamSource.  | 
4457  | 
self.from_repository).generate_root_texts(revs)  | 
4458  | 
else:  | 
|
4459  | 
return []  | 
|
4460  | 
||
4461  | 
def get_stream(self, search):  | 
|
4462  | 
phase = 'file'  | 
|
4463  | 
revs = search.get_keys()  | 
|
4464  | 
graph = self.from_repository.get_graph()  | 
|
| 
4577.2.4
by Maarten Bosmans
 Make shure the faster topo_sort function is used where appropriate  | 
4465  | 
revs = tsort.topo_sort(graph.get_parent_map(revs))  | 
| 
4060.1.3
by Robert Collins
 Implement the separate source component for fetch - repository.StreamSource.  | 
4466  | 
data_to_fetch = self.from_repository.item_keys_introduced_by(revs)  | 
4467  | 
text_keys = []  | 
|
4468  | 
for knit_kind, file_id, revisions in data_to_fetch:  | 
|
4469  | 
if knit_kind != phase:  | 
|
4470  | 
phase = knit_kind  | 
|
4471  | 
                # Make a new progress bar for this phase
 | 
|
4472  | 
if knit_kind == "file":  | 
|
4473  | 
                # Accumulate file texts
 | 
|
4474  | 
text_keys.extend([(file_id, revision) for revision in  | 
|
4475  | 
revisions])  | 
|
4476  | 
elif knit_kind == "inventory":  | 
|
4477  | 
                # Now copy the file texts.
 | 
|
4478  | 
from_texts = self.from_repository.texts  | 
|
4479  | 
yield ('texts', from_texts.get_record_stream(  | 
|
4480  | 
text_keys, self.to_format._fetch_order,  | 
|
4481  | 
not self.to_format._fetch_uses_deltas))  | 
|
4482  | 
                # Cause an error if a text occurs after we have done the
 | 
|
4483  | 
                # copy.
 | 
|
4484  | 
text_keys = None  | 
|
4485  | 
                # Before we process the inventory we generate the root
 | 
|
4486  | 
                # texts (if necessary) so that the inventories references
 | 
|
4487  | 
                # will be valid.
 | 
|
4488  | 
for _ in self._generate_root_texts(revs):  | 
|
4489  | 
yield _  | 
|
4490  | 
                # we fetch only the referenced inventories because we do not
 | 
|
4491  | 
                # know for unselected inventories whether all their required
 | 
|
4492  | 
                # texts are present in the other repository - it could be
 | 
|
4493  | 
                # corrupt.
 | 
|
| 
3735.2.128
by Andrew Bennetts
 Merge bzr.dev, resolving fetch.py conflict.  | 
4494  | 
for info in self._get_inventory_stream(revs):  | 
4495  | 
yield info  | 
|
| 
4060.1.3
by Robert Collins
 Implement the separate source component for fetch - repository.StreamSource.  | 
4496  | 
elif knit_kind == "signatures":  | 
4497  | 
                # Nothing to do here; this will be taken care of when
 | 
|
4498  | 
                # _fetch_revision_texts happens.
 | 
|
4499  | 
                pass
 | 
|
4500  | 
elif knit_kind == "revisions":  | 
|
4501  | 
for record in self._fetch_revision_texts(revs):  | 
|
4502  | 
yield record  | 
|
4503  | 
else:  | 
|
4504  | 
raise AssertionError("Unknown knit kind %r" % knit_kind)  | 
|
4505  | 
||
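
    # Note: the (substream name, record stream) pairs yielded by get_stream
    # above are consumed on the target side by StreamSink.insert_stream; see
    # the illustrative wiring sketch after this class.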

    def get_stream_for_missing_keys(self, missing_keys):
        # missing keys can only occur when we are byte copying and not
        # translating (because translation means we don't send
        # unreconstructable deltas ever).
        keys = {}
        keys['texts'] = set()
        keys['revisions'] = set()
        keys['inventories'] = set()
        keys['chk_bytes'] = set()
        keys['signatures'] = set()
        for key in missing_keys:
            keys[key[0]].add(key[1:])
        if len(keys['revisions']):
            # If we allowed copying revisions at this point, we could end up
            # copying a revision without copying its required texts: a
            # violation of the requirements for repository integrity.
            raise AssertionError(
                'cannot copy revisions to fill in missing deltas %s' % (
                    keys['revisions'],))
        for substream_kind, keys in keys.iteritems():
            vf = getattr(self.from_repository, substream_kind)
            if vf is None and keys:
                raise AssertionError(
                    "cannot fill in keys for a versioned file we don't"
                    " have: %s needs %s" % (substream_kind, keys))
            if not keys:
                # No need to stream something we don't have
                continue
            if substream_kind == 'inventories':
                # Some missing keys are genuinely ghosts, filter those out.
                present = self.from_repository.inventories.get_parent_map(keys)
                revs = [key[0] for key in present]
                # Get the inventory stream more-or-less as we do for the
                # original stream; there's no reason to assume that records
                # direct from the source will be suitable for the sink.  (Think
                # e.g. 2a -> 1.9-rich-root).
                for info in self._get_inventory_stream(revs, missing=True):
                    yield info
                continue
            # Ask for full texts always so that we don't need more round trips
            # after this stream.
            # Some of the missing keys are genuinely ghosts, so filter absent
            # records. The Sink is responsible for doing another check to
            # ensure that ghosts don't introduce missing data for future
            # fetches.
            stream = versionedfile.filter_absent(vf.get_record_stream(keys,
                self.to_format._fetch_order, True))
            yield substream_kind, stream

    def inventory_fetch_order(self):
        if self._rich_root_upgrade():
            return 'topological'
        else:
            return self.to_format._fetch_order

    def _rich_root_upgrade(self):
        return (not self.from_repository._format.rich_root_data and
            self.to_format.rich_root_data)

    def _get_inventory_stream(self, revision_ids, missing=False):
        from_format = self.from_repository._format
        if (from_format.supports_chks and self.to_format.supports_chks and
            from_format.network_name() == self.to_format.network_name()):
            raise AssertionError(
                "this case should be handled by GroupCHKStreamSource")
        elif 'forceinvdeltas' in debug.debug_flags:
            return self._get_convertable_inventory_stream(revision_ids,
                delta_versus_null=missing)
        elif from_format.network_name() == self.to_format.network_name():
            # Same format.
            return self._get_simple_inventory_stream(revision_ids,
                missing=missing)
        elif (not from_format.supports_chks and not self.to_format.supports_chks
            and from_format._serializer == self.to_format._serializer):
            # Essentially the same format.
            return self._get_simple_inventory_stream(revision_ids,
                missing=missing)
        else:
            # Any time we switch serializations, we want to use an
            # inventory-delta based approach.
            return self._get_convertable_inventory_stream(revision_ids,
                delta_versus_null=missing)

    def _get_simple_inventory_stream(self, revision_ids, missing=False):
        # NB: This currently reopens the inventory weave in source;
        # using a single stream interface instead would avoid this.
        from_weave = self.from_repository.inventories
        if missing:
            delta_closure = True
        else:
            delta_closure = not self.delta_on_metadata()
        yield ('inventories', from_weave.get_record_stream(
            [(rev_id,) for rev_id in revision_ids],
            self.inventory_fetch_order(), delta_closure))

    def _get_convertable_inventory_stream(self, revision_ids,
                                          delta_versus_null=False):
        # The two formats are sufficiently different that there is no fast
        # path, so we need to send just inventory deltas, which any
        # sufficiently modern client can insert into any repository.
        # The StreamSink code expects to be able to convert on the target, so
        # we need to put bytes-on-the-wire that can be converted.  That means
        # inventory deltas (if the remote is <1.19, RemoteStreamSink will fall
        # back to VFS to insert the deltas).
        yield ('inventory-deltas',
           self._stream_invs_as_deltas(revision_ids,
                delta_versus_null=delta_versus_null))

    def _stream_invs_as_deltas(self, revision_ids, delta_versus_null=False):
        """Return a stream of inventory-deltas for the given rev ids.

        :param revision_ids: The list of inventories to transmit
        :param delta_versus_null: Don't try to find a minimal delta for this
            entry, instead compute the delta versus the NULL_REVISION. This
            effectively streams a complete inventory. Used for stuff like
            filling in missing parents, etc.
        """
        from_repo = self.from_repository
        revision_keys = [(rev_id,) for rev_id in revision_ids]
        parent_map = from_repo.inventories.get_parent_map(revision_keys)
        # XXX: possibly repos could implement a more efficient iter_inv_deltas
        # method...
        inventories = self.from_repository.iter_inventories(
            revision_ids, 'topological')
        format = from_repo._format
        invs_sent_so_far = set([_mod_revision.NULL_REVISION])
        inventory_cache = lru_cache.LRUCache(50)
        null_inventory = from_repo.revision_tree(
            _mod_revision.NULL_REVISION).inventory
        # XXX: ideally the rich-root/tree-refs flags would be per-revision, not
        # per-repo (e.g. streaming a non-rich-root revision out of a rich-root
        # repo back into a non-rich-root repo ought to be allowed)
        serializer = inventory_delta.InventoryDeltaSerializer(
            versioned_root=format.rich_root_data,
            tree_references=format.supports_tree_reference)
        for inv in inventories:
            key = (inv.revision_id,)
            parent_keys = parent_map.get(key, ())
            delta = None
            if not delta_versus_null and parent_keys:
                # The caller did not ask for complete inventories and we have
                # some parents that we can delta against.  Make a delta against
                # each parent so that we can find the smallest.
                parent_ids = [parent_key[0] for parent_key in parent_keys]
                for parent_id in parent_ids:
                    if parent_id not in invs_sent_so_far:
                        # We don't know that the remote side has this basis, so
                        # we can't use it.
                        continue
                    if parent_id == _mod_revision.NULL_REVISION:
                        parent_inv = null_inventory
                    else:
                        parent_inv = inventory_cache.get(parent_id, None)
                        if parent_inv is None:
                            parent_inv = from_repo.get_inventory(parent_id)
                    candidate_delta = inv._make_delta(parent_inv)
                    if (delta is None or
                        len(delta) > len(candidate_delta)):
                        delta = candidate_delta
                        basis_id = parent_id
            if delta is None:
                # Either none of the parents ended up being suitable, or we
                # were asked to delta against NULL
                basis_id = _mod_revision.NULL_REVISION
                delta = inv._make_delta(null_inventory)
            invs_sent_so_far.add(inv.revision_id)
            inventory_cache[inv.revision_id] = inv
            delta_serialized = ''.join(
                serializer.delta_to_lines(basis_id, key[-1], delta))
            yield versionedfile.FulltextContentFactory(
                key, parent_keys, None, delta_serialized)
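

# Illustrative sketch (not part of bzrlib proper): roughly how a fetch wires
# a StreamSource to a StreamSink, loosely following the pattern RepoFetcher
# in bzrlib.fetch uses.  'from_repository', 'to_repository' and 'search' are
# assumed to be supplied by the caller (search being the revision search
# result get_stream expects); it also assumes Repository._get_source /
# _get_sink and StreamSink.insert_stream(stream, src_format, resume_tokens)
# as used elsewhere in this module.
def _example_stream_fetch(from_repository, to_repository, search):
    source = from_repository._get_source(to_repository._format)
    sink = to_repository._get_sink()
    from_format = from_repository._format
    # First pass: stream everything the search selected.
    resume_tokens, missing_keys = sink.insert_stream(
        source.get_stream(search), from_format, [])
    if missing_keys:
        # A stacked or partially-suspended target can report keys it still
        # needs; send them as full texts so no further round trips occur.
        resume_tokens, missing_keys = sink.insert_stream(
            source.get_stream_for_missing_keys(missing_keys),
            from_format, resume_tokens)
    if missing_keys:
        raise AssertionError(
            'second fetch pass failed to satisfy %r' % (missing_keys,))
    sink.finished()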


def _iter_for_revno(repo, partial_history_cache, stop_index=None,
                    stop_revision=None):
    """Extend the partial history to include a given index

    If a stop_index is supplied, stop when that index has been reached.
    If a stop_revision is supplied, stop when that revision is
    encountered.  Otherwise, stop when the beginning of history is
    reached.

    :param stop_index: The index which should be present.  When it is
        present, history extension will stop.
    :param stop_revision: The revision id which should be present.  When
        it is encountered, history extension will stop.
    """
    start_revision = partial_history_cache[-1]
    iterator = repo.iter_reverse_revision_history(start_revision)
    try:
        # skip the last revision in the list
        iterator.next()
        while True:
            if (stop_index is not None and
                len(partial_history_cache) > stop_index):
                break
            if partial_history_cache[-1] == stop_revision:
                break
            revision_id = iterator.next()
            partial_history_cache.append(revision_id)
    except StopIteration:
        # No more history
        return
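

# Illustrative sketch (not part of bzrlib proper): one common use of
# _iter_for_revno is mapping a mainline revno to a revision id given a known
# (revno, revision_id) pair, in the spirit of Repository.get_rev_id_for_revno.
# 'repo', 'known_revno' and 'known_revision_id' are assumed to be supplied by
# the caller; the function name and error handling here are hypothetical.
def _example_rev_id_for_revno(repo, revno, known_revno, known_revision_id):
    distance_from_known = known_revno - revno
    if distance_from_known < 0:
        raise ValueError('revno %d is newer than known revno %d'
                         % (revno, known_revno))
    # Walk back distance_from_known steps along left-hand (mainline) history;
    # index 0 of the cache is the known revision, index N is N steps back.
    partial_history = [known_revision_id]
    _iter_for_revno(repo, partial_history, stop_index=distance_from_known)
    if len(partial_history) <= distance_from_known:
        # History ran out before we walked far enough back.
        raise ValueError('history is shorter than revno %d implies' % revno)
    return partial_history[distance_from_known]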