# Copyright (C) 2006, 2007, 2008 Canonical Ltd
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

"""DirState objects record the state of a directory and its bzr metadata.

Pseudo EBNF grammar for the state file. Fields are separated by NULLs, and
lines by NL. The field delimiters are omitted in the grammar, line delimiters
are not - this is done for clarity of reading. All string data is in utf8.

MINIKIND = "f" | "d" | "l" | "a" | "r" | "t";
WHOLE_NUMBER = {digit}, digit;
REVISION_ID = a non-empty utf8 string;

dirstate format = header line, full checksum, row count, parent details,
 ghost_details, entries;
header line = "#bazaar dirstate flat format 3", NL;
full checksum = "crc32: ", ["-"], WHOLE_NUMBER, NL;
row count = "num_entries: ", WHOLE_NUMBER, NL;
parent_details = WHOLE NUMBER, {REVISION_ID}*, NL;
ghost_details = WHOLE NUMBER, {REVISION_ID}*, NL;
entry = entry_key, current_entry_details, {parent_entry_details};
entry_key = dirname, basename, fileid;
current_entry_details = common_entry_details, working_entry_details;
parent_entry_details = common_entry_details, history_entry_details;
common_entry_details = MINIKIND, fingerprint, size, executable
working_entry_details = packed_stat
history_entry_details = REVISION_ID;

fingerprint = a nonempty utf8 sequence with meaning defined by minikind.

Given this definition, the following is useful to know:
entry (aka row) - all the data for a given key.
entry[0]: The key (dirname, basename, fileid)
entry[1]: The tree(s) data for this path and id combination.
entry[1][0]: The current tree
entry[1][1]: The second tree

For an entry for a tree, we have (using tree 0 - current tree) to demonstrate:
entry[1][0][0]: minikind
entry[1][0][1]: fingerprint
entry[1][0][3]: executable
entry[1][0][4]: packed_stat
entry[1][1][4]: revision_id
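
As an illustration only (the file id, hash and stat below are invented), an
in-memory entry for an unmodified file with a single parent tree could look
like:

  (('', 'foo', 'foo-file-id'),                        # entry[0], the key
   [('f', '<sha1 hex>', 12, False, '<packed stat>'),  # entry[1][0], tree 0
    ('f', '<sha1 hex>', 12, False, '<revision id>')]) # entry[1][1], tree 1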

There may be multiple rows at the root, one per id present in the root, so the
in memory root row is now:
self._dirblocks[0] -> ('', [entry ...]),
and the entries in there are
entries[0][2]: file_id
entries[1][0]: The tree data for the current tree for this fileid at /

'r' is a relocated entry: This path is not present in this tree with this id,
 but the id can be found at another location. The fingerprint is used to
 point to the target location.
'a' is an absent entry: In that tree the id is not present at this path.
'd' is a directory entry: This path in this tree is a directory with the
 current file id. There is no fingerprint for directories.
'f' is a file entry: As for directory, but it's a file. The fingerprint is the
 sha1 value of the file's canonical form, i.e. after any read filters have
 been applied to the convenience form stored in the working tree.
'l' is a symlink entry: As for directory, but a symlink. The fingerprint is the
 link target.
't' is a reference to a nested subtree; the fingerprint is the referenced
 revision.

The entries on disk and in memory are ordered according to the following keys:

    directory, as a list of components
    filename
    file-id

--- Format 1 had the following different definition: ---
rows = dirname, NULL, basename, NULL, MINIKIND, NULL, fileid_utf8, NULL,
    WHOLE NUMBER (* size *), NULL, packed stat, NULL, sha1|symlink target,
PARENT ROW = NULL, revision_utf8, NULL, MINIKIND, NULL, dirname, NULL,
    basename, NULL, WHOLE NUMBER (* size *), NULL, "y" | "n", NULL,

PARENT ROWs are emitted for every parent that is not in the ghosts details
line. That is, if the parents are foo, bar, baz, and the ghosts are bar, then
each row will have a PARENT ROW for foo and baz, but not for bar.

In any tree, a kind of 'moved' indicates that the fingerprint field
(which we treat as opaque data specific to the 'kind' anyway) has the
details for the id of this row in that tree.

I'm strongly tempted to add an id->path index as well, but I think that
where we need an id->path mapping we also usually read the whole file, so
I'm going to skip that for the moment, as we have the ability to locate
via bisect any path in any tree, and if we lookup things by path, we can
accumulate an id->path mapping as we go, which will tend to match what we
looked for.

I plan to implement this asap, so please speak up now to alter/tweak the
design - and once we stabilise on this, I'll update the wiki page for it.

The rationale for all this is that we want fast operations for the
common case (diff/status/commit/merge on all files) and extremely fast
operations for the less common, but still frequent, case of status/diff/commit
on specific files. Operations on specific files involve a scan for all
the children of a path, *in every involved tree*, which the current
format did not accommodate.

1) Fast end to end use for bzr's top 5 use cases. (commit/diff/status/merge/???)
2) fall back to the current object model as needed.
3) scale usably to the largest trees known today - say 50K entries. (mozilla
   is an example of this)

Eventually reuse dirstate objects across locks IFF the dirstate file has not
been modified, but this will require that we flush/ignore cached stat-hit data
because we won't want to restat all files on disk just because a lock was
acquired, yet we cannot trust the data after the previous lock was released.

Memory representation:
 vector of all directories, and vector of the children ?
   root_entry = (direntry for root, [parent_direntries_for_root]),
     ('', ['data for achild', 'data for bchild', 'data for cchild'])
     ('dir', ['achild', 'cchild', 'echild'])
   - single bisect to find N subtrees from a path spec
   - in-order for serialisation - this is 'dirblock' grouping.
   - insertion of a file '/a' affects only the '/' child-vector, that is, to
     insert 10K elements from scratch does not generate O(N^2) memmoves of a
     single vector; rather each individual child-vector is affected, and these
     tend to stay at a manageable size. Will scale badly on trees with 10K
     entries in a single directory. Compare with Inventory.InventoryDirectory
     which has a dictionary for the children. No bisect capability, can only
     probe for exact matches, or grab all elements and sort.
   - What's the risk of error here? Once we have the base format being
     processed we should have a net win regardless of optimality. So we are
     going to go with what seems reasonable.

Maybe we should do a test profile of the core structure - 10K simulated
searches/lookups/etc?

Objects for each row?
The lifetime of Dirstate objects is currently per lock, but see above for
possible extensions. The lifetime of a row from a dirstate is expected to be
very short in the optimistic case, which we are optimising for. For instance,
subtree status will determine from analysis of the disk data what rows need to
be examined at all, and will be able to determine from a single row whether
that file has altered or not, so we are aiming to process tens of thousands of
entries each second within the dirstate context, before exposing anything to
the larger codebase. This suggests we want the time for a single file
comparison to be < 0.1 milliseconds. That would give us 10000 paths per second
processed, and to scale to 100 thousand we'll need another order of magnitude
of improvement to do that. Now, as the lifetime for all unchanged entries is
the time to parse, stat the file on disk, and then immediately discard, the
overhead of object creation becomes a significant cost.

Figures: Creating a tuple from 3 elements was profiled at 0.0625
microseconds, whereas creating an object which is subclassed from tuple was
0.500 microseconds, and creating an object with 3 elements and slots was 3
microseconds. 0.1 milliseconds is 100 microseconds, and ideally we'll get
down to 10 microseconds for the total processing - having 33% of that be object
creation is a huge overhead. There is a potential cost in using tuples within
each row which is that the conditional code to do comparisons may be slower
than method invocation, but method invocation is known to be slow due to stack
frame creation, so avoiding methods in these tight inner loops is unfortunately
desirable. We can consider a pyrex version of this with objects in future if
needed.
"""

import bisect
import binascii
import os
import stat
from stat import S_IEXEC
import struct
import sys
import time

from bzrlib import (
    cache_utf8,
    debug,
    errors,
    osutils,
    trace,
    )


# This is the Windows equivalent of ENOTDIR
# It is defined in pywin32.winerror, but we don't want a strong dependency for
# just an error code.
ERROR_PATH_NOT_FOUND = 3
ERROR_DIRECTORY = 267

if not getattr(struct, '_compile', None):
    # Cannot pre-compile the dirstate pack_stat
    def pack_stat(st, _encode=binascii.b2a_base64, _pack=struct.pack):
        """Convert stat values into a packed representation."""
        return _encode(_pack('>LLLLLL', st.st_size, int(st.st_mtime),
            int(st.st_ctime), st.st_dev, st.st_ino & 0xFFFFFFFF,
            st.st_mode))[:-1]
else:
    # compile the struct compiler we need, so as to only do it once
    from _struct import Struct
    _compiled_pack = Struct('>LLLLLL').pack
    def pack_stat(st, _encode=binascii.b2a_base64, _pack=_compiled_pack):
        """Convert stat values into a packed representation."""
        # jam 20060614 it isn't really worth removing more entries if we
        # are going to leave it in packed form.
        # With only st_mtime and st_mode filesize is 5.5M and read time is 275ms
        # With all entries, filesize is 5.9M and read time is maybe 280ms
        # well within the noise margin

        # base64 encoding always adds a final newline, so strip it off
        # The current version
        return _encode(_pack(st.st_size, int(st.st_mtime), int(st.st_ctime),
            st.st_dev, st.st_ino & 0xFFFFFFFF, st.st_mode))[:-1]
        # This is 0.060s / 1.520s faster by not encoding as much information
        # return _encode(_pack('>LL', int(st.st_mtime), st.st_mode))[:-1]
        # This is not strictly faster than _encode(_pack())[:-1]
        # return '%X.%X.%X.%X.%X.%X' % (
        #     st.st_size, int(st.st_mtime), int(st.st_ctime),
        #     st.st_dev, st.st_ino, st.st_mode)
        # Similar to the _encode(_pack('>LL'))
        # return '%X.%X' % (int(st.st_mtime), st.st_mode)
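
# Illustrative note: because pack_stat encodes six 32-bit fields (24 bytes)
# with base64, it always yields a 32 character string (the trailing newline
# is stripped), e.g.
#   packed = pack_stat(os.lstat('some-file'))   # 'some-file' is a made-up path
# Such a string is compared against the packed_stat stored in a dirstate row
# to decide whether the cached details for a file can still be trusted.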


class SHA1Provider(object):
    """An interface for getting sha1s of a file."""

    def sha1(self, abspath):
        """Return the sha1 of a file given its absolute path.

        :param abspath: May be a filesystem encoded absolute path
            or a unicode path.
        """
        raise NotImplementedError(self.sha1)

    def stat_and_sha1(self, abspath):
        """Return the stat and sha1 of a file given its absolute path.

        :param abspath: May be a filesystem encoded absolute path
            or a unicode path.

        Note: the stat should be the stat of the physical file
        while the sha may be the sha of its canonical content.
        """
        raise NotImplementedError(self.stat_and_sha1)


class DefaultSHA1Provider(SHA1Provider):
    """A SHA1Provider that reads directly from the filesystem."""

    def sha1(self, abspath):
        """Return the sha1 of a file given its absolute path."""
        return osutils.sha_file_by_name(abspath)

    def stat_and_sha1(self, abspath):
        """Return the stat and sha1 of a file given its absolute path."""
        file_obj = file(abspath, 'rb')
        try:
            statvalue = os.fstat(file_obj.fileno())
            sha1 = osutils.sha_file(file_obj)
        finally:
            file_obj.close()
        return statvalue, sha1
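
# Illustrative usage (assumed, with a made-up path):
#   provider = DefaultSHA1Provider()
#   statvalue, sha1 = provider.stat_and_sha1('/path/to/file')
# DirState takes such a provider at construction time, so callers (tests for
# example) can substitute an alternative implementation.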


class DirState(object):
    """Record directory and metadata state for fast access.

    A dirstate is a specialised data structure for managing local working
    tree state information. It's not yet well defined whether it is platform
    specific, and if it is how we detect/parameterize that.

    Dirstates use the usual lock_write, lock_read and unlock mechanisms.
    Unlike most bzr disk formats, DirStates must be locked for reading, using
    lock_read. (This is an os file lock internally.) This is necessary
    because the file can be rewritten in place.

    DirStates must be explicitly written with save() to commit changes; just
    unlocking them does not write the changes to disk.
    """

    _kind_to_minikind = {'absent': 'a', 'file': 'f', 'directory': 'd',
        'relocated': 'r', 'symlink': 'l', 'tree-reference': 't'}

    _minikind_to_kind = {'a': 'absent', 'f': 'file', 'd': 'directory',
        'l': 'symlink', 'r': 'relocated', 't': 'tree-reference'}

    _stat_to_minikind = {stat.S_IFDIR: 'd', stat.S_IFREG: 'f',
        stat.S_IFLNK: 'l'}

    _to_yesno = {True: 'y', False: 'n'} # TODO profile the performance gain
    # of using int conversion rather than a dict here. AND BLAME ANDREW IF
    # THIS IS SLOWER.

    # TODO: jam 20070221 Figure out what to do if we have a record that exceeds
    #       the BISECT_PAGE_SIZE. For now, we just have to make it large enough
    #       that we are sure a single record will always fit.
    BISECT_PAGE_SIZE = 4096

    NOT_IN_MEMORY = 0
    IN_MEMORY_UNMODIFIED = 1
    IN_MEMORY_MODIFIED = 2

    # A pack_stat (the x's) that is just noise and will never match the output
    # of pack_stat.
    NULLSTAT = 'x' * 32
    NULL_PARENT_DETAILS = ('a', '', 0, False, '')

    HEADER_FORMAT_2 = '#bazaar dirstate flat format 2\n'
    HEADER_FORMAT_3 = '#bazaar dirstate flat format 3\n'

    def __init__(self, path, sha1_provider):
        """Create a DirState object.

        :param path: The path at which the dirstate file on disk should live.
        :param sha1_provider: an object meeting the SHA1Provider interface.
        """
        # _header_state and _dirblock_state represent the current state
        # of the dirstate metadata and the per-row data respectively.
        # NOT_IN_MEMORY indicates that no data is in memory
        # IN_MEMORY_UNMODIFIED indicates that what we have in memory
        #   is the same as is on disk
        # IN_MEMORY_MODIFIED indicates that we have a modified version
        #   of what is on disk.
        # In future we will add more granularity, for instance _dirblock_state
        # will probably support partially-in-memory as a separate variable,
        # allowing for partially-in-memory unmodified and partially-in-memory
        # modified states.
        self._header_state = DirState.NOT_IN_MEMORY
        self._dirblock_state = DirState.NOT_IN_MEMORY
        # If true, an error has been detected while updating the dirstate, and
        # for safety we're not going to commit to disk.
        self._changes_aborted = False
        self._dirblocks = []
        self._ghosts = []
        self._parents = []
        self._state_file = None
        self._filename = path
        self._lock_token = None
        self._lock_state = None
        self._id_index = None
        # a map from packed_stat to sha's.
        self._packed_stat_index = None
        self._end_of_header = None
        self._cutoff_time = None
        self._split_path_cache = {}
        self._bisect_page_size = DirState.BISECT_PAGE_SIZE
        self._sha1_provider = sha1_provider
        if 'hashcache' in debug.debug_flags:
            self._sha1_file = self._sha1_file_and_mutter
        else:
            self._sha1_file = self._sha1_provider.sha1
        # These two attributes provide a simple cache for lookups into the
        # dirstate in-memory vectors. By probing respectively for the last
        # block, and for the next entry, we save nearly 2 bisections per path.
        self._last_block_index = None
        self._last_entry_index = None

    def __repr__(self):
        return "%s(%r)" % \
            (self.__class__.__name__, self._filename)

    def add(self, path, file_id, kind, stat, fingerprint):
        """Add a path to be tracked.

        :param path: The path within the dirstate - '' is the root, 'foo' is the
            path foo within the root, 'foo/bar' is the path bar within foo
            within the root.
        :param file_id: The file id of the path being added.
        :param kind: The kind of the path, as a string like 'file',
            'directory', etc.
        :param stat: The output of os.lstat for the path.
        :param fingerprint: The sha value of the file's canonical form (i.e.
            after any read filters have been applied),
            or the target of a symlink,
            or the referenced revision id for tree-references,
            or '' for directories.
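
        For illustration only (the path, id and sha below are invented), a
        call for a newly created file might look like:
            state.add('foo/bar.txt', 'bar-txt-id', 'file',
                      os.lstat(abspath), '<sha1 of canonical content>')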
        """
        # find the block it's in.
        # find the location in the block.
        # check it's not there
        #------- copied from inventory.ensure_normalized_name - keep synced.
        # --- normalized_filename wants a unicode basename only, so get one.
        dirname, basename = osutils.split(path)
        # we don't import normalized_filename directly because we want to be
        # able to change the implementation at runtime for tests.
        norm_name, can_access = osutils.normalized_filename(basename)
        if norm_name != basename:
            if can_access:
                basename = norm_name
            else:
                raise errors.InvalidNormalization(path)
        # you should never have files called . or ..; just add the directory
        # in the parent, or according to the special treatment for the root
        if basename == '.' or basename == '..':
            raise errors.InvalidEntryName(path)
        # now that we've normalised, we need the correct utf8 path and
        # dirname and basename elements. This single encode and split should be
        # faster than three separate encodes.
        utf8path = (dirname + '/' + basename).strip('/').encode('utf8')
        dirname, basename = osutils.split(utf8path)
        # uses __class__ for speed; the check is needed for safety
        if file_id.__class__ is not str:
            raise AssertionError(
                "must be a utf8 file_id not %s" % (type(file_id), ))
        # Make sure the file_id does not exist in this tree
        rename_from = None
        file_id_entry = self._get_entry(0, fileid_utf8=file_id, include_deleted=True)
        if file_id_entry != (None, None):
            if file_id_entry[1][0][0] == 'a':
                if file_id_entry[0] != (dirname, basename, file_id):
                    # set the old name's current operation to rename
                    self.update_minimal(file_id_entry[0],
                        'r',
                        path_utf8='',
                        packed_stat='',
                        fingerprint=utf8path
                    )
                    rename_from = file_id_entry[0][0:2]
            else:
                path = osutils.pathjoin(file_id_entry[0][0], file_id_entry[0][1])
                kind = DirState._minikind_to_kind[file_id_entry[1][0][0]]
                info = '%s:%s' % (kind, path)
                raise errors.DuplicateFileId(file_id, info)
        first_key = (dirname, basename, '')
        block_index, present = self._find_block_index_from_key(first_key)
        if present:
            # check the path is not in the tree
            block = self._dirblocks[block_index][1]
            entry_index, _ = self._find_entry_index(first_key, block)
            while (entry_index < len(block) and
                block[entry_index][0][0:2] == first_key[0:2]):
                if block[entry_index][1][0][0] not in 'ar':
                    # this path is in the dirstate in the current tree.
                    raise Exception, "adding already added path!"
                entry_index += 1
        else:
            # The block where we want to put the file is not present. But it
            # might be because the directory was empty, or not loaded yet. Look
            # for a parent entry, if not found, raise NotVersionedError
            parent_dir, parent_base = osutils.split(dirname)
            parent_block_idx, parent_entry_idx, _, parent_present = \
                self._get_block_entry_index(parent_dir, parent_base, 0)
            if not parent_present:
                raise errors.NotVersionedError(path, str(self))
            self._ensure_block(parent_block_idx, parent_entry_idx, dirname)
        block = self._dirblocks[block_index][1]
        entry_key = (dirname, basename, file_id)
        if stat is None:
            size = 0
            packed_stat = DirState.NULLSTAT
        else:
            size = stat.st_size
            packed_stat = pack_stat(stat)
        parent_info = self._empty_parent_info()
        minikind = DirState._kind_to_minikind[kind]
        if rename_from is not None:
            if rename_from[0]:
                old_path_utf8 = '%s/%s' % rename_from
            else:
                old_path_utf8 = rename_from[1]
            parent_info[0] = ('r', old_path_utf8, 0, False, '')
        if kind == 'file':
            entry_data = entry_key, [
                (minikind, fingerprint, size, False, packed_stat),
                ] + parent_info
        elif kind == 'directory':
            entry_data = entry_key, [
                (minikind, '', 0, False, packed_stat),
                ] + parent_info
        elif kind == 'symlink':
            entry_data = entry_key, [
                (minikind, fingerprint, size, False, packed_stat),
                ] + parent_info
        elif kind == 'tree-reference':
            entry_data = entry_key, [
                (minikind, fingerprint, 0, False, packed_stat),
                ] + parent_info
        else:
            raise errors.BzrError('unknown kind %r' % kind)
        entry_index, present = self._find_entry_index(entry_key, block)
        if not present:
            block.insert(entry_index, entry_data)
        else:
            if block[entry_index][1][0][0] != 'a':
                raise AssertionError(" %r(%r) already added" % (basename, file_id))
            block[entry_index][1][0] = entry_data[1][0]

        if kind == 'directory':
            # insert a new dirblock
            self._ensure_block(block_index, entry_index, utf8path)
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        if self._id_index:
            self._id_index.setdefault(entry_key[2], set()).add(entry_key)

    def _bisect(self, paths):
        """Bisect through the disk structure for specific rows.

        :param paths: A list of paths to find
        :return: A dict mapping path => entries for found entries. Missing
            entries will not be in the map.
            The list is not sorted, and entries will be populated
            based on when they were read.
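
        For illustration only, a call and its result might look like:
            state._bisect(['a/b', 'a/c'])
            => {'a/b': [entry, ...], 'a/c': [entry, ...]}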
        """
        self._requires_lock()
        # We need the file pointer to be right after the initial header block
        self._read_header_if_needed()
        # If _dirblock_state was in memory, we should just return info from
        # there, this function is only meant to handle when we want to read
        # from disk.
        if self._dirblock_state != DirState.NOT_IN_MEMORY:
            raise AssertionError("bad dirblock state %r" % self._dirblock_state)

        # The disk representation is generally info + '\0\n\0' at the end. But
        # for bisecting, it is easier to treat this as '\0' + info + '\0\n'
        # Because it means we can sync on the '\n'
        state_file = self._state_file
        file_size = os.fstat(state_file.fileno()).st_size
        # We end up with 2 extra fields, we should have a trailing '\n' to
        # ensure that we read the whole record, and we should have a precursor
        # '' which ensures that we start after the previous '\n'
        entry_field_count = self._fields_per_entry() + 1

        low = self._end_of_header
        high = file_size - 1 # Ignore the final '\0'
        # Map from (dir, name) => entry
        found = {}

        # Avoid infinite seeking
        max_count = 30*len(paths)
        count = 0
        # pending is a list of places to look.
        # each entry is a tuple of low, high, dir_names
        # low -> the first byte offset to read (inclusive)
        # high -> the last byte offset (inclusive)
        # dir_names -> The list of (dir, name) pairs that should be found in
        #              the [low, high] range
        pending = [(low, high, paths)]

        page_size = self._bisect_page_size

        fields_to_entry = self._get_fields_to_entry()

        while pending:
            low, high, cur_files = pending.pop()

            if not cur_files or low >= high:
                # Nothing to find
                continue

            count += 1
            if count > max_count:
                raise errors.BzrError('Too many seeks, most likely a bug.')

            mid = max(low, (low+high-page_size)/2)

            state_file.seek(mid)

            # limit the read size, so we don't end up reading data that we have
            # already read.
            read_size = min(page_size, (high-mid)+1)
            block = state_file.read(read_size)

            start = mid
            entries = block.split('\n')

            if len(entries) < 2:
                # We didn't find a '\n', so we cannot have found any records.
                # So put this range back and try again. But we know we have to
                # increase the page size, because a single read did not contain
                # a record break (so records must be larger than page_size)
                page_size *= 2
                pending.append((low, high, cur_files))
                continue

            # Check the first and last entries, in case they are partial, or if
            # we don't care about the rest of this page
            first_entry_num = 0
            first_fields = entries[0].split('\0')
            if len(first_fields) < entry_field_count:
                # We didn't get the complete first entry
                # so move start, and grab the next, which
                # should be a full entry
                start += len(entries[0])+1
                first_fields = entries[1].split('\0')
                first_entry_num = 1

            if len(first_fields) <= 2:
                # We didn't even get a filename here... what do we do?
                # Try a large page size and repeat this query
                page_size *= 2
                pending.append((low, high, cur_files))
                continue
            else:
                # Find what entries we are looking for, which occur before and
                # after this first record.
                after = start
                if first_fields[1]:
                    first_path = first_fields[1] + '/' + first_fields[2]
                else:
                    first_path = first_fields[2]
                first_loc = _bisect_path_left(cur_files, first_path)

                # These exist before the current location
                pre = cur_files[:first_loc]
                # These occur after the current location, which may be in the
                # data we read, or might be after the last entry
                post = cur_files[first_loc:]

            if post and len(first_fields) >= entry_field_count:
                # We have files after the first entry

                # Parse the last entry
                last_entry_num = len(entries)-1
                last_fields = entries[last_entry_num].split('\0')
                if len(last_fields) < entry_field_count:
                    # The very last hunk was not complete,
                    # read the previous hunk
                    after = mid + len(block) - len(entries[-1])
                    last_entry_num -= 1
                    last_fields = entries[last_entry_num].split('\0')
                else:
                    after = mid + len(block)

                if last_fields[1]:
                    last_path = last_fields[1] + '/' + last_fields[2]
                else:
                    last_path = last_fields[2]
                last_loc = _bisect_path_right(post, last_path)

                middle_files = post[:last_loc]
                post = post[last_loc:]

                if middle_files:
                    # We have files that should occur in this block
                    # (>= first, <= last)
                    # Either we will find them here, or we can mark them as
                    # missing.

                    if middle_files[0] == first_path:
                        # We might need to go before this location
                        pre.append(first_path)
                    if middle_files[-1] == last_path:
                        post.insert(0, last_path)

                    # Find out what paths we have
                    paths = {first_path:[first_fields]}
                    # last_path might == first_path so we need to be
                    # careful if we should append rather than overwrite
                    if last_entry_num != first_entry_num:
                        paths.setdefault(last_path, []).append(last_fields)
                    for num in xrange(first_entry_num+1, last_entry_num):
                        # TODO: jam 20070223 We are already splitting here, so
                        #       shouldn't we just split the whole thing rather
                        #       than doing the split again in add_one_record?
                        fields = entries[num].split('\0')
                        if fields[1]:
                            path = fields[1] + '/' + fields[2]
                        else:
                            path = fields[2]
                        paths.setdefault(path, []).append(fields)

                    for path in middle_files:
                        for fields in paths.get(path, []):
                            # offset by 1 because of the opening '\0'
                            # consider changing fields_to_entry to avoid the
                            # extra blank entry
                            entry = fields_to_entry(fields[1:])
                            found.setdefault(path, []).append(entry)

            # Now we have split up everything into pre, middle, and post, and
            # we have handled everything that fell in 'middle'.
            # We add 'post' first, so that we prefer to seek towards the
            # beginning, so that we will tend to go as early as we need, and
            # then only seek forward after that.
            if post:
                pending.append((after, high, post))
            if pre:
                pending.append((low, start-1, pre))

        # Consider that we may want to return the directory entries in sorted
        # order. For now, we just return them in whatever order we found them,
        # and leave it up to the caller if they care if it is ordered or not.
        return found

    def _bisect_dirblocks(self, dir_list):
        """Bisect through the disk structure to find entries in given dirs.

        _bisect_dirblocks is meant to find the contents of directories, which
        differs from _bisect, which only finds individual entries.

        :param dir_list: A sorted list of directory names ['', 'dir', 'foo'].
        :return: A map from dir => entries_for_dir
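
        For illustration only, a call and its result might look like:
            state._bisect_dirblocks(['', 'dir'])
            => {'': [entry, ...], 'dir': [entry, ...]}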
        """
        # TODO: jam 20070223 A lot of the bisecting logic could be shared
        #       between this and _bisect. It would require parameterizing the
        #       inner loop with a function, though. We should evaluate the
        #       performance difference.
        self._requires_lock()
        # We need the file pointer to be right after the initial header block
        self._read_header_if_needed()
        # If _dirblock_state was in memory, we should just return info from
        # there, this function is only meant to handle when we want to read
        # from disk.
        if self._dirblock_state != DirState.NOT_IN_MEMORY:
            raise AssertionError("bad dirblock state %r" % self._dirblock_state)

        # The disk representation is generally info + '\0\n\0' at the end. But
        # for bisecting, it is easier to treat this as '\0' + info + '\0\n'
        # Because it means we can sync on the '\n'
        state_file = self._state_file
        file_size = os.fstat(state_file.fileno()).st_size
        # We end up with 2 extra fields, we should have a trailing '\n' to
        # ensure that we read the whole record, and we should have a precursor
        # '' which ensures that we start after the previous '\n'
        entry_field_count = self._fields_per_entry() + 1

        low = self._end_of_header
        high = file_size - 1 # Ignore the final '\0'
        # Map from dir => entry
        found = {}

        # Avoid infinite seeking
        max_count = 30*len(dir_list)
        count = 0
        # pending is a list of places to look.
        # each entry is a tuple of low, high, dir_names
        # low -> the first byte offset to read (inclusive)
        # high -> the last byte offset (inclusive)
        # dirs -> The list of directories that should be found in
        #         the [low, high] range
        pending = [(low, high, dir_list)]

        page_size = self._bisect_page_size

        fields_to_entry = self._get_fields_to_entry()

        while pending:
            low, high, cur_dirs = pending.pop()

            if not cur_dirs or low >= high:
                # Nothing to find
                continue

            count += 1
            if count > max_count:
                raise errors.BzrError('Too many seeks, most likely a bug.')

            mid = max(low, (low+high-page_size)/2)

            state_file.seek(mid)

            # limit the read size, so we don't end up reading data that we have
            # already read.
            read_size = min(page_size, (high-mid)+1)
            block = state_file.read(read_size)

            start = mid
            entries = block.split('\n')

            if len(entries) < 2:
                # We didn't find a '\n', so we cannot have found any records.
                # So put this range back and try again. But we know we have to
                # increase the page size, because a single read did not contain
                # a record break (so records must be larger than page_size)
                page_size *= 2
                pending.append((low, high, cur_dirs))
                continue

            # Check the first and last entries, in case they are partial, or if
            # we don't care about the rest of this page
            first_entry_num = 0
            first_fields = entries[0].split('\0')
            if len(first_fields) < entry_field_count:
                # We didn't get the complete first entry
                # so move start, and grab the next, which
                # should be a full entry
                start += len(entries[0])+1
                first_fields = entries[1].split('\0')
                first_entry_num = 1

            if len(first_fields) <= 1:
                # We didn't even get a dirname here... what do we do?
                # Try a large page size and repeat this query
                page_size *= 2
                pending.append((low, high, cur_dirs))
                continue
            else:
                # Find what entries we are looking for, which occur before and
                # after this first record.
                after = start
                first_dir = first_fields[1]
                first_loc = bisect.bisect_left(cur_dirs, first_dir)

                # These exist before the current location
                pre = cur_dirs[:first_loc]
                # These occur after the current location, which may be in the
                # data we read, or might be after the last entry
                post = cur_dirs[first_loc:]

            if post and len(first_fields) >= entry_field_count:
                # We have records to look at after the first entry

                # Parse the last entry
                last_entry_num = len(entries)-1
                last_fields = entries[last_entry_num].split('\0')
                if len(last_fields) < entry_field_count:
                    # The very last hunk was not complete,
                    # read the previous hunk
                    after = mid + len(block) - len(entries[-1])
                    last_entry_num -= 1
                    last_fields = entries[last_entry_num].split('\0')
                else:
                    after = mid + len(block)

                last_dir = last_fields[1]
                last_loc = bisect.bisect_right(post, last_dir)

                middle_files = post[:last_loc]
                post = post[last_loc:]

                if middle_files:
                    # We have files that should occur in this block
                    # (>= first, <= last)
                    # Either we will find them here, or we can mark them as
                    # missing.

                    if middle_files[0] == first_dir:
                        # We might need to go before this location
                        pre.append(first_dir)
                    if middle_files[-1] == last_dir:
                        post.insert(0, last_dir)

                    # Find out what paths we have
                    paths = {first_dir:[first_fields]}
                    # last_dir might == first_dir so we need to be
                    # careful if we should append rather than overwrite
                    if last_entry_num != first_entry_num:
                        paths.setdefault(last_dir, []).append(last_fields)
                    for num in xrange(first_entry_num+1, last_entry_num):
                        # TODO: jam 20070223 We are already splitting here, so
                        #       shouldn't we just split the whole thing rather
                        #       than doing the split again in add_one_record?
                        fields = entries[num].split('\0')
                        paths.setdefault(fields[1], []).append(fields)

                    for cur_dir in middle_files:
                        for fields in paths.get(cur_dir, []):
                            # offset by 1 because of the opening '\0'
                            # consider changing fields_to_entry to avoid the
                            # extra blank entry
                            entry = fields_to_entry(fields[1:])
                            found.setdefault(cur_dir, []).append(entry)

            # Now we have split up everything into pre, middle, and post, and
            # we have handled everything that fell in 'middle'.
            # We add 'post' first, so that we prefer to seek towards the
            # beginning, so that we will tend to go as early as we need, and
            # then only seek forward after that.
            if post:
                pending.append((after, high, post))
            if pre:
                pending.append((low, start-1, pre))

        return found

    def _bisect_recursive(self, paths):
        """Bisect for entries for all paths and their children.

        This will use bisect to find all records for the supplied paths. It
        will then continue to bisect for any records which are marked as
        directories. (and renames?)

        :param paths: A sorted list of (dir, name) pairs
            eg: [('', 'a'), ('', 'f'), ('a/b', 'c')]
        :return: A dictionary mapping (dir, name, file_id) => [tree_info]
        """
        # Map from (dir, name, file_id) => [tree_info]
        found = {}
        found_dir_names = set()

        # Directories that have been read
        processed_dirs = set()
        # Get the ball rolling with the first bisect for all entries.
        newly_found = self._bisect(paths)

        while newly_found:
            # Directories that need to be read
            pending_dirs = set()
            paths_to_search = set()
            for entry_list in newly_found.itervalues():
                for dir_name_id, trees_info in entry_list:
                    found[dir_name_id] = trees_info
                    found_dir_names.add(dir_name_id[:2])
                    is_dir = False
                    for tree_info in trees_info:
                        minikind = tree_info[0]
                        if minikind == 'd':
                            if is_dir:
                                # We already processed this one as a directory,
                                # we don't need to do the extra work again.
                                continue
                            subdir, name, file_id = dir_name_id
                            path = osutils.pathjoin(subdir, name)
                            is_dir = True
                            if path not in processed_dirs:
                                pending_dirs.add(path)
                        elif minikind == 'r':
                            # Rename, we need to directly search the target
                            # which is contained in the fingerprint column
                            dir_name = osutils.split(tree_info[1])
                            if dir_name[0] in pending_dirs:
                                # This entry will be found in the dir search
                                continue
                            if dir_name not in found_dir_names:
                                paths_to_search.add(tree_info[1])
            # Now we have a list of paths to look for directly, and
            # directory blocks that need to be read.
            # newly_found is mixing the keys between (dir, name) and path
            # entries, but that is okay, because we only really care about the
            # file_id anyway.
            newly_found = self._bisect(sorted(paths_to_search))
            newly_found.update(self._bisect_dirblocks(sorted(pending_dirs)))
            processed_dirs.update(pending_dirs)
        return found

    def _discard_merge_parents(self):
        """Discard any parents trees beyond the first.

        Note that if this fails the dirstate is corrupted.

        After this function returns the dirstate contains 2 trees, neither of
        which are ghosted.
        """
        self._read_header_if_needed()
        parents = self.get_parent_ids()
        # only require all dirblocks if we are doing a full-pass removal.
        self._read_dirblocks_if_needed()
        dead_patterns = set([('a', 'r'), ('a', 'a'), ('r', 'r'), ('r', 'a')])
        def iter_entries_removable():
            for block in self._dirblocks:
                deleted_positions = []
                for pos, entry in enumerate(block[1]):
                    yield entry
                    if (entry[1][0][0], entry[1][1][0]) in dead_patterns:
                        deleted_positions.append(pos)
                if deleted_positions:
                    if len(deleted_positions) == len(block[1]):
                        del block[1][:]
                    else:
                        for pos in reversed(deleted_positions):
                            del block[1][pos]
        # if the first parent is a ghost:
        if parents[0] in self.get_ghosts():
            empty_parent = [DirState.NULL_PARENT_DETAILS]
            for entry in iter_entries_removable():
                entry[1][1:] = empty_parent
        else:
            for entry in iter_entries_removable():
                del entry[1][2:]

        self._parents = [parents[0]]
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        self._header_state = DirState.IN_MEMORY_MODIFIED

    def _empty_parent_info(self):
        return [DirState.NULL_PARENT_DETAILS] * (len(self._parents) -
                                                 len(self._ghosts))

    def _ensure_block(self, parent_block_index, parent_row_index, dirname):
        """Ensure a block for dirname exists.

        This function exists to let callers which know that there is a
        directory dirname ensure that the block for it exists. This block can
        fail to exist because of demand loading, or because a directory had no
        children. In either case it is not an error. It is however an error to
        call this if there is no parent entry for the directory, and thus the
        function requires the coordinates of such an entry to be provided.

        The root row is special cased and can be indicated with a parent block
        and row of -1.

        :param parent_block_index: The index of the block in which dirname's row
            exists.
        :param parent_row_index: The index in the parent block where the row
            exists.
        :param dirname: The utf8 dirname to ensure there is a block for.
        :return: The index for the block.
        """
        if dirname == '' and parent_row_index == 0 and parent_block_index == 0:
            # This is the signature of the root row, and the
            # contents-of-root row is always index 1
            return 1
        # the basename of the directory must be the end of its full name.
        if not (parent_block_index == -1 and
            parent_row_index == -1 and dirname == ''):
            if not dirname.endswith(
                    self._dirblocks[parent_block_index][1][parent_row_index][0][1]):
                raise AssertionError("bad dirname %r" % dirname)
        block_index, present = self._find_block_index_from_key((dirname, '', ''))
        if not present:
            ## In future, when doing partial parsing, this should load and
            # populate the entire block.
            self._dirblocks.insert(block_index, (dirname, []))
        return block_index

    def _entries_to_current_state(self, new_entries):
        """Load new_entries into self.dirblocks.

        Process new_entries into the current state object, making them the active
        state. The entries are grouped together by directory to form dirblocks.

        :param new_entries: A sorted list of entries. This function does not sort
            to prevent unneeded overhead when callers have a sorted list already.
        """
        if new_entries[0][0][0:2] != ('', ''):
            raise AssertionError(
                "Missing root row %r" % (new_entries[0][0],))
        # The two blocks here are deliberate: the root block and the
        # contents-of-root block.
        self._dirblocks = [('', []), ('', [])]
        current_block = self._dirblocks[0][1]
        current_dirname = ''
        append_entry = current_block.append
        for entry in new_entries:
            if entry[0][0] != current_dirname:
                # new block - different dirname
                current_block = []
                current_dirname = entry[0][0]
                self._dirblocks.append((current_dirname, current_block))
                append_entry = current_block.append
            # append the entry to the current block
            append_entry(entry)
        self._split_root_dirblock_into_contents()

    def _split_root_dirblock_into_contents(self):
        """Split the root dirblocks into root and contents-of-root.

        After parsing by path, we end up with root entries and contents-of-root
        entries in the same block. This loop splits them out again.
        """
        # The above loop leaves the "root block" entries mixed with the
        # "contents-of-root block". But we don't want an if check on
        # all entries, so instead we just fix it up here.
        if self._dirblocks[1] != ('', []):
            raise ValueError("bad dirblock start %r" % (self._dirblocks[1],))
        root_block = []
        contents_of_root_block = []
        for entry in self._dirblocks[0][1]:
            if not entry[0][1]: # This is a root entry
                root_block.append(entry)
            else:
                contents_of_root_block.append(entry)
        self._dirblocks[0] = ('', root_block)
        self._dirblocks[1] = ('', contents_of_root_block)

    def _entries_for_path(self, path):
        """Return a list with all the entries that match path for all ids."""
        dirname, basename = os.path.split(path)
        key = (dirname, basename, '')
        block_index, present = self._find_block_index_from_key(key)
        if not present:
            # the block which should contain path is absent.
            return []
        result = []
        block = self._dirblocks[block_index][1]
        entry_index, _ = self._find_entry_index(key, block)
        # we may need to look at multiple entries at this path: walk while the
        # specific_files match.
        while (entry_index < len(block) and
            block[entry_index][0][0:2] == key[0:2]):
            result.append(block[entry_index])
            entry_index += 1
        return result

    def _entry_to_line(self, entry):
        """Serialize entry to a NULL delimited line ready for _get_output_lines.

        :param entry: An entry_tuple as defined in the module docstring.
        """
        entire_entry = list(entry[0])
        for tree_number, tree_data in enumerate(entry[1]):
            # (minikind, fingerprint, size, executable, tree_specific_string)
            entire_entry.extend(tree_data)
            # 3 for the key, 5 for the fields per tree.
            tree_offset = 3 + tree_number * 5
            # minikind
            entire_entry[tree_offset + 0] = tree_data[0]
            # size
            entire_entry[tree_offset + 2] = str(tree_data[2])
            # executable
            entire_entry[tree_offset + 3] = DirState._to_yesno[tree_data[3]]
        return '\0'.join(entire_entry)

    def _fields_per_entry(self):
        """How many null separated fields should be in each entry row.

        Each line now has an extra '\n' field which is not used
        so we just skip over it

        entry size:
            3 fields for the key
            + number of fields per tree_data (5) * tree count
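
        For example (illustrative): with a single parent tree the tree count
        is 2, giving 3 + 5 * 2 + 1 = 14 fields per row, the final + 1 being
        the unused trailing field mentioned above.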
        """
        tree_count = 1 + self._num_present_parents()
        return 3 + 5 * tree_count + 1

    def _find_block(self, key, add_if_missing=False):
        """Return the block that key should be present in.

        :param key: A dirstate entry key.
        :return: The block tuple.
        """
        block_index, present = self._find_block_index_from_key(key)
        if not present:
            if not add_if_missing:
                # check to see if key is versioned itself - we might want to
                # add it anyway, because dirs with no entries don't get a
                # dirblock at parse time.
                # This is an uncommon branch to take: most dirs have children,
                # and most code works with versioned paths.
                parent_base, parent_name = osutils.split(key[0])
                if not self._get_block_entry_index(parent_base, parent_name, 0)[3]:
                    # some parent path has not been added - it's an error to add
                    # this child
                    raise errors.NotVersionedError(key[0:2], str(self))
            self._dirblocks.insert(block_index, (key[0], []))
        return self._dirblocks[block_index]

    def _find_block_index_from_key(self, key):
        """Find the dirblock index for a key.

        :return: The block index, True if the block for the key is present.
        """
        if key[0:2] == ('', ''):
            return 0, True
        try:
            if (self._last_block_index is not None and
                self._dirblocks[self._last_block_index][0] == key[0]):
                return self._last_block_index, True
        except IndexError:
            pass
        block_index = bisect_dirblock(self._dirblocks, key[0], 1,
                                      cache=self._split_path_cache)
        # _right returns one-past-where-key is so we have to subtract
        # one to use it. we use _right here because there are two
        # '' blocks - the root, and the contents of root
        # we always have a minimum of 2 in self._dirblocks: root and
        # root-contents, and for '', we get 2 back, so this is
        # simple and correct:
        present = (block_index < len(self._dirblocks) and
            self._dirblocks[block_index][0] == key[0])
        self._last_block_index = block_index
        # Reset the entry index cache to the beginning of the block.
        self._last_entry_index = -1
        return block_index, present

    def _find_entry_index(self, key, block):
        """Find the entry index for a key in a block.

        :return: The entry index, True if the entry for the key is present.
        """
        len_block = len(block)
        try:
            if self._last_entry_index is not None:
                entry_index = self._last_entry_index + 1
                # A hit is when the key is after the last slot, and before or
                # equal to the next slot.
                if ((entry_index > 0 and block[entry_index - 1][0] < key) and
                    key <= block[entry_index][0]):
                    self._last_entry_index = entry_index
                    present = (block[entry_index][0] == key)
                    return entry_index, present
        except IndexError:
            pass
        entry_index = bisect.bisect_left(block, (key, []))
        present = (entry_index < len_block and
            block[entry_index][0] == key)
        self._last_entry_index = entry_index
        return entry_index, present

    @staticmethod
    def from_tree(tree, dir_state_filename, sha1_provider=None):
        """Create a dirstate from a bzr Tree.

        :param tree: The tree which should provide parent information and
            inventory.
        :param sha1_provider: an object meeting the SHA1Provider interface.
            If None, a DefaultSHA1Provider is used.
        :return: a DirState object which is currently locked for writing.
            (it was locked by DirState.initialize)
        """
        result = DirState.initialize(dir_state_filename,
            sha1_provider=sha1_provider)
        try:
            try:
                parent_ids = tree.get_parent_ids()
                num_parents = len(parent_ids)
                parent_trees = []
                for parent_id in parent_ids:
                    parent_tree = tree.branch.repository.revision_tree(parent_id)
                    parent_trees.append((parent_id, parent_tree))
                    parent_tree.lock_read()
                result.set_parent_trees(parent_trees, [])
                result.set_state_from_inventory(tree.inventory)
            finally:
                for revid, parent_tree in parent_trees:
                    parent_tree.unlock()
        except:
            # The caller won't have a chance to unlock this, so make sure we
            # clean up ourselves.
            result.unlock()
            raise
        return result

    def update_by_delta(self, delta):
        """Apply an inventory delta to the dirstate for tree 0

        :param delta: An inventory delta. See Inventory.apply_delta for
            details.
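
            For illustration only, each delta item has the form
            (old_path, new_path, file_id, new_inventory_entry); an add of
            'foo' would be (None, u'foo', 'foo-id', <InventoryFile for foo>).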
        """
        self._read_dirblocks_if_needed()
        insertions = {}
        removals = {}
        for old_path, new_path, file_id, inv_entry in sorted(delta, reverse=True):
            if (file_id in insertions) or (file_id in removals):
                raise errors.InconsistentDelta(old_path or new_path, file_id,
                    "repeated file_id")
            if old_path is not None:
                old_path = old_path.encode('utf-8')
                removals[file_id] = old_path
            if new_path is not None:
                new_path = new_path.encode('utf-8')
                dirname, basename = osutils.split(new_path)
                key = (dirname, basename, file_id)
                minikind = DirState._kind_to_minikind[inv_entry.kind]
                if minikind == 't':
                    fingerprint = inv_entry.reference_revision
                else:
                    fingerprint = ''
                insertions[file_id] = (key, minikind, inv_entry.executable,
                                       fingerprint, new_path)
            # Transform moves into delete+add pairs
            if None not in (old_path, new_path):
                for child in self._iter_child_entries(0, old_path):
                    if child[0][2] in insertions or child[0][2] in removals:
                        continue
                    child_dirname = child[0][0]
                    child_basename = child[0][1]
                    minikind = child[1][0][0]
                    fingerprint = child[1][0][4]
                    executable = child[1][0][3]
                    old_child_path = osutils.pathjoin(child[0][0],
                                                      child[0][1])
                    removals[child[0][2]] = old_child_path
                    child_suffix = child_dirname[len(old_path):]
                    new_child_dirname = (new_path + child_suffix)
                    key = (new_child_dirname, child_basename, child[0][2])
                    new_child_path = os.path.join(new_child_dirname,
                                                  child_basename)
                    insertions[child[0][2]] = (key, minikind, executable,
                                               fingerprint, new_child_path)
        self._apply_removals(removals.values())
        self._apply_insertions(insertions.values())

    def _apply_removals(self, removals):
        for path in sorted(removals, reverse=True):
            dirname, basename = osutils.split(path)
            block_i, entry_i, d_present, f_present = \
                self._get_block_entry_index(dirname, basename, 0)
            entry = self._dirblocks[block_i][1][entry_i]
            self._make_absent(entry)
            # See if we have a malformed delta: deleting a directory must not
            # leave crud behind. This increases the number of bisects needed
            # substantially, but deletion or renames of large numbers of paths
            # is rare enough it shouldn't be an issue (famous last words?) RBC
            block_i, entry_i, d_present, f_present = \
                self._get_block_entry_index(path, '', 0)
            if d_present:
                # The dir block is still present in the dirstate; this could
                # be due to it being in a parent tree, or a corrupt delta.
                for child_entry in self._dirblocks[block_i][1]:
                    if child_entry[1][0][0] not in ('r', 'a'):
                        raise errors.InconsistentDelta(path, entry[0][2],
                            "The file id was deleted but its children were "
                            "not deleted.")

    def _apply_insertions(self, adds):
        for key, minikind, executable, fingerprint, path_utf8 in sorted(adds):
            self.update_minimal(key, minikind, executable, fingerprint,
                                path_utf8=path_utf8)

    def update_basis_by_delta(self, delta, new_revid):
        """Update the parents of this tree after a commit.

        This gives the tree one parent, with revision id new_revid. The
        inventory delta is applied to the current basis tree to generate the
        inventory for the parent new_revid, and all other parent trees are
        discarded.

        Note that an exception during the operation of this method will leave
        the dirstate in a corrupt state where it should not be saved.

        Finally, we expect all changes to be synchronising the basis tree with
        the working tree.

        :param new_revid: The new revision id for the trees parent.
        :param delta: An inventory delta (see apply_inventory_delta) describing
            the changes from the current left most parent revision to new_revid.
        """
        self._read_dirblocks_if_needed()
        self._discard_merge_parents()
        if self._ghosts != []:
            raise NotImplementedError(self.update_basis_by_delta)
        if len(self._parents) == 0:
            # setup a blank tree, the most simple way.
            empty_parent = DirState.NULL_PARENT_DETAILS
            for entry in self._iter_entries():
                entry[1].append(empty_parent)
            self._parents.append(new_revid)
        else:
            self._parents[0] = new_revid

        delta = sorted(delta, reverse=True)
        adds = []
        changes = []
        deletes = []
        # The paths this function accepts are unicode and must be encoded as we
        # go.
        encode = cache_utf8.encode
        inv_to_entry = self._inv_entry_to_details
        # delta is now (deletes, changes), (adds) in reverse lexicographical
        # order.
        # deletes in reverse lexicographic order are safe to process in situ.
        # renames are not, as a rename from any path could go to a path
        # lexicographically lower, so we transform renames into delete, add pairs,
        # expanding them recursively as needed.
        # At the same time, to reduce interface friction we convert the input
        # inventory entries to dirstate.
        root_only = ('', '')
        # Accumulate parent references (path and id), to check for parentless
        # items or items placed under files/links/tree-references.
        parents = set()
        for old_path, new_path, file_id, inv_entry in delta:
            if inv_entry is not None and file_id != inv_entry.file_id:
                raise errors.InconsistentDelta(new_path, file_id,
                    "mismatched entry file_id %r" % inv_entry)
            if old_path is None:
                adds.append((None, encode(new_path), file_id,
                    inv_to_entry(inv_entry), True))
                # note the parent for validation
                dirname, basename = osutils.split(new_path)
                parents.add((dirname, inv_entry.parent_id))
            elif new_path is None:
                deletes.append((encode(old_path), None, file_id, None, True))
            elif (old_path, new_path) != root_only:
                # Because renames must preserve their children we must have
                # processed all relocations and removes before hand. The sort
                # order ensures we've examined the child paths, but we also
                # have to execute the removals, or the split to an add/delete
                # pair will result in the deleted item being reinserted, or
                # renamed items being reinserted twice - and possibly at the
                # wrong place. Splitting into a delete/add pair also simplifies
                # the handling of entries with ('f', ...), ('r' ...) because
                # the target of the 'r' is old_path here, and we add that to
                # deletes, meaning that the add handler does not need to check
                # for 'r' items on every pass.
                self._update_basis_apply_deletes(deletes)
                deletes = []
                new_path_utf8 = encode(new_path)
                # Split into an add/delete pair recursively.
                adds.append((None, new_path_utf8, file_id,
                    inv_to_entry(inv_entry), False))
                # Expunge deletes that we've seen so that deleted/renamed
                # children of a rename directory are handled correctly.
                new_deletes = reversed(list(self._iter_child_entries(1,
                    encode(old_path))))
                # Remove the current contents of the tree at orig_path, and
                # reinsert at the correct new path.
                for entry in new_deletes:
                    if entry[0][0]:
                        source_path = entry[0][0] + '/' + entry[0][1]
                    else:
                        source_path = entry[0][1]
                    if new_path_utf8:
                        target_path = new_path_utf8 + source_path[len(old_path):]
                    else:
                        if old_path == '':
                            raise AssertionError("cannot rename directory to"
                                " itself")
                        target_path = source_path[len(old_path) + 1:]
                    adds.append((None, target_path, entry[0][2], entry[1][1], False))
                    deletes.append(
                        (source_path, target_path, entry[0][2], None, False))
                deletes.append(
                    (encode(old_path), new_path, file_id, None, False))
                # note the parent for validation
                dirname, basename = osutils.split(new_path)
                parents.add((dirname, inv_entry.parent_id))
            else:
                # changes to just the root should not require remove/insertion
                # of everything.
                changes.append((encode(old_path), encode(new_path), file_id,
                    inv_to_entry(inv_entry)))
        try:
            # Finish expunging deletes/first half of renames.
            self._update_basis_apply_deletes(deletes)
            # Reinstate second half of renames and new paths.
            self._update_basis_apply_adds(adds)
            # Apply in-situ changes.
            self._update_basis_apply_changes(changes)
            self._update_basis_check_parents(parents)
        except errors.BzrError, e:
            if 'integrity error' not in str(e):
                raise
            # _get_entry raises BzrError when a request is inconsistent; we
            # want such errors to be shown as InconsistentDelta - and that
            # fits the behaviour we trigger. Part of this is driven by dirstate
            # only supporting deltas that turn the basis into a closer fit to
            # the active tree.
            self._changes_aborted = True
            raise errors.InconsistentDeltaDelta(delta, "error from _get_entry.")

        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        self._header_state = DirState.IN_MEMORY_MODIFIED
        self._id_index = None

    def _update_basis_apply_adds(self, adds):
        """Apply a sequence of adds to tree 1 during update_basis_by_delta.

        They may be adds, or renames that have been split into add/delete
        pairs.

        :param adds: A sequence of adds. Each add is a tuple:
            (None, new_path_utf8, file_id, (entry_details), real_add). real_add
            is False when the add is the second half of a remove-and-reinsert
            pair created to handle renames and deletes.
        """
        # Adds are accumulated partly from renames, so can be in any input
        # order - sort them.
        adds.sort()
        # adds is now in lexicographic order, which places all parents before
        # their children, so we can process it linearly.
        absent = 'ar'
        for old_path, new_path, file_id, new_details, real_add in adds:
            # the entry for this file_id must be in tree 0.
            entry = self._get_entry(0, file_id, new_path)
            if entry[0] is None or entry[0][2] != file_id:
                self._changes_aborted = True
                raise errors.InconsistentDelta(new_path, file_id,
                    'working tree does not contain new entry')
            if real_add and entry[1][1][0] not in absent:
                self._changes_aborted = True
                raise errors.InconsistentDelta(new_path, file_id,
                    'The entry was considered to be a genuinely new record,'
                    ' but there was already an old record for it.')
            # We don't need to update the target of an 'r' because the handling
            # of renames turns all 'r' situations into a delete at the original
            # location.
            entry[1][1] = new_details

    def _update_basis_apply_changes(self, changes):
        """Apply a sequence of changes to tree 1 during update_basis_by_delta.

        :param changes: A sequence of changes. Each change is a tuple:
            (path_utf8, path_utf8, file_id, (entry_details))
        """
        absent = 'ar'
        for old_path, new_path, file_id, new_details in changes:
            # the entry for this file_id must be in tree 0.
            entry = self._get_entry(0, file_id, new_path)
            if entry[0] is None or entry[0][2] != file_id:
                self._changes_aborted = True
                raise errors.InconsistentDelta(new_path, file_id,
                    'working tree does not contain new entry')
            if (entry[1][0][0] in absent or
                entry[1][1][0] in absent):
                self._changes_aborted = True
                raise errors.InconsistentDelta(new_path, file_id,
                    'changed considered absent')
            entry[1][1] = new_details

    def _update_basis_apply_deletes(self, deletes):
        """Apply a sequence of deletes to tree 1 during update_basis_by_delta.

        They may be deletes, or renames that have been split into add/delete
        pairs.

        :param deletes: A sequence of deletes. Each delete is a tuple:
            (old_path_utf8, new_path_utf8, file_id, None, real_delete).
            real_delete is True when the desired outcome is an actual deletion
            rather than the rename handling logic temporarily deleting a path
            during the replacement of a parent.
        """
        null = DirState.NULL_PARENT_DETAILS
        for old_path, new_path, file_id, _, real_delete in deletes:
            if real_delete != (new_path is None):
                raise AssertionError("bad delete delta")
            # the entry for this file_id must be in tree 1.
            dirname, basename = osutils.split(old_path)
            block_index, entry_index, dir_present, file_present = \
                self._get_block_entry_index(dirname, basename, 1)
            if not file_present:
                self._changes_aborted = True
                raise errors.InconsistentDelta(old_path, file_id,
                    'basis tree does not contain removed entry')
            entry = self._dirblocks[block_index][1][entry_index]
            if entry[0][2] != file_id:
                self._changes_aborted = True
                raise errors.InconsistentDelta(old_path, file_id,
                    'mismatched file_id in tree 1')
            if real_delete:
                if entry[1][0][0] != 'a':
                    self._changes_aborted = True
                    raise errors.InconsistentDelta(old_path, file_id,
                        'This was marked as a real delete, but the WT state'
                        ' claims that it still exists and is versioned.')
                del self._dirblocks[block_index][1][entry_index]
            else:
                if entry[1][0][0] == 'a':
                    self._changes_aborted = True
                    raise errors.InconsistentDelta(old_path, file_id,
                        'The entry was considered a rename, but the source path'
                        ' is marked as absent.')
                    # For whatever reason, we were asked to rename an entry
                    # that was originally marked as deleted. This could be
                    # because we are renaming the parent directory, and the WT
                    # current state has the file marked as deleted.
                elif entry[1][0][0] == 'r':
                    # implement the rename
                    del self._dirblocks[block_index][1][entry_index]
                else:
                    # it is being resurrected here, so blank it out temporarily.
                    self._dirblocks[block_index][1][entry_index][1][1] = null
    def _update_basis_check_parents(self, parents):
        """Check that parents required by the delta are all intact."""
        for dirname, file_id in parents:
            # Get the entry - this ensures that file_id, dirname exists and has
            # the right file id.
            entry = self._get_entry(1, file_id, dirname)
            if entry[1] is None:
                self._changes_aborted = True
                raise errors.InconsistentDelta(dirname, file_id,
                    "This parent is not present.")
            # Parents of things must be directories
            if entry[1][1][0] != 'd':
                self._changes_aborted = True
                raise errors.InconsistentDelta(dirname, file_id,
                    "This parent is not a directory.")
    def _observed_sha1(self, entry, sha1, stat_value,
        _stat_to_minikind=_stat_to_minikind, _pack_stat=pack_stat):
        """Note the sha1 of a file.

        :param entry: The entry the sha1 is for.
        :param sha1: The observed sha1.
        :param stat_value: The os.lstat for the file.
        """
        try:
            minikind = _stat_to_minikind[stat_value.st_mode & 0170000]
        except KeyError:
            # Unhandled kind
            return None
        packed_stat = _pack_stat(stat_value)
        if minikind == 'f':
            if self._cutoff_time is None:
                self._sha_cutoff_time()
            if (stat_value.st_mtime < self._cutoff_time
                and stat_value.st_ctime < self._cutoff_time):
                entry[1][0] = ('f', sha1, entry[1][0][2], entry[1][0][3],
                               packed_stat)
                self._dirblock_state = DirState.IN_MEMORY_MODIFIED
    def _sha_cutoff_time(self):
        """Return cutoff time.

        Files modified more recently than this time are at risk of being
        undetectably modified and so can't be cached.
        """
        # Cache the cutoff time as long as we hold a lock.
        # time.time() isn't super expensive (approx 3.38us), but
        # when you call it 50,000 times it adds up.
        # For comparison, os.lstat() costs 7.2us if it is hot.
        self._cutoff_time = int(time.time()) - 3
        return self._cutoff_time
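    # Illustrative sketch (not from the original source): why the cutoff is
    # "now minus a few seconds". A sha1 is only safe to cache when the file's
    # mtime/ctime are clearly older than the moment we hashed it; otherwise a
    # write landing in the same second could go unnoticed. For example:
    #   >>> import time
    #   >>> cutoff = int(time.time()) - 3
    #   >>> st_mtime = time.time() - 60          # modified a minute ago
    #   >>> st_mtime < cutoff                    # old enough: safe to cache
    #   True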
1654
def _lstat(self, abspath, entry):
1655
"""Return the os.lstat value for this path."""
1656
return os.lstat(abspath)
1658
def _sha1_file_and_mutter(self, abspath):
1659
# when -Dhashcache is turned on, this is monkey-patched in to log
1661
trace.mutter("dirstate sha1 " + abspath)
1662
return self._sha1_provider.sha1(abspath)
1664
def _is_executable(self, mode, old_executable):
1665
"""Is this file executable?"""
1666
return bool(S_IEXEC & mode)
1668
def _is_executable_win32(self, mode, old_executable):
1669
"""On win32 the executable bit is stored in the dirstate."""
1670
return old_executable
1672
if sys.platform == 'win32':
1673
_is_executable = _is_executable_win32
    def _read_link(self, abspath, old_link):
        """Read the target of a symlink"""
        # TODO: jam 20070301 On Win32, this could just return the value
        #       already in memory. However, this really needs to be done at a
        #       higher level, because there either won't be anything on disk,
        #       or the thing on disk will be a file.
        fs_encoding = osutils._fs_enc
        if isinstance(abspath, unicode):
            # abspath is defined as the path to pass to lstat. readlink is
            # buggy in python < 2.6 (it doesn't encode unicode path into FS
            # encoding), so we need to encode ourselves knowing that unicode
            # paths are produced by UnicodeDirReader on purpose.
            abspath = abspath.encode(fs_encoding)
        target = os.readlink(abspath)
        if fs_encoding not in ('UTF-8', 'US-ASCII', 'ANSI_X3.4-1968'):
            # Change encoding if needed
            target = target.decode(fs_encoding).encode('UTF-8')
        return target
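    # Illustrative sketch (not from the original source): the fingerprint of a
    # symlink is its target re-encoded to UTF-8 when the filesystem encoding
    # differs. With a hypothetical latin-1 filesystem:
    #   >>> target = 'caf\xe9'                        # bytes from os.readlink()
    #   >>> target.decode('latin-1').encode('UTF-8')
    #   'caf\xc3\xa9'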
    def get_ghosts(self):
        """Return a list of the parent tree revision ids that are ghosts."""
        self._read_header_if_needed()
        return self._ghosts
    def get_lines(self):
        """Serialise the entire dirstate to a sequence of lines."""
        if (self._header_state == DirState.IN_MEMORY_UNMODIFIED and
            self._dirblock_state == DirState.IN_MEMORY_UNMODIFIED):
            # read what's on disk.
            self._state_file.seek(0)
            return self._state_file.readlines()
        lines = []
        lines.append(self._get_parents_line(self.get_parent_ids()))
        lines.append(self._get_ghosts_line(self._ghosts))
        # append the root line which is special cased
        lines.extend(map(self._entry_to_line, self._iter_entries()))
        return self._get_output_lines(lines)
1713
def _get_ghosts_line(self, ghost_ids):
1714
"""Create a line for the state file for ghost information."""
1715
return '\0'.join([str(len(ghost_ids))] + ghost_ids)
1717
def _get_parents_line(self, parent_ids):
1718
"""Create a line for the state file for parents information."""
1719
return '\0'.join([str(len(parent_ids))] + parent_ids)
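    # Illustrative sketch (not from the original source): both helpers emit a
    # count followed by the ids, NUL-separated, matching the file grammar at
    # the top of this module. For two hypothetical parent ids:
    #   >>> '\0'.join([str(2)] + ['rev-1', 'rev-2'])
    #   '2\x00rev-1\x00rev-2'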
    def _get_fields_to_entry(self):
        """Get a function which converts entry fields into an entry record.

        This handles size and executable, as well as parent records.

        :return: A function which takes a list of fields, and returns an
            appropriate record for storing in memory.
        """
# This is intentionally unrolled for performance
1730
num_present_parents = self._num_present_parents()
1731
if num_present_parents == 0:
1732
def fields_to_entry_0_parents(fields, _int=int):
1733
path_name_file_id_key = (fields[0], fields[1], fields[2])
1734
return (path_name_file_id_key, [
1736
fields[3], # minikind
1737
fields[4], # fingerprint
1738
_int(fields[5]), # size
1739
fields[6] == 'y', # executable
1740
fields[7], # packed_stat or revision_id
1742
return fields_to_entry_0_parents
1743
elif num_present_parents == 1:
1744
def fields_to_entry_1_parent(fields, _int=int):
1745
path_name_file_id_key = (fields[0], fields[1], fields[2])
1746
return (path_name_file_id_key, [
1748
fields[3], # minikind
1749
fields[4], # fingerprint
1750
_int(fields[5]), # size
1751
fields[6] == 'y', # executable
1752
fields[7], # packed_stat or revision_id
1755
fields[8], # minikind
1756
fields[9], # fingerprint
1757
_int(fields[10]), # size
1758
fields[11] == 'y', # executable
1759
fields[12], # packed_stat or revision_id
1762
return fields_to_entry_1_parent
1763
elif num_present_parents == 2:
1764
def fields_to_entry_2_parents(fields, _int=int):
1765
path_name_file_id_key = (fields[0], fields[1], fields[2])
1766
return (path_name_file_id_key, [
1768
fields[3], # minikind
1769
fields[4], # fingerprint
1770
_int(fields[5]), # size
1771
fields[6] == 'y', # executable
1772
fields[7], # packed_stat or revision_id
1775
fields[8], # minikind
1776
fields[9], # fingerprint
1777
_int(fields[10]), # size
1778
fields[11] == 'y', # executable
1779
fields[12], # packed_stat or revision_id
1782
fields[13], # minikind
1783
fields[14], # fingerprint
1784
_int(fields[15]), # size
1785
fields[16] == 'y', # executable
1786
fields[17], # packed_stat or revision_id
1789
return fields_to_entry_2_parents
1791
def fields_to_entry_n_parents(fields, _int=int):
1792
path_name_file_id_key = (fields[0], fields[1], fields[2])
1793
trees = [(fields[cur], # minikind
1794
fields[cur+1], # fingerprint
1795
_int(fields[cur+2]), # size
1796
fields[cur+3] == 'y', # executable
1797
fields[cur+4], # stat or revision_id
1798
) for cur in xrange(3, len(fields)-1, 5)]
1799
return path_name_file_id_key, trees
1800
return fields_to_entry_n_parents
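    # Illustrative sketch (not from the original source; values hypothetical):
    # how one flat field list from a serialised row maps onto an in-memory
    # entry when there are no parent trees:
    #   fields = ['dir', 'name', 'file-id', 'f', 'sha1...', '30', 'y', 'packed']
    #   fields_to_entry_0_parents(fields) would yield roughly:
    #   (('dir', 'name', 'file-id'),
    #    [('f', 'sha1...', 30, True, 'packed')])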
1802
def get_parent_ids(self):
1803
"""Return a list of the parent tree ids for the directory state."""
1804
self._read_header_if_needed()
1805
return list(self._parents)
    def _get_block_entry_index(self, dirname, basename, tree_index):
        """Get the coordinates for a path in the state structure.

        :param dirname: The utf8 dirname to lookup.
        :param basename: The utf8 basename to lookup.
        :param tree_index: The index of the tree for which this lookup should
            be done.
        :return: A tuple describing where the path is located, or should be
            inserted. The tuple contains four fields: the block index, the row
            index, the directory is present (boolean), the entire path is
            present (boolean). There is no guarantee that either
            coordinate is currently reachable unless the found field for it is
            True. For instance, a directory not present in the searched tree
            may be returned with a value one greater than the current highest
            block offset. The directory present field will always be True when
            the path present field is True. The directory present field does
            NOT indicate that the directory is present in the searched tree,
            rather it indicates that there are at least some files in some
            other tree present there.
        """
        self._read_dirblocks_if_needed()
        key = dirname, basename, ''
        block_index, present = self._find_block_index_from_key(key)
        if not present:
            # no such directory - return the dir index and 0 for the row.
            return block_index, 0, False, False
        block = self._dirblocks[block_index][1] # access the entries only
        entry_index, present = self._find_entry_index(key, block)
        # linear search through entries at this path to find the one
        # requested.
        while entry_index < len(block) and block[entry_index][0][1] == basename:
            if block[entry_index][1][tree_index][0] not in 'ar':
                # neither absent or relocated
                return block_index, entry_index, True, True
            entry_index += 1
        return block_index, entry_index, True, False
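    # Illustrative sketch (not from the original source): typical use of the
    # four-field result. Only trust the coordinates whose "present" flag is
    # True ('dir' and 'name' are hypothetical values):
    #   block_index, entry_index, dir_present, file_present = \
    #       state._get_block_entry_index('dir', 'name', 0)
    #   if file_present:
    #       entry = state._dirblocks[block_index][1][entry_index]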
    def _get_entry(self, tree_index, fileid_utf8=None, path_utf8=None, include_deleted=False):
        """Get the dirstate entry for path in tree tree_index.

        If either file_id or path is supplied, it is used as the key to lookup.
        If both are supplied, the fastest lookup is used, and an error is
        raised if they do not both point at the same row.

        :param tree_index: The index of the tree we wish to locate this path
            in. If the path is present in that tree, the entry containing its
            details is returned, otherwise (None, None) is returned.
            0 is the working tree, higher indexes are successive parent
            trees.
        :param fileid_utf8: A utf8 file_id to look up.
        :param path_utf8: A utf8 path to be looked up.
        :param include_deleted: If True, and performing a lookup via
            fileid_utf8 rather than path_utf8, return an entry for deleted
            entries.

        :return: The dirstate entry tuple for path, or (None, None)
        """
self._read_dirblocks_if_needed()
1864
if path_utf8 is not None:
1865
if type(path_utf8) is not str:
1866
raise AssertionError('path_utf8 is not a str: %s %s'
1867
% (type(path_utf8), path_utf8))
1868
# path lookups are faster
1869
dirname, basename = osutils.split(path_utf8)
1870
block_index, entry_index, dir_present, file_present = \
1871
self._get_block_entry_index(dirname, basename, tree_index)
1872
if not file_present:
1874
entry = self._dirblocks[block_index][1][entry_index]
1875
if not (entry[0][2] and entry[1][tree_index][0] not in ('a', 'r')):
1876
raise AssertionError('unversioned entry?')
1878
if entry[0][2] != fileid_utf8:
1879
self._changes_aborted = True
1880
raise errors.BzrError('integrity error ? : mismatching'
1881
' tree_index, file_id and path')
1884
possible_keys = self._get_id_index().get(fileid_utf8, None)
1885
if not possible_keys:
1887
for key in possible_keys:
1888
block_index, present = \
1889
self._find_block_index_from_key(key)
1890
# strange, probably indicates an out of date
1891
# id index - for now, allow this.
1894
# WARNING: DO not change this code to use _get_block_entry_index
1895
# as that function is not suitable: it does not use the key
1896
# to lookup, and thus the wrong coordinates are returned.
1897
block = self._dirblocks[block_index][1]
1898
entry_index, present = self._find_entry_index(key, block)
1900
entry = self._dirblocks[block_index][1][entry_index]
1901
if entry[1][tree_index][0] in 'fdlt':
1902
# this is the result we are looking for: the
1903
# real home of this file_id in this tree.
1905
if entry[1][tree_index][0] == 'a':
1906
# there is no home for this entry in this tree
1910
if entry[1][tree_index][0] != 'r':
1911
raise AssertionError(
1912
"entry %r has invalid minikind %r for tree %r" \
1914
entry[1][tree_index][0],
1916
real_path = entry[1][tree_index][1]
1917
return self._get_entry(tree_index, fileid_utf8=fileid_utf8,
1918
path_utf8=real_path)
1922
def initialize(cls, path, sha1_provider=None):
1923
"""Create a new dirstate on path.
1925
The new dirstate will be an empty tree - that is it has no parents,
1926
and only a root node - which has id ROOT_ID.
1928
:param path: The name of the file for the dirstate.
1929
:param sha1_provider: an object meeting the SHA1Provider interface.
1930
If None, a DefaultSHA1Provider is used.
1931
:return: A write-locked DirState object.
1933
# This constructs a new DirState object on a path, sets the _state_file
1934
# to a new empty file for that path. It then calls _set_data() with our
1935
# stock empty dirstate information - a root with ROOT_ID, no children,
1936
# and no parents. Finally it calls save() to ensure that this data will
1938
if sha1_provider is None:
1939
sha1_provider = DefaultSHA1Provider()
1940
result = cls(path, sha1_provider)
1941
# root dir and root dir contents with no children.
1942
empty_tree_dirblocks = [('', []), ('', [])]
        # a new root directory, with a NULLSTAT.
        empty_tree_dirblocks[0][1].append(
            (('', '', inventory.ROOT_ID), [
                ('d', '', 0, False, DirState.NULLSTAT),
            ]))
        result.lock_write()
        try:
            result._set_data([], empty_tree_dirblocks)
            result.save()
        except:
            result.unlock()
            raise
        return result
    @staticmethod
    def _inv_entry_to_details(inv_entry):
        """Convert an inventory entry (from a revision tree) to state details.

        :param inv_entry: An inventory entry whose sha1 and link targets can be
            relied upon, and which has a revision set.
        :return: A details tuple - the details for a single tree at a path +
            id.
        """
        kind = inv_entry.kind
        minikind = DirState._kind_to_minikind[kind]
        tree_data = inv_entry.revision
        if kind == 'directory':
            fingerprint = ''
            size = 0
            executable = False
        elif kind == 'symlink':
            if inv_entry.symlink_target is None:
                fingerprint = ''
            else:
                fingerprint = inv_entry.symlink_target.encode('utf8')
            size = 0
            executable = False
        elif kind == 'file':
            fingerprint = inv_entry.text_sha1 or ''
            size = inv_entry.text_size or 0
            executable = inv_entry.executable
        elif kind == 'tree-reference':
            fingerprint = inv_entry.reference_revision or ''
            size = 0
            executable = False
        else:
            raise Exception("can't pack %s" % inv_entry)
        return (minikind, fingerprint, size, executable, tree_data)
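    # Illustrative sketch (not from the original source; values hypothetical):
    # example details tuples this conversion produces for the common kinds.
    #   file:      ('f', '1e4a...sha1...', 120, False, 'rev-id-1')
    #   directory: ('d', '', 0, False, 'rev-id-1')
    #   symlink:   ('l', 'target/path', 0, False, 'rev-id-1')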
1992
def _iter_child_entries(self, tree_index, path_utf8):
1993
"""Iterate over all the entries that are children of path_utf.
1995
This only returns entries that are present (not in 'a', 'r') in
1996
tree_index. tree_index data is not refreshed, so if tree 0 is used,
1997
results may differ from that obtained if paths were statted to
1998
determine what ones were directories.
2000
Asking for the children of a non-directory will return an empty
2004
next_pending_dirs = [path_utf8]
2006
while next_pending_dirs:
2007
pending_dirs = next_pending_dirs
2008
next_pending_dirs = []
2009
for path in pending_dirs:
2010
block_index, present = self._find_block_index_from_key(
2012
if block_index == 0:
2014
if len(self._dirblocks) == 1:
2015
# asked for the children of the root with no other
2019
# children of a non-directory asked for.
2021
block = self._dirblocks[block_index]
2022
for entry in block[1]:
2023
kind = entry[1][tree_index][0]
2024
if kind not in absent:
2028
path = entry[0][0] + '/' + entry[0][1]
2031
next_pending_dirs.append(path)
    def _iter_entries(self):
        """Iterate over all the entries in the dirstate.

        Each yielded item is an entry in the standard format described in the
        docstring of bzrlib.dirstate.
        """
        self._read_dirblocks_if_needed()
        for directory in self._dirblocks:
            for entry in directory[1]:
                yield entry
    def _get_id_index(self):
        """Get an id index of self._dirblocks."""
        if self._id_index is None:
            id_index = {}
            for key, tree_details in self._iter_entries():
                id_index.setdefault(key[2], set()).add(key)
            self._id_index = id_index
        return self._id_index
    def _get_output_lines(self, lines):
        """Format lines for final output.

        :param lines: A sequence of lines containing the parents list and the
            path lines.
        """
        output_lines = [DirState.HEADER_FORMAT_3]
        lines.append('') # a final newline
        inventory_text = '\0\n\0'.join(lines)
        output_lines.append('crc32: %s\n' % (zlib.crc32(inventory_text),))
        # -3, 1 for num parents, 1 for ghosts, 1 for final newline
        num_entries = len(lines)-3
        output_lines.append('num_entries: %s\n' % (num_entries,))
        output_lines.append(inventory_text)
        return output_lines
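    # Illustrative sketch (not from the original source): the serialised
    # prelude produced here follows the grammar at the top of this module.
    # For a dirstate with no parents and a single root entry it looks roughly
    # like:
    #   #bazaar dirstate flat format 3
    #   crc32: <signed integer>
    #   num_entries: 1
    #   <NUL-separated parent, ghost and entry data...>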
    def _make_deleted_row(self, fileid_utf8, parents):
        """Return a deleted row for fileid_utf8."""
        return ('/', 'RECYCLED.BIN', 'file', fileid_utf8, 0, DirState.NULLSTAT,
            ''), parents
2074
def _num_present_parents(self):
2075
"""The number of parent entries in each record row."""
2076
return len(self._parents) - len(self._ghosts)
    @staticmethod
    def on_file(path, sha1_provider=None):
        """Construct a DirState on the file at path "path".

        :param path: The path at which the dirstate file on disk should live.
        :param sha1_provider: an object meeting the SHA1Provider interface.
            If None, a DefaultSHA1Provider is used.
        :return: An unlocked DirState object, associated with the given path.
        """
        if sha1_provider is None:
            sha1_provider = DefaultSHA1Provider()
        result = DirState(path, sha1_provider)
        return result
def _read_dirblocks_if_needed(self):
2093
"""Read in all the dirblocks from the file if they are not in memory.
2095
This populates self._dirblocks, and sets self._dirblock_state to
2096
IN_MEMORY_UNMODIFIED. It is not currently ready for incremental block
2099
self._read_header_if_needed()
2100
if self._dirblock_state == DirState.NOT_IN_MEMORY:
2101
_read_dirblocks(self)
2103
def _read_header(self):
2104
"""This reads in the metadata header, and the parent ids.
2106
After reading in, the file should be positioned at the null
2107
just before the start of the first record in the file.
2109
:return: (expected crc checksum, number of entries, parent list)
2111
self._read_prelude()
2112
parent_line = self._state_file.readline()
2113
info = parent_line.split('\0')
2114
num_parents = int(info[0])
2115
self._parents = info[1:-1]
2116
ghost_line = self._state_file.readline()
2117
info = ghost_line.split('\0')
2118
num_ghosts = int(info[1])
2119
self._ghosts = info[2:-1]
2120
self._header_state = DirState.IN_MEMORY_UNMODIFIED
2121
self._end_of_header = self._state_file.tell()
2123
def _read_header_if_needed(self):
2124
"""Read the header of the dirstate file if needed."""
2125
# inline this as it will be called a lot
2126
if not self._lock_token:
2127
raise errors.ObjectNotLocked(self)
2128
if self._header_state == DirState.NOT_IN_MEMORY:
    self._read_header()
def _read_prelude(self):
2132
"""Read in the prelude header of the dirstate file.
2134
This only reads in the stuff that is not connected to the crc
2135
checksum. The position will be correct to read in the rest of
2136
the file and check the checksum after this point.
2137
The next entry in the file should be the number of parents,
2138
and their ids. Followed by a newline.
2140
header = self._state_file.readline()
2141
if header != DirState.HEADER_FORMAT_3:
2142
raise errors.BzrError(
2143
'invalid header line: %r' % (header,))
2144
crc_line = self._state_file.readline()
2145
if not crc_line.startswith('crc32: '):
2146
raise errors.BzrError('missing crc32 checksum: %r' % crc_line)
2147
self.crc_expected = int(crc_line[len('crc32: '):-1])
2148
num_entries_line = self._state_file.readline()
2149
if not num_entries_line.startswith('num_entries: '):
2150
raise errors.BzrError('missing num_entries line')
2151
self._num_entries = int(num_entries_line[len('num_entries: '):-1])
2153
def sha1_from_stat(self, path, stat_result, _pack_stat=pack_stat):
2154
"""Find a sha1 given a stat lookup."""
2155
return self._get_packed_stat_index().get(_pack_stat(stat_result), None)
2157
def _get_packed_stat_index(self):
2158
"""Get a packed_stat index of self._dirblocks."""
2159
if self._packed_stat_index is None:
    index = {}
    for key, tree_details in self._iter_entries():
2162
if tree_details[0][0] == 'f':
2163
index[tree_details[0][4]] = tree_details[0][1]
2164
self._packed_stat_index = index
2165
return self._packed_stat_index
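    # Illustrative sketch (not from the original source): the packed-stat index
    # lets sha1_from_stat answer "what sha1 did we last record for a file that
    # stats like this?" without re-reading the file ('some-file' is a
    # hypothetical path):
    #   import os
    #   st = os.lstat('some-file')
    #   cached_sha1 = state.sha1_from_stat('some-file', st)
    #   # None means the stat does not match any recorded 'f' entry.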
def save(self):
    """Save any pending changes created during this session.
2170
We reuse the existing file, because that prevents race conditions with
2171
file creation, and use oslocks on it to prevent concurrent modification
2172
and reads - because dirstate's incremental data aggregation is not
2173
compatible with reading a modified file, and replacing a file in use by
2174
another process is impossible on Windows.
2176
A dirstate in read only mode should be smart enough though to validate
2177
that the file has not changed, and otherwise discard its cache and
2178
start over, to allow for fine grained read lock duration, so 'status'
won't block 'commit' - for example.
"""
if self._changes_aborted:
2182
# Should this be a warning? For now, I'm expecting that places that
2183
# mark it inconsistent will warn, making a warning here redundant.
2184
trace.mutter('Not saving DirState because '
2185
'_changes_aborted is set.')
return
if (self._header_state == DirState.IN_MEMORY_MODIFIED or
2188
self._dirblock_state == DirState.IN_MEMORY_MODIFIED):
2190
grabbed_write_lock = False
2191
if self._lock_state != 'w':
2192
grabbed_write_lock, new_lock = self._lock_token.temporary_write_lock()
2193
# Switch over to the new lock, as the old one may be closed.
2194
# TODO: jam 20070315 We should validate the disk file has
2195
# not changed contents. Since temporary_write_lock may
2196
# not be an atomic operation.
2197
self._lock_token = new_lock
2198
self._state_file = new_lock.f
2199
if not grabbed_write_lock:
2200
# We couldn't grab a write lock, so we switch back to a read one
return
try:
self._state_file.seek(0)
2204
self._state_file.writelines(self.get_lines())
2205
self._state_file.truncate()
2206
self._state_file.flush()
2207
self._header_state = DirState.IN_MEMORY_UNMODIFIED
2208
self._dirblock_state = DirState.IN_MEMORY_UNMODIFIED
finally:
if grabbed_write_lock:
2211
self._lock_token = self._lock_token.restore_read_lock()
2212
self._state_file = self._lock_token.f
2213
# TODO: jam 20070315 We should validate the disk file has
2214
# not changed contents. Since restore_read_lock may
2215
# not be an atomic operation.
2217
def _set_data(self, parent_ids, dirblocks):
2218
"""Set the full dirstate data in memory.
2220
This is an internal function used to completely replace the objects
2221
in memory state. It puts the dirstate into state 'full-dirty'.
2223
:param parent_ids: A list of parent tree revision ids.
2224
:param dirblocks: A list containing one tuple for each directory in the
2225
tree. Each tuple contains the directory path and a list of entries
2226
found in that directory.
2228
# our memory copy is now authoritative.
2229
self._dirblocks = dirblocks
2230
self._header_state = DirState.IN_MEMORY_MODIFIED
2231
self._dirblock_state = DirState.IN_MEMORY_MODIFIED
2232
self._parents = list(parent_ids)
2233
self._id_index = None
2234
self._packed_stat_index = None
2236
def set_path_id(self, path, new_id):
2237
"""Change the id of path to new_id in the current working tree.
2239
:param path: The path inside the tree to set - '' is the root, 'foo'
2240
is the path foo in the root.
2241
:param new_id: The new id to assign to the path. This must be a utf8
2242
file id (not unicode, and not None).
2244
self._read_dirblocks_if_needed()
if len(path):
    # TODO: logic not written
2247
raise NotImplementedError(self.set_path_id)
2248
# TODO: check new id is unique
2249
entry = self._get_entry(0, path_utf8=path)
2250
if entry[0][2] == new_id:
2251
# Nothing to change.
return
# mark the old path absent, and insert a new root path
2254
self._make_absent(entry)
2255
self.update_minimal(('', '', new_id), 'd',
2256
path_utf8='', packed_stat=entry[1][0][4])
2257
self._dirblock_state = DirState.IN_MEMORY_MODIFIED
2258
if self._id_index is not None:
2259
self._id_index.setdefault(new_id, set()).add(entry[0])
2261
def set_parent_trees(self, trees, ghosts):
2262
"""Set the parent trees for the dirstate.
2264
:param trees: A list of revision_id, tree tuples. tree must be provided
2265
even if the revision_id refers to a ghost: supply an empty tree in
2267
:param ghosts: A list of the revision_ids that are ghosts at the time
2270
# TODO: generate a list of parent indexes to preserve to save
2271
# processing specific parent trees. In the common case one tree will
2272
# be preserved - the left most parent.
2273
# TODO: if the parent tree is a dirstate, we might want to walk them
2274
# all by path in parallel for 'optimal' common-case performance.
2275
# generate new root row.
2276
self._read_dirblocks_if_needed()
2277
# TODO future sketch: Examine the existing parents to generate a change
2278
# map and then walk the new parent trees only, mapping them into the
2279
# dirstate. Walk the dirstate at the same time to remove unreferenced
2282
# sketch: loop over all entries in the dirstate, cherry picking
2283
# entries from the parent trees, if they are not ghost trees.
2284
# after we finish walking the dirstate, all entries not in the dirstate
2285
# are deletes, so we want to append them to the end as per the design
2286
# discussions. So do a set difference on ids with the parents to
2287
# get deletes, and add them to the end.
2288
# During the update process we need to answer the following questions:
2289
# - find other keys containing a fileid in order to create cross-path
2290
# links. We don't trivially use the inventory from other trees
2291
# because this leads to either double touching, or to accessing
2293
# - find other keys containing a path
2294
# We accumulate each entry via this dictionary, including the root
2297
# we could do parallel iterators, but because file id data may be
2298
# scattered throughout, we dont save on index overhead: we have to look
2299
# at everything anyway. We can probably save cycles by reusing parent
2300
# data and doing an incremental update when adding an additional
2301
# parent, but for now the common cases are adding a new parent (merge),
2302
# and replacing completely (commit), and commit is more common: so
2303
# optimise merge later.
2305
# ---- start generation of full tree mapping data
2306
# what trees should we use?
2307
parent_trees = [tree for rev_id, tree in trees if rev_id not in ghosts]
2308
# how many trees do we end up with
2309
parent_count = len(parent_trees)
2311
# one: the current tree
2312
for entry in self._iter_entries():
2313
# skip entries not in the current tree
2314
if entry[1][0][0] in 'ar': # absent, relocated
    continue
by_path[entry[0]] = [entry[1][0]] + \
2317
[DirState.NULL_PARENT_DETAILS] * parent_count
2318
id_index[entry[0][2]] = set([entry[0]])
2320
# now the parent trees:
2321
for tree_index, tree in enumerate(parent_trees):
2322
# the index is off by one, adjust it.
2323
tree_index = tree_index + 1
2324
# when we add new locations for a fileid we need these ranges for
2325
# any fileid in this tree as we set the by_path[id] to:
2326
# already_processed_tree_details + new_details + new_location_suffix
2327
# the suffix is from tree_index+1:parent_count+1.
2328
new_location_suffix = [DirState.NULL_PARENT_DETAILS] * (parent_count - tree_index)
2329
# now stitch in all the entries from this tree
2330
for path, entry in tree.inventory.iter_entries_by_dir():
2331
# here we process each trees details for each item in the tree.
2332
# we first update any existing entries for the id at other paths,
2333
# then we either create or update the entry for the id at the
2334
# right path, and finally we add (if needed) a mapping from
2335
# file_id to this path. We do it in this order to allow us to
2336
# avoid checking all known paths for the id when generating a
2337
# new entry at this path: by adding the id->path mapping last,
2338
# all the mappings are valid and have correct relocation
2339
# records where needed.
2340
file_id = entry.file_id
2341
path_utf8 = path.encode('utf8')
2342
dirname, basename = osutils.split(path_utf8)
2343
new_entry_key = (dirname, basename, file_id)
2344
# tree index consistency: All other paths for this id in this tree
2345
# index must point to the correct path.
2346
for entry_key in id_index.setdefault(file_id, set()):
2347
# TODO:PROFILING: It might be faster to just update
2348
# rather than checking if we need to, and then overwrite
2349
# the one we are located at.
2350
if entry_key != new_entry_key:
2351
# this file id is at a different path in one of the
2352
# other trees, so put absent pointers there
2353
# This is the vertical axis in the matrix, all pointing
2355
by_path[entry_key][tree_index] = ('r', path_utf8, 0, False, '')
2356
# by path consistency: Insert into an existing path record (trivial), or
2357
# add a new one with relocation pointers for the other tree indexes.
2358
if new_entry_key in id_index[file_id]:
2359
# there is already an entry where this data belongs, just insert it.
2360
by_path[new_entry_key][tree_index] = \
2361
self._inv_entry_to_details(entry)
2363
# add relocated entries to the horizontal axis - this row
2364
# mapping from path,id. We need to look up the correct path
2365
# for the indexes from 0 to tree_index -1
2367
for lookup_index in xrange(tree_index):
2368
# boundary case: this is the first occurrence of file_id
# so there are no id_indexes, possibly take this out of
# the loop?
if not len(id_index[file_id]):
2372
new_details.append(DirState.NULL_PARENT_DETAILS)
2374
# grab any one entry, use it to find the right path.
2375
# TODO: optimise this to reduce memory use in highly
2376
# fragmented situations by reusing the relocation
2378
a_key = iter(id_index[file_id]).next()
2379
if by_path[a_key][lookup_index][0] in ('r', 'a'):
2380
# its a pointer or missing statement, use it as is.
2381
new_details.append(by_path[a_key][lookup_index])
2383
# we have the right key, make a pointer to it.
2384
real_path = ('/'.join(a_key[0:2])).strip('/')
2385
new_details.append(('r', real_path, 0, False, ''))
2386
new_details.append(self._inv_entry_to_details(entry))
2387
new_details.extend(new_location_suffix)
2388
by_path[new_entry_key] = new_details
2389
id_index[file_id].add(new_entry_key)
2390
# --- end generation of full tree mappings
2392
# sort and output all the entries
2393
new_entries = self._sort_entries(by_path.items())
2394
self._entries_to_current_state(new_entries)
2395
self._parents = [rev_id for rev_id, tree in trees]
2396
self._ghosts = list(ghosts)
2397
self._header_state = DirState.IN_MEMORY_MODIFIED
2398
self._dirblock_state = DirState.IN_MEMORY_MODIFIED
2399
self._id_index = id_index
2401
def _sort_entries(self, entry_list):
2402
"""Given a list of entries, sort them into the right order.
2404
This is done when constructing a new dirstate from trees - normally we
2405
try to keep everything in sorted blocks all the time, but sometimes
2406
it's easier to sort after the fact.
2409
# sort by: directory parts, file name, file id
2410
return entry[0][0].split('/'), entry[0][1], entry[0][2]
2411
return sorted(entry_list, key=_key)
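    # Illustrative sketch (not from the original source): sorting on the split
    # directory parts keeps children grouped under their parent, which plain
    # string ordering would not (e.g. 'a-b' sorts between 'a' and 'a/z'):
    #   >>> sorted(['a/z', 'a-b', 'a'])
    #   ['a', 'a-b', 'a/z']
    #   >>> sorted(['a/z', 'a-b', 'a'], key=lambda p: p.split('/'))
    #   ['a', 'a/z', 'a-b']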
2413
def set_state_from_inventory(self, new_inv):
2414
"""Set new_inv as the current state.
2416
This API is called by tree transform, and will usually occur with
2417
existing parent trees.
2419
:param new_inv: The inventory object to set current state from.
2421
if 'evil' in debug.debug_flags:
2422
trace.mutter_callsite(1,
2423
"set_state_from_inventory called; please mutate the tree instead")
2424
self._read_dirblocks_if_needed()
2426
# Two iterators: current data and new data, both in dirblock order.
2427
# We zip them together, which tells about entries that are new in the
2428
# inventory, or removed in the inventory, or present in both and
2431
# You might think we could just synthesize a new dirstate directly
2432
# since we're processing it in the right order. However, we need to
2433
# also consider there may be any number of parent trees and relocation
2434
# pointers, and we don't want to duplicate that here.
2435
new_iterator = new_inv.iter_entries_by_dir()
2436
# we will be modifying the dirstate, so we need a stable iterator. In
2437
# future we might write one, for now we just clone the state into a
2438
# list - which is a shallow copy.
2439
old_iterator = iter(list(self._iter_entries()))
2440
# both must have roots so this is safe:
2441
current_new = new_iterator.next()
2442
current_old = old_iterator.next()
2443
def advance(iterator):
    try:
        return iterator.next()
    except StopIteration:
        return None
while current_new or current_old:
2449
# skip entries in old that are not really there
2450
if current_old and current_old[1][0][0] in 'ar':
2451
# relocated or absent
2452
current_old = advance(old_iterator)
continue
# convert new into dirblock style
2456
new_path_utf8 = current_new[0].encode('utf8')
2457
new_dirname, new_basename = osutils.split(new_path_utf8)
2458
new_id = current_new[1].file_id
2459
new_entry_key = (new_dirname, new_basename, new_id)
2460
current_new_minikind = \
2461
DirState._kind_to_minikind[current_new[1].kind]
2462
if current_new_minikind == 't':
2463
fingerprint = current_new[1].reference_revision or ''
2465
# We normally only insert or remove records, or update
2466
# them when it has significantly changed. Then we want to
2467
# erase its fingerprint. Unaffected records should
2468
# normally not be updated at all.
2471
# for safety disable variables
2472
new_path_utf8 = new_dirname = new_basename = new_id = \
2473
new_entry_key = None
2474
# 5 cases, we don't have a value that is strictly greater than everything, so
# we make both end conditions explicit
if not current_old:
# old is finished: insert current_new into the state.
2478
self.update_minimal(new_entry_key, current_new_minikind,
2479
executable=current_new[1].executable,
2480
path_utf8=new_path_utf8, fingerprint=fingerprint)
2481
current_new = advance(new_iterator)
2482
elif not current_new:
2484
self._make_absent(current_old)
2485
current_old = advance(old_iterator)
2486
elif new_entry_key == current_old[0]:
2487
# same - common case
2488
# We're looking at the same path and id in both the dirstate
2489
# and inventory, so just need to update the fields in the
2490
# dirstate from the one in the inventory.
2491
# TODO: update the record if anything significant has changed.
2492
# the minimal required trigger is if the execute bit or cached
2494
if (current_old[1][0][3] != current_new[1].executable or
2495
current_old[1][0][0] != current_new_minikind):
2496
self.update_minimal(current_old[0], current_new_minikind,
2497
executable=current_new[1].executable,
2498
path_utf8=new_path_utf8, fingerprint=fingerprint)
2499
# both sides are dealt with, move on
2500
current_old = advance(old_iterator)
2501
current_new = advance(new_iterator)
2502
elif (cmp_by_dirs(new_dirname, current_old[0][0]) < 0
2503
or (new_dirname == current_old[0][0]
2504
and new_entry_key[1:] < current_old[0][1:])):
2506
# add a entry for this and advance new
2507
self.update_minimal(new_entry_key, current_new_minikind,
2508
executable=current_new[1].executable,
2509
path_utf8=new_path_utf8, fingerprint=fingerprint)
2510
current_new = advance(new_iterator)
2512
# we've advanced past the place where the old key would be,
2513
# without seeing it in the new list. so it must be gone.
2514
self._make_absent(current_old)
2515
current_old = advance(old_iterator)
2516
self._dirblock_state = DirState.IN_MEMORY_MODIFIED
2517
self._id_index = None
2518
self._packed_stat_index = None
2520
def _make_absent(self, current_old):
2521
"""Mark current_old - an entry - as absent for tree 0.
2523
:return: True if this was the last details entry for the entry key:
2524
that is, if the underlying block has had the entry removed, thus
2525
shrinking in length.
2527
# build up paths that this id will be left at after the change is made,
2528
# so we can update their cross references in tree 0
2529
all_remaining_keys = set()
2530
# Dont check the working tree, because it's going.
2531
for details in current_old[1][1:]:
2532
if details[0] not in 'ar': # absent, relocated
2533
all_remaining_keys.add(current_old[0])
2534
elif details[0] == 'r': # relocated
2535
# record the key for the real path.
2536
all_remaining_keys.add(tuple(osutils.split(details[1])) + (current_old[0][2],))
2537
# absent rows are not present at any path.
2538
last_reference = current_old[0] not in all_remaining_keys
2540
# the current row consists entirely of the current item (being marked
2541
# absent), and relocated or absent entries for the other trees:
2542
# Remove it, its meaningless.
2543
block = self._find_block(current_old[0])
2544
entry_index, present = self._find_entry_index(current_old[0], block[1])
2546
raise AssertionError('could not find entry for %s' % (current_old,))
2547
block[1].pop(entry_index)
2548
# if we have an id_index in use, remove this key from it for this id.
2549
if self._id_index is not None:
2550
self._id_index[current_old[0][2]].remove(current_old[0])
2551
# update all remaining keys for this id to record it as absent. The
2552
# existing details may either be the record we are marking as deleted
2553
# (if there were other trees with the id present at this path), or may
2555
for update_key in all_remaining_keys:
2556
update_block_index, present = \
2557
self._find_block_index_from_key(update_key)
2559
raise AssertionError('could not find block for %s' % (update_key,))
2560
update_entry_index, present = \
2561
self._find_entry_index(update_key, self._dirblocks[update_block_index][1])
2563
raise AssertionError('could not find entry for %s' % (update_key,))
2564
update_tree_details = self._dirblocks[update_block_index][1][update_entry_index][1]
2565
# it must not be absent at the moment
2566
if update_tree_details[0][0] == 'a': # absent
2567
raise AssertionError('bad row %r' % (update_tree_details,))
2568
update_tree_details[0] = DirState.NULL_PARENT_DETAILS
2569
self._dirblock_state = DirState.IN_MEMORY_MODIFIED
2570
return last_reference
2572
def update_minimal(self, key, minikind, executable=False, fingerprint='',
2573
packed_stat=None, size=0, path_utf8=None):
2574
"""Update an entry to the state in tree 0.
2576
This will either create a new entry at 'key' or update an existing one.
2577
It also makes sure that any other records which might mention this are
2580
:param key: (dir, name, file_id) for the new entry
2581
:param minikind: The type for the entry ('f' == 'file', 'd' ==
2583
:param executable: Should the executable bit be set?
2584
:param fingerprint: Simple fingerprint for new entry: canonical-form
2585
sha1 for files, referenced revision id for subtrees, etc.
2586
:param packed_stat: Packed stat value for new entry.
2587
:param size: Size information for new entry
2588
:param path_utf8: key[0] + '/' + key[1], just passed in to avoid doing
2591
If packed_stat and fingerprint are not given, they're invalidated in
2594
block = self._find_block(key)[1]
2595
if packed_stat is None:
2596
packed_stat = DirState.NULLSTAT
2597
# XXX: Some callers pass '' as the packed_stat, and it seems to be
2598
# sometimes present in the dirstate - this seems oddly inconsistent.
2600
entry_index, present = self._find_entry_index(key, block)
2601
new_details = (minikind, fingerprint, size, executable, packed_stat)
2602
id_index = self._get_id_index()
2604
# new entry, synthesis cross reference here,
2605
existing_keys = id_index.setdefault(key[2], set())
2606
if not existing_keys:
2607
# not currently in the state, simplest case
2608
new_entry = key, [new_details] + self._empty_parent_info()
2610
# present at one or more existing other paths.
2611
# grab one of them and use it to generate parent
2612
# relocation/absent entries.
2613
new_entry = key, [new_details]
2614
for other_key in existing_keys:
2615
# change the record at other to be a pointer to this new
2616
# record. The loop looks similar to the change to
2617
# relocations when updating an existing record but its not:
2618
# the test for existing kinds is different: this can be
2619
# factored out to a helper though.
2620
other_block_index, present = self._find_block_index_from_key(other_key)
2622
raise AssertionError('could not find block for %s' % (other_key,))
2623
other_entry_index, present = self._find_entry_index(other_key,
2624
self._dirblocks[other_block_index][1])
2626
raise AssertionError('could not find entry for %s' % (other_key,))
2627
if path_utf8 is None:
2628
raise AssertionError('no path')
2629
self._dirblocks[other_block_index][1][other_entry_index][1][0] = \
2630
('r', path_utf8, 0, False, '')
2632
num_present_parents = self._num_present_parents()
2633
for lookup_index in xrange(1, num_present_parents + 1):
2634
# grab any one entry, use it to find the right path.
2635
# TODO: optimise this to reduce memory use in highly
2636
# fragmented situations by reusing the relocation
2638
update_block_index, present = \
2639
self._find_block_index_from_key(other_key)
2641
raise AssertionError('could not find block for %s' % (other_key,))
2642
update_entry_index, present = \
2643
self._find_entry_index(other_key, self._dirblocks[update_block_index][1])
2645
raise AssertionError('could not find entry for %s' % (other_key,))
2646
update_details = self._dirblocks[update_block_index][1][update_entry_index][1][lookup_index]
2647
if update_details[0] in 'ar': # relocated, absent
2648
# its a pointer or absent in lookup_index's tree, use
2650
new_entry[1].append(update_details)
2652
# we have the right key, make a pointer to it.
2653
pointer_path = osutils.pathjoin(*other_key[0:2])
2654
new_entry[1].append(('r', pointer_path, 0, False, ''))
2655
block.insert(entry_index, new_entry)
2656
existing_keys.add(key)
2658
# Does the new state matter?
2659
block[entry_index][1][0] = new_details
2660
# parents cannot be affected by what we do.
2661
# other occurrences of this id can be found
2662
# from the id index.
2664
# tree index consistency: All other paths for this id in this tree
2665
# index must point to the correct path. We have to loop here because
2666
# we may have passed entries in the state with this file id already
2667
# that were absent - where parent entries are - and they need to be
2668
# converted to relocated.
2669
if path_utf8 is None:
2670
raise AssertionError('no path')
2671
for entry_key in id_index.setdefault(key[2], set()):
2672
# TODO:PROFILING: It might be faster to just update
2673
# rather than checking if we need to, and then overwrite
2674
# the one we are located at.
2675
if entry_key != key:
2676
# this file id is at a different path in one of the
2677
# other trees, so put absent pointers there
2678
# This is the vertical axis in the matrix, all pointing
2680
block_index, present = self._find_block_index_from_key(entry_key)
2682
raise AssertionError('not present: %r', entry_key)
2683
entry_index, present = self._find_entry_index(entry_key, self._dirblocks[block_index][1])
2685
raise AssertionError('not present: %r', entry_key)
2686
self._dirblocks[block_index][1][entry_index][1][0] = \
2687
('r', path_utf8, 0, False, '')
2688
# add a containing dirblock if needed.
2689
if new_details[0] == 'd':
2690
subdir_key = (osutils.pathjoin(*key[0:2]), '', '')
2691
block_index, present = self._find_block_index_from_key(subdir_key)
2693
self._dirblocks.insert(block_index, (subdir_key[0], []))
2695
self._dirblock_state = DirState.IN_MEMORY_MODIFIED
2697
def _validate(self):
2698
"""Check that invariants on the dirblock are correct.
2700
This can be useful in debugging; it shouldn't be necessary in
2703
This must be called with a lock held.
2705
# NOTE: This must always raise AssertionError not just assert,
2706
# otherwise it may not behave properly under python -O
2708
# TODO: All entries must have some content that's not 'a' or 'r',
2709
# otherwise it could just be removed.
2711
# TODO: All relocations must point directly to a real entry.
2713
# TODO: No repeated keys.
2716
from pprint import pformat
2717
self._read_dirblocks_if_needed()
2718
if len(self._dirblocks) > 0:
2719
if not self._dirblocks[0][0] == '':
2720
raise AssertionError(
2721
"dirblocks don't start with root block:\n" + \
2722
pformat(self._dirblocks))
2723
if len(self._dirblocks) > 1:
2724
if not self._dirblocks[1][0] == '':
2725
raise AssertionError(
2726
"dirblocks missing root directory:\n" + \
2727
pformat(self._dirblocks))
2728
# the dirblocks are sorted by their path components, name, and dir id
2729
dir_names = [d[0].split('/')
2730
for d in self._dirblocks[1:]]
2731
if dir_names != sorted(dir_names):
2732
raise AssertionError(
2733
"dir names are not in sorted order:\n" + \
2734
pformat(self._dirblocks) + \
2737
for dirblock in self._dirblocks:
2738
# within each dirblock, the entries are sorted by filename and
2740
for entry in dirblock[1]:
2741
if dirblock[0] != entry[0][0]:
2742
raise AssertionError(
2744
"doesn't match directory name in\n%r" %
2745
(entry, pformat(dirblock)))
2746
if dirblock[1] != sorted(dirblock[1]):
2747
raise AssertionError(
2748
"dirblock for %r is not sorted:\n%s" % \
2749
(dirblock[0], pformat(dirblock)))
2751
def check_valid_parent():
2752
"""Check that the current entry has a valid parent.
2754
This makes sure that the parent has a record,
2755
and that the parent isn't marked as "absent" in the
2756
current tree. (It is invalid to have a non-absent file in an absent
2759
if entry[0][0:2] == ('', ''):
2760
# There should be no parent for the root row
2762
parent_entry = self._get_entry(tree_index, path_utf8=entry[0][0])
2763
if parent_entry == (None, None):
2764
raise AssertionError(
2765
"no parent entry for: %s in tree %s"
2766
% (this_path, tree_index))
2767
if parent_entry[1][tree_index][0] != 'd':
2768
raise AssertionError(
2769
"Parent entry for %s is not marked as a valid"
2770
" directory. %s" % (this_path, parent_entry,))
2772
# For each file id, for each tree: either
2773
# the file id is not present at all; all rows with that id in the
2774
# key have it marked as 'absent'
2775
# OR the file id is present under exactly one name; any other entries
2776
# that mention that id point to the correct name.
2778
# We check this with a dict per tree pointing either to the present
2779
# name, or None if absent.
2780
tree_count = self._num_present_parents() + 1
2781
id_path_maps = [dict() for i in range(tree_count)]
2782
# Make sure that all renamed entries point to the correct location.
2783
for entry in self._iter_entries():
2784
file_id = entry[0][2]
2785
this_path = osutils.pathjoin(entry[0][0], entry[0][1])
2786
if len(entry[1]) != tree_count:
2787
raise AssertionError(
2788
"wrong number of entry details for row\n%s" \
2789
",\nexpected %d" % \
2790
(pformat(entry), tree_count))
2791
absent_positions = 0
2792
for tree_index, tree_state in enumerate(entry[1]):
2793
this_tree_map = id_path_maps[tree_index]
2794
minikind = tree_state[0]
2795
if minikind in 'ar':
2796
absent_positions += 1
2797
# have we seen this id before in this column?
2798
if file_id in this_tree_map:
2799
previous_path, previous_loc = this_tree_map[file_id]
2800
# any later mention of this file must be consistent with
2801
# what was said before
2803
if previous_path is not None:
2804
raise AssertionError(
2805
"file %s is absent in row %r but also present " \
2807
(file_id, entry, previous_path))
2808
elif minikind == 'r':
2809
target_location = tree_state[1]
2810
if previous_path != target_location:
2811
raise AssertionError(
2812
"file %s relocation in row %r but also at %r" \
2813
% (file_id, entry, previous_path))
2815
# a file, directory, etc - may have been previously
2816
# pointed to by a relocation, which must point here
2817
if previous_path != this_path:
2818
raise AssertionError(
2819
"entry %r inconsistent with previous path %r "
2821
(entry, previous_path, previous_loc))
2822
check_valid_parent()
2825
# absent; should not occur anywhere else
2826
this_tree_map[file_id] = None, this_path
2827
elif minikind == 'r':
2828
# relocation, must occur at expected location
2829
this_tree_map[file_id] = tree_state[1], this_path
2831
this_tree_map[file_id] = this_path, this_path
2832
check_valid_parent()
2833
if absent_positions == tree_count:
2834
raise AssertionError(
2835
"entry %r has no data for any tree." % (entry,))
2837
def _wipe_state(self):
2838
"""Forget all state information about the dirstate."""
2839
self._header_state = DirState.NOT_IN_MEMORY
2840
self._dirblock_state = DirState.NOT_IN_MEMORY
2841
self._changes_aborted = False
2844
self._dirblocks = []
2845
self._id_index = None
2846
self._packed_stat_index = None
2847
self._end_of_header = None
2848
self._cutoff_time = None
2849
self._split_path_cache = {}
2851
def lock_read(self):
2852
"""Acquire a read lock on the dirstate."""
2853
if self._lock_token is not None:
2854
raise errors.LockContention(self._lock_token)
2855
# TODO: jam 20070301 Rather than wiping completely, if the blocks are
2856
# already in memory, we could read just the header and check for
2857
# any modification. If not modified, we can just leave things
2859
self._lock_token = lock.ReadLock(self._filename)
2860
self._lock_state = 'r'
2861
self._state_file = self._lock_token.f
2864
def lock_write(self):
2865
"""Acquire a write lock on the dirstate."""
2866
if self._lock_token is not None:
2867
raise errors.LockContention(self._lock_token)
2868
# TODO: jam 20070301 Rather than wiping completely, if the blocks are
2869
# already in memory, we could read just the header and check for
2870
# any modification. If not modified, we can just leave things
2872
self._lock_token = lock.WriteLock(self._filename)
2873
self._lock_state = 'w'
2874
self._state_file = self._lock_token.f
2878
"""Drop any locks held on the dirstate."""
2879
if self._lock_token is None:
2880
raise errors.LockNotHeld(self)
2881
# TODO: jam 20070301 Rather than wiping completely, if the blocks are
2882
# already in memory, we could read just the header and check for
2883
# any modification. If not modified, we can just leave things
2885
self._state_file = None
2886
self._lock_state = None
2887
self._lock_token.unlock()
2888
self._lock_token = None
2889
self._split_path_cache = {}
2891
def _requires_lock(self):
2892
"""Check that a lock is currently held by someone on the dirstate."""
2893
if not self._lock_token:
2894
raise errors.ObjectNotLocked(self)
2897
def py_update_entry(state, entry, abspath, stat_value,
2898
_stat_to_minikind=DirState._stat_to_minikind,
2899
_pack_stat=pack_stat):
2900
"""Update the entry based on what is actually on disk.
2902
This function only calculates the sha if it needs to - if the entry is
2903
uncachable, or clearly different to the first parent's entry, no sha
2904
is calculated, and None is returned.
2906
:param state: The dirstate this entry is in.
2907
:param entry: This is the dirblock entry for the file in question.
2908
:param abspath: The path on disk for this file.
2909
:param stat_value: The stat value done on the path.
2910
:return: None, or The sha1 hexdigest of the file (40 bytes) or link
2911
target of a symlink.
2914
minikind = _stat_to_minikind[stat_value.st_mode & 0170000]
2918
packed_stat = _pack_stat(stat_value)
2919
(saved_minikind, saved_link_or_sha1, saved_file_size,
2920
saved_executable, saved_packed_stat) = entry[1][0]
2922
if minikind == 'd' and saved_minikind == 't':
    minikind = 't'
if (minikind == saved_minikind
2925
and packed_stat == saved_packed_stat):
2926
# The stat hasn't changed since we saved, so we can re-use the
2931
# size should also be in packed_stat
2932
if saved_file_size == stat_value.st_size:
2933
return saved_link_or_sha1
2935
# If we have gotten this far, that means that we need to actually
2936
# process this entry.
2939
executable = state._is_executable(stat_value.st_mode,
2941
if state._cutoff_time is None:
2942
state._sha_cutoff_time()
2943
if (stat_value.st_mtime < state._cutoff_time
2944
and stat_value.st_ctime < state._cutoff_time
2945
and len(entry[1]) > 1
2946
and entry[1][1][0] != 'a'):
2947
# Could check for size changes for further optimised
2948
# avoidance of sha1's. However the most prominent case of
2949
# over-shaing is during initial add, which this catches.
2950
# Besides, if content filtering happens, size and sha
2951
# are calculated at the same time, so checking just the size
2952
# gains nothing w.r.t. performance.
2953
link_or_sha1 = state._sha1_file(abspath)
2954
entry[1][0] = ('f', link_or_sha1, stat_value.st_size,
2955
executable, packed_stat)
else:
entry[1][0] = ('f', '', stat_value.st_size,
2958
executable, DirState.NULLSTAT)
2959
elif minikind == 'd':
2961
entry[1][0] = ('d', '', 0, False, packed_stat)
2962
if saved_minikind != 'd':
2963
# This changed from something into a directory. Make sure we
2964
# have a directory block for it. This doesn't happen very
2965
# often, so this doesn't have to be super fast.
2966
block_index, entry_index, dir_present, file_present = \
2967
state._get_block_entry_index(entry[0][0], entry[0][1], 0)
2968
state._ensure_block(block_index, entry_index,
2969
osutils.pathjoin(entry[0][0], entry[0][1]))
2970
elif minikind == 'l':
2971
link_or_sha1 = state._read_link(abspath, saved_link_or_sha1)
2972
if state._cutoff_time is None:
2973
state._sha_cutoff_time()
2974
if (stat_value.st_mtime < state._cutoff_time
2975
and stat_value.st_ctime < state._cutoff_time):
2976
entry[1][0] = ('l', link_or_sha1, stat_value.st_size,
2979
entry[1][0] = ('l', '', stat_value.st_size,
2980
False, DirState.NULLSTAT)
2981
state._dirblock_state = DirState.IN_MEMORY_MODIFIED
2985
class ProcessEntryPython(object):
2987
__slots__ = ["old_dirname_to_file_id", "new_dirname_to_file_id", "uninteresting",
2988
"last_source_parent", "last_target_parent", "include_unchanged",
2989
"use_filesystem_for_exec", "utf8_decode", "searched_specific_files",
2990
"search_specific_files", "state", "source_index", "target_index",
2991
"want_unversioned", "tree"]
2993
def __init__(self, include_unchanged, use_filesystem_for_exec,
2994
search_specific_files, state, source_index, target_index,
2995
want_unversioned, tree):
2996
self.old_dirname_to_file_id = {}
2997
self.new_dirname_to_file_id = {}
2998
# Just a sentry, so that _process_entry can say that this
2999
# record is handled, but isn't interesting to process (unchanged)
3000
self.uninteresting = object()
3001
# Using a list so that we can access the values and change them in
3002
# nested scope. Each one is [path, file_id, entry]
3003
self.last_source_parent = [None, None]
3004
self.last_target_parent = [None, None]
3005
self.include_unchanged = include_unchanged
3006
self.use_filesystem_for_exec = use_filesystem_for_exec
3007
self.utf8_decode = cache_utf8._utf8_decode
3008
# for all search_indexs in each path at or under each element of
3009
# search_specific_files, if the detail is relocated: add the id, and add the
3010
# relocated path as one to search if its not searched already. If the
3011
# detail is not relocated, add the id.
3012
self.searched_specific_files = set()
3013
self.search_specific_files = search_specific_files
3015
self.source_index = source_index
3016
self.target_index = target_index
3017
self.want_unversioned = want_unversioned
3020
def _process_entry(self, entry, path_info, pathjoin=osutils.pathjoin):
3021
"""Compare an entry and real disk to generate delta information.
3023
:param path_info: top_relpath, basename, kind, lstat, abspath for
3024
the path of entry. If None, then the path is considered absent.
3025
(Perhaps we should pass in a concrete entry for this ?)
3026
Basename is returned as a utf8 string because we expect this
3027
tuple will be ignored, and don't want to take the time to
3029
:return: None if these don't match
3030
A tuple of information about the change, or
3031
the object 'uninteresting' if these match, but are
3032
basically identical.
if self.source_index is None:
source_details = DirState.NULL_PARENT_DETAILS
else:
source_details = entry[1][self.source_index]
target_details = entry[1][self.target_index]
target_minikind = target_details[0]
if path_info is not None and target_minikind in 'fdlt':
if not (self.target_index == 0):
raise AssertionError()
link_or_sha1 = update_entry(self.state, entry,
abspath=path_info[4], stat_value=path_info[3])
# The entry may have been modified by update_entry
target_details = entry[1][self.target_index]
target_minikind = target_details[0]
file_id = entry[0][2]
source_minikind = source_details[0]
if source_minikind in 'fdltr' and target_minikind in 'fdlt':
# claimed content in both: diff
# r | fdlt | | add source to search, add id path move and perform
# | | | diff check on source-target
# r | fdlt | a | dangling file that was present in the basis.
if source_minikind in 'r':
# add the source to the search path to find any children it
# has. TODO ? : only add if it is a container ?
if not osutils.is_inside_any(self.searched_specific_files,
source_details[1]):
self.search_specific_files.add(source_details[1])
# generate the old path; this is needed for stat'ing later
old_path = source_details[1]
old_dirname, old_basename = os.path.split(old_path)
path = pathjoin(entry[0][0], entry[0][1])
old_entry = self.state._get_entry(self.source_index,
path_utf8=old_path)
# update the source details variable to be the real
# location.
if old_entry == (None, None):
raise errors.CorruptDirstate(self.state._filename,
"entry '%s/%s' is considered renamed from %r"
" but source does not exist\n"
"entry: %s" % (entry[0][0], entry[0][1], old_path, entry))
source_details = old_entry[1][self.source_index]
source_minikind = source_details[0]
else:
old_dirname = entry[0][0]
old_basename = entry[0][1]
old_path = path = None
if path_info is None:
# the file is missing on disk, show as removed.
content_change = True
else:
# source and target are both versioned and disk file is present.
target_kind = path_info[2]
if target_kind == 'directory':
old_path = path = pathjoin(old_dirname, old_basename)
self.new_dirname_to_file_id[path] = file_id
if source_minikind != 'd':
content_change = True
else:
# directories have no fingerprint
content_change = False
elif target_kind == 'file':
if source_minikind != 'f':
content_change = True
else:
# Check the sha. We can't just rely on the size as
# content filtering may mean different sizes actually
# map to the same content
if link_or_sha1 is None:
statvalue, link_or_sha1 = \
self.state._sha1_provider.stat_and_sha1(
path_info[4])
self.state._observed_sha1(entry, link_or_sha1,
statvalue)
content_change = (link_or_sha1 != source_details[1])
# Target details are updated at update_entry time
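# When use_filesystem_for_exec is False (e.g. on win32, where the
# filesystem has no executable bit) fall back to the executable value
# already recorded in the dirstate instead of trusting st_mode.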
if self.use_filesystem_for_exec:
# We don't need S_ISREG here, because we are sure
# we are dealing with a file.
target_exec = bool(stat.S_IEXEC & path_info[3].st_mode)
else:
target_exec = target_details[3]
elif target_kind == 'symlink':
if source_minikind != 'l':
content_change = True
else:
content_change = (link_or_sha1 != source_details[1])
elif target_kind == 'tree-reference':
if source_minikind != 't':
content_change = True
else:
content_change = False
else:
raise Exception, "unknown kind %s" % path_info[2]
if source_minikind == 'd':
old_path = path = pathjoin(old_dirname, old_basename)
self.old_dirname_to_file_id[old_path] = file_id
# parent id is the entry for the path in the target tree
if old_dirname == self.last_source_parent[0]:
source_parent_id = self.last_source_parent[1]
else:
try:
source_parent_id = self.old_dirname_to_file_id[old_dirname]
except KeyError:
source_parent_entry = self.state._get_entry(self.source_index,
path_utf8=old_dirname)
source_parent_id = source_parent_entry[0][2]
if source_parent_id == entry[0][2]:
# This is the root, so the parent is None
source_parent_id = None
else:
self.last_source_parent[0] = old_dirname
self.last_source_parent[1] = source_parent_id
new_dirname = entry[0][0]
if new_dirname == self.last_target_parent[0]:
target_parent_id = self.last_target_parent[1]
else:
try:
target_parent_id = self.new_dirname_to_file_id[new_dirname]
except KeyError:
# TODO: We don't always need to do the lookup, because the
# parent entry will be the same as the source entry.
target_parent_entry = self.state._get_entry(self.target_index,
path_utf8=new_dirname)
if target_parent_entry == (None, None):
raise AssertionError(
"Could not find target parent in wt: %s\nparent of: %s"
% (new_dirname, entry))
target_parent_id = target_parent_entry[0][2]
if target_parent_id == entry[0][2]:
# This is the root, so the parent is None
target_parent_id = None
else:
self.last_target_parent[0] = new_dirname
self.last_target_parent[1] = target_parent_id
source_exec = source_details[3]
if (self.include_unchanged
or content_change
or source_parent_id != target_parent_id
or old_basename != entry[0][1]
or source_exec != target_exec
):
if old_path is None:
old_path = path = pathjoin(old_dirname, old_basename)
old_path_u = self.utf8_decode(old_path)[0]
path_u = old_path_u
else:
old_path_u = self.utf8_decode(old_path)[0]
if old_path == path:
path_u = old_path_u
else:
path_u = self.utf8_decode(path)[0]
source_kind = DirState._minikind_to_kind[source_minikind]
return (entry[0][2],
(old_path_u, path_u),
content_change,
(True, True),
(source_parent_id, target_parent_id),
(self.utf8_decode(old_basename)[0], self.utf8_decode(entry[0][1])[0]),
(source_kind, target_kind),
(source_exec, target_exec))
else:
return self.uninteresting
elif source_minikind in 'a' and target_minikind in 'fdlt':
# looks like a new file
path = pathjoin(entry[0][0], entry[0][1])
# parent id is the entry for the path in the target tree
# TODO: these are the same for an entire directory: cache em.
parent_id = self.state._get_entry(self.target_index,
path_utf8=entry[0][0])[0][2]
if parent_id == entry[0][2]:
parent_id = None
if path_info is not None:
if self.use_filesystem_for_exec:
# We need S_ISREG here, because we aren't sure if this
# is a file or not.
target_exec = bool(
stat.S_ISREG(path_info[3].st_mode)
and stat.S_IEXEC & path_info[3].st_mode)
else:
target_exec = target_details[3]
return (entry[0][2],
(None, self.utf8_decode(path)[0]),
True,
(False, True),
(None, parent_id),
(None, self.utf8_decode(entry[0][1])[0]),
(None, path_info[2]),
(None, target_exec))
else:
# It's a missing file, report it as such.
return (entry[0][2],
(None, self.utf8_decode(path)[0]),
True,
(False, True),
(None, parent_id),
(None, self.utf8_decode(entry[0][1])[0]),
(None, None),
(None, False))
elif source_minikind in 'fdlt' and target_minikind in 'a':
# unversioned, possibly, or possibly not deleted: we don't care.
# if it's still on disk, *and* there's no other entry at this
# path [we don't know this in this routine at the moment -
# perhaps we should change this] - then it would be an unknown.
old_path = pathjoin(entry[0][0], entry[0][1])
# parent id is the entry for the path in the source tree
parent_id = self.state._get_entry(self.source_index, path_utf8=entry[0][0])[0][2]
if parent_id == entry[0][2]:
parent_id = None
return (entry[0][2],
(self.utf8_decode(old_path)[0], None),
True,
(True, False),
(parent_id, None),
(self.utf8_decode(entry[0][1])[0], None),
(DirState._minikind_to_kind[source_minikind], None),
(source_details[3], None))
elif source_minikind in 'fdlt' and target_minikind in 'r':
# a rename; could be a true rename, or a rename inherited from
# a renamed parent. TODO: handle this efficiently. It's not a
# common case to rename dirs though, so a correct but slow
# implementation will do.
if not osutils.is_inside_any(self.searched_specific_files, target_details[1]):
self.search_specific_files.add(target_details[1])
elif source_minikind in 'ra' and target_minikind in 'ra':
# neither of the selected trees contains this file,
# so skip over it. This is not currently directly tested, but
# is indirectly via test_too_much.TestCommands.test_conflicts.
pass
else:
raise AssertionError("don't know how to compare "
"source_minikind=%r, target_minikind=%r"
% (source_minikind, target_minikind))
## import pdb;pdb.set_trace()
def iter_changes(self):
"""Iterate over the changes."""
utf8_decode = cache_utf8._utf8_decode
_cmp_by_dirs = cmp_by_dirs
_process_entry = self._process_entry
uninteresting = self.uninteresting
search_specific_files = self.search_specific_files
searched_specific_files = self.searched_specific_files
splitpath = osutils.splitpath
# compare source_index and target_index at or under each element of search_specific_files.
# Follow the comparison table below. Note that we only want to do diff operations when
# the target is fdl because that's when the walkdirs logic will have exposed the pathinfo
# for the target.
# Source | Target | disk | action
#   r    |  fdlt  |      | add source to search, add id path move and perform
#        |        |      | diff check on source-target
#   r    |  fdlt  |  a   | dangling file that was present in the basis.
#   r    |   a    |      | add source to search
#   r    |   r    |      | this path is present in a non-examined tree, skip.
#   r    |   r    |  a   | this path is present in a non-examined tree, skip.
#   a    |  fdlt  |      | add new id
#   a    |  fdlt  |  a   | dangling locally added file, skip
#   a    |   a    |      | not present in either tree, skip
#   a    |   a    |  a   | not present in any tree, skip
#   a    |   r    |      | not present in either tree at this path, skip as it
#        |        |      | may not be selected by the user's list of paths.
#   a    |   r    |  a   | not present in either tree at this path, skip as it
#        |        |      | may not be selected by the user's list of paths.
#  fdlt  |  fdlt  |      | content in both: diff them
#  fdlt  |  fdlt  |  a   | deleted locally, but not unversioned - show as deleted ?
#  fdlt  |   a    |      | unversioned: output deleted id for now
#  fdlt  |   a    |  a   | unversioned and deleted: output deleted id
#  fdlt  |   r    |      | relocated in this tree, so add target to search.
#        |        |      | Don't diff, we will see an r,fd; pair when we reach
#        |        |      | this id at the other path.
#  fdlt  |   r    |  a   | relocated in this tree, so add target to search.
#        |        |      | Don't diff, we will see an r,fd; pair when we reach
#        |        |      | this id at the other path.
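# In short, every (source_minikind, target_minikind) pair resolves to one
# of a few actions: diff the content, extend the search with a relocated
# path, report an addition or a deletion, or skip the entry.
# _process_entry implements that dispatch; the loop below only decides
# which dirstate entries and disk paths get fed to it.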
# TODO: jam 20070516 - Avoid the _get_entry lookup overhead by
# keeping a cache of directories that we have seen.

while search_specific_files:
# TODO: the pending list should be lexically sorted? the
# interface doesn't require it.
current_root = search_specific_files.pop()
current_root_unicode = current_root.decode('utf8')
searched_specific_files.add(current_root)
# process the entries for this containing directory: the rest will be
# found by their parents recursively.
root_entries = self.state._entries_for_path(current_root)
root_abspath = self.tree.abspath(current_root_unicode)
try:
root_stat = os.lstat(root_abspath)
except OSError, e:
if e.errno == errno.ENOENT:
# the path does not exist: let _process_entry know that.
root_dir_info = None
else:
# some other random error: hand it up.
raise
else:
root_dir_info = ('', current_root,
osutils.file_kind_from_stat_mode(root_stat.st_mode), root_stat,
root_abspath)
if root_dir_info[2] == 'directory':
if self.tree._directory_is_tree_reference(
current_root.decode('utf8')):
root_dir_info = root_dir_info[:2] + \
('tree-reference',) + root_dir_info[3:]
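# A versioned directory that is itself the root of a nested tree is
# reported as a tree-reference, and the walk does not descend into it.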
if not root_entries and not root_dir_info:
# this specified path is not present at all, skip it.
continue
path_handled = False
for entry in root_entries:
result = _process_entry(entry, root_dir_info)
if result is not None:
path_handled = True
if result is not uninteresting:
yield result
if self.want_unversioned and not path_handled and root_dir_info:
new_executable = bool(
stat.S_ISREG(root_dir_info[3].st_mode)
and stat.S_IEXEC & root_dir_info[3].st_mode)
yield (None,
(None, current_root_unicode),
True,
(False, False),
(None, None),
(None, splitpath(current_root_unicode)[-1]),
(None, root_dir_info[2]),
(None, new_executable)
)
initial_key = (current_root, '', '')
block_index, _ = self.state._find_block_index_from_key(initial_key)
if block_index == 0:
# we have processed the total root already, but because the
# initial key matched it we should skip it here.
block_index += 1
if root_dir_info and root_dir_info[2] == 'tree-reference':
current_dir_info = None
else:
dir_iterator = osutils._walkdirs_utf8(root_abspath, prefix=current_root)
try:
current_dir_info = dir_iterator.next()
except OSError, e:
# on win32, python2.4 has e.errno == ERROR_DIRECTORY, but
# python 2.5 has e.errno == EINVAL,
# and e.winerror == ERROR_DIRECTORY
e_winerror = getattr(e, 'winerror', None)
win_errors = (ERROR_DIRECTORY, ERROR_PATH_NOT_FOUND)
# there may be directories in the inventory even though
# this path is not a file on disk: so mark it as end of
# iterator
if e.errno in (errno.ENOENT, errno.ENOTDIR, errno.EINVAL):
current_dir_info = None
elif (sys.platform == 'win32'
and (e.errno in win_errors
or e_winerror in win_errors)):
current_dir_info = None
else:
raise
else:
if current_dir_info[0][0] == '':
# remove .bzr from iteration
bzr_index = bisect.bisect_left(current_dir_info[1], ('.bzr',))
if current_dir_info[1][bzr_index][0] != '.bzr':
raise AssertionError()
del current_dir_info[1][bzr_index]
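# The control directory itself is never versioned, so it must not be
# reported as unknown or recursed into.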
# walk until both the directory listing and the versioned metadata
# are exhausted.
if (block_index < len(self.state._dirblocks) and
osutils.is_inside(current_root, self.state._dirblocks[block_index][0])):
current_block = self.state._dirblocks[block_index]
else:
current_block = None
while (current_dir_info is not None or
current_block is not None):
if (current_dir_info and current_block
and current_dir_info[0][0] != current_block[0]):
if _cmp_by_dirs(current_dir_info[0][0], current_block[0]) < 0:
# filesystem data refers to paths not covered by the dirblock.
# this has two possibilities:
# A) it is versioned but empty, so there is no block for it
# B) it is not versioned.

# if (A) then we need to recurse into it to check for
# new unknown files or directories.
# if (B) then we should ignore it, because we don't
# recurse into unknown directories.
path_index = 0
while path_index < len(current_dir_info[1]):
current_path_info = current_dir_info[1][path_index]
if self.want_unversioned:
if current_path_info[2] == 'directory':
if self.tree._directory_is_tree_reference(
current_path_info[0].decode('utf8')):
current_path_info = current_path_info[:2] + \
('tree-reference',) + current_path_info[3:]
new_executable = bool(
stat.S_ISREG(current_path_info[3].st_mode)
and stat.S_IEXEC & current_path_info[3].st_mode)
yield (None,
(None, utf8_decode(current_path_info[0])[0]),
True,
(False, False),
(None, None),
(None, utf8_decode(current_path_info[1])[0]),
(None, current_path_info[2]),
(None, new_executable))
# don't descend into this unversioned path if it is
# a dir
if current_path_info[2] in ('directory',
'tree-reference'):
del current_dir_info[1][path_index]
path_index -= 1
path_index += 1
# This dir info has been handled, go to the next
try:
current_dir_info = dir_iterator.next()
except StopIteration:
current_dir_info = None
else:
# We have a dirblock entry for this location, but there
# is no filesystem path for this. This is most likely
# because a directory was removed from the disk.
# We don't have to report the missing directory,
# because that should have already been handled, but we
# need to handle all of the files that are contained
# within.
for current_entry in current_block[1]:
# entry referring to file not present on disk.
# advance the entry only, after processing.
result = _process_entry(current_entry, None)
if result is not None:
if result is not uninteresting:
yield result
block_index += 1
if (block_index < len(self.state._dirblocks) and
osutils.is_inside(current_root,
self.state._dirblocks[block_index][0])):
current_block = self.state._dirblocks[block_index]
else:
current_block = None
continue
entry_index = 0
if current_block and entry_index < len(current_block[1]):
current_entry = current_block[1][entry_index]
else:
current_entry = None
advance_entry = True
path_index = 0
if current_dir_info and path_index < len(current_dir_info[1]):
current_path_info = current_dir_info[1][path_index]
if current_path_info[2] == 'directory':
if self.tree._directory_is_tree_reference(
current_path_info[0].decode('utf8')):
current_path_info = current_path_info[:2] + \
('tree-reference',) + current_path_info[3:]
else:
current_path_info = None
advance_path = True
path_handled = False
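# Walk the dirblock entries and the on-disk listing for this directory
# in step: both are sorted by basename, so this is a two-cursor merge.
# The advance_entry/advance_path flags decide which cursor moves after
# each comparison.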
while (current_entry is not None or
current_path_info is not None):
if current_entry is None:
# the check for path_handled when the path is advanced
# will yield this path if needed.
pass
elif current_path_info is None:
# no path is fine: the per entry code will handle it.
result = _process_entry(current_entry, current_path_info)
if result is not None:
if result is not uninteresting:
yield result
elif (current_entry[0][1] != current_path_info[1]
or current_entry[1][self.target_index][0] in 'ar'):
# The current path on disk doesn't match the dirblock
# record. Either the dirblock is marked as absent, or
# the file on disk is not present at all in the
# dirblock. Either way, report about the dirblock
# entry, and let other code handle the filesystem one.

# Compare the basename for these files to determine
# their order.
if current_path_info[1] < current_entry[0][1]:
# extra file on disk: pass for now, but only
# increment the path, not the entry
advance_entry = False
else:
# entry referring to file not present on disk.
# advance the entry only, after processing.
result = _process_entry(current_entry, None)
if result is not None:
if result is not uninteresting:
yield result
advance_path = False
else:
result = _process_entry(current_entry, current_path_info)
if result is not None:
path_handled = True
if result is not uninteresting:
yield result
if advance_entry and current_entry is not None:
entry_index += 1
if entry_index < len(current_block[1]):
current_entry = current_block[1][entry_index]
else:
current_entry = None
else:
advance_entry = True # reset the advance flag
if advance_path and current_path_info is not None:
if not path_handled:
# unversioned in all regards
if self.want_unversioned:
new_executable = bool(
stat.S_ISREG(current_path_info[3].st_mode)
and stat.S_IEXEC & current_path_info[3].st_mode)
try:
relpath_unicode = utf8_decode(current_path_info[0])[0]
except UnicodeDecodeError:
raise errors.BadFilenameEncoding(
current_path_info[0], osutils._fs_enc)
yield (None,
(None, relpath_unicode),
True,
(False, False),
(None, None),
(None, utf8_decode(current_path_info[1])[0]),
(None, current_path_info[2]),
(None, new_executable))
# don't descend into this unversioned path if it is
# a dir
if current_path_info[2] in ('directory',):
del current_dir_info[1][path_index]
path_index -= 1
# don't descend the disk iterator into any tree
# reference.
if current_path_info[2] == 'tree-reference':
del current_dir_info[1][path_index]
path_index -= 1
path_index += 1
if path_index < len(current_dir_info[1]):
current_path_info = current_dir_info[1][path_index]
if current_path_info[2] == 'directory':
if self.tree._directory_is_tree_reference(
current_path_info[0].decode('utf8')):
current_path_info = current_path_info[:2] + \
('tree-reference',) + current_path_info[3:]
else:
current_path_info = None
path_handled = False
else:
advance_path = True # reset the advance flag.
if current_block is not None:
block_index += 1
if (block_index < len(self.state._dirblocks) and
osutils.is_inside(current_root, self.state._dirblocks[block_index][0])):
current_block = self.state._dirblocks[block_index]
else:
current_block = None
if current_dir_info is not None:
try:
current_dir_info = dir_iterator.next()
except StopIteration:
current_dir_info = None

# Try to load the compiled form if possible
try:
from bzrlib._dirstate_helpers_pyx import (
ProcessEntryC as _process_entry,
update_entry as update_entry,
)
except ImportError:
from bzrlib._dirstate_helpers_py import (
# FIXME: It would be nice to be able to track moved lines so that the
# corresponding python code can be moved to the _dirstate_helpers_py
# module. I don't want to break the history for this important piece of
# code so I left the code here -- vila 20090622
update_entry = py_update_entry
_process_entry = ProcessEntryPython
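# Code importing update_entry and _process_entry from this module gets the
# compiled implementations transparently whenever the extensions could be
# loaded, and the pure-Python versions otherwise.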