# Copyright (C) 2006, 2007, 2008 Canonical Ltd
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
"""DirState objects record the state of a directory and its bzr metadata.

Pseudo EBNF grammar for the state file. Fields are separated by NULLs, and
lines by NL. The field delimiters are omitted in the grammar, line delimiters
are not - this is done for clarity of reading. All string data is in utf8.

MINIKIND = "f" | "d" | "l" | "a" | "r" | "t";
WHOLE_NUMBER = {digit}, digit;
REVISION_ID = a non-empty utf8 string;

dirstate format = header line, full checksum, row count, parent details,
    ghost_details, entries;
header line = "#bazaar dirstate flat format 3", NL;
full checksum = "crc32: ", ["-"], WHOLE_NUMBER, NL;
row count = "num_entries: ", WHOLE_NUMBER, NL;
parent_details = WHOLE NUMBER, {REVISION_ID}*, NL;
ghost_details = WHOLE NUMBER, {REVISION_ID}*, NL;

entry = entry_key, current_entry_details, {parent_entry_details};
entry_key = dirname, basename, fileid;
current_entry_details = common_entry_details, working_entry_details;
parent_entry_details = common_entry_details, history_entry_details;
common_entry_details = MINIKIND, fingerprint, size, executable;
working_entry_details = packed_stat;
history_entry_details = REVISION_ID;

fingerprint = a nonempty utf8 sequence with meaning defined by minikind.
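
As an illustration only (the values are invented, and the real field
separator is NUL, shown here as "|"), a tree-0-only entry would serialise
roughly as:

    dirname|basename|fileid|minikind|fingerprint|size|y-or-n|packed_stat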

Given this definition, the following is useful to know:

entry (aka row) - all the data for a given key.
entry[0]: The key (dirname, basename, fileid)
entry[1]: The tree(s) data for this path and id combination.
entry[1][0]: The current tree
entry[1][1]: The second tree

For an entry for a tree, we have (using tree 0 - current tree) to demonstrate:
entry[1][0][0]: minikind
entry[1][0][1]: fingerprint
entry[1][0][2]: size
entry[1][0][3]: executable
entry[1][0][4]: packed_stat
OR (for non tree-0):
entry[1][1][4]: revision_id

There may be multiple rows at the root, one per id present in the root, so the
in memory root row is now:
self._dirblocks[0] -> ('', [entry ...]),
and the entries in there are
entries[0][2]: file_id
entries[1][0]: The tree data for the current tree for this fileid at /

Kinds:
'r' is a relocated entry: This path is not present in this tree with this id,
    but the id can be found at another location. The fingerprint is used to
    point to the target location.
'a' is an absent entry: In that tree the id is not present at this path.
'd' is a directory entry: This path in this tree is a directory with the
    current file id. There is no fingerprint for directories.
'f' is a file entry: As for directory, but it's a file. The fingerprint is the
    sha1 value of the file's canonical form.
'l' is a symlink entry: As for directory, but a symlink. The fingerprint is the
    link target.
't' is a reference to a nested subtree; the fingerprint is the referenced
    revision.
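
For example, a versioned file at the tree root, with no parent trees, might
be held in memory as (fingerprint and packed_stat values invented for
illustration):

    (('', 'README', 'readme-file-id'),
     [('f', 'a5081569...', 42, False, 'AAABVM7VBQAAAAB...')])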

The entries on disk and in memory are ordered according to the following keys:

    directory, as a list of components
    filename
    file-id

--- Format 1 had the following different definition: ---

rows = dirname, NULL, basename, NULL, MINIKIND, NULL, fileid_utf8, NULL,
    WHOLE NUMBER (* size *), NULL, packed stat, NULL, sha1|symlink target,
    {PARENT ROW}
PARENT ROW = NULL, revision_utf8, NULL, MINIKIND, NULL, dirname, NULL,
    basename, NULL, WHOLE NUMBER (* size *), NULL, "y" | "n", NULL,
    SHA1

PARENT ROWs are emitted for every parent that is not in the ghosts details
line. That is, if the parents are foo, bar, baz, and the ghosts are bar, then
each row will have a PARENT ROW for foo and baz, but not for bar.

In any tree, a kind of 'moved' indicates that the fingerprint field
(which we treat as opaque data specific to the 'kind' anyway) has the
details for the id of this row in that tree.

I'm strongly tempted to add an id->path index as well, but I think that
where we need an id->path mapping we also usually read the whole file, so
I'm going to skip that for the moment, as we have the ability to locate
via bisect any path in any tree, and if we lookup things by path, we can
accumulate an id->path mapping as we go, which will tend to match what we
asked for.

I plan to implement this asap, so please speak up now to alter/tweak the
design - and once we stabilise on this, I'll update the wiki page for it.

The rationale for all this is that we want fast operations for the
common case (diff/status/commit/merge on all files), and extremely fast
operations for the less common but still frequent case (status/diff/commit
on specific files). Operations on specific files involve a scan for all
the children of a path, *in every involved tree*, which the current
format did not accommodate.

Design priorities:
 1) Fast end to end use for bzr's top 5 use cases (commit/diff/status/merge/???).
 2) Fall back to the current object model as needed.
 3) Scale usably to the largest trees known today - say 50K entries (mozilla
    is an example of this).

Locking:

 Eventually reuse dirstate objects across locks IFF the dirstate file has not
 been modified, but this will require that we flush/ignore cached stat-hit data
 because we won't want to restat all files on disk just because a lock was
 acquired, yet we cannot trust the data after the previous lock was released.

Memory representation:

 vector of all directories, and vector of the children?
 i.e.
   root_entry = (direntry for root, [parent_direntries_for_root]),
   dirblocks = [
     ('', ['data for achild', 'data for bchild', 'data for cchild'])
     ('dir', ['achild', 'cchild', 'echild'])
     ]
 - single bisect to find N subtrees from a path spec
 - in-order for serialisation - this is 'dirblock' grouping.
 - insertion of a file '/a' affects only the '/' child-vector, that is, to
   insert 10K elements from scratch does not generate O(N^2) memmoves of a
   single vector, rather one per directory, which tends to be limited to a
   manageable number. Will scale badly on trees with 10K entries in a
   single directory. Compare with Inventory.InventoryDirectory, which has
   a dictionary for the children: there is no bisect capability, it can only
   probe for exact matches, or grab all elements and sort.
 - What's the risk of error here? Once we have the base format being processed
   we should have a net win regardless of optimality. So we are going to
   go with what seems reasonable.

Open questions:

Maybe we should do a test profile of the core structure - 10K simulated
searches/lookups/etc?
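
A minimal sketch of the block lookup this layout enables (illustrative
helper code, not the real API):

    import bisect
    dirnames = [block[0] for block in dirblocks]
    # lo=1 skips the root block, which shares the '' name with root-contents:
    block_index = bisect.bisect_left(dirnames, 'dir', 1)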

Objects for each row?
The lifetime of Dirstate objects is currently per lock, but see above for
possible extensions. The lifetime of a row from a dirstate is expected to be
very short in the optimistic case: which we are optimising for. For instance,
subtree status will determine from analysis of the disk data what rows need to
be examined at all, and will be able to determine from a single row whether
that file has altered or not, so we are aiming to process tens of thousands of
entries each second within the dirstate context, before exposing anything to
the larger codebase. This suggests we want the time for a single file
comparison to be < 0.1 milliseconds. That would give us 10000 paths per
second processed, and to scale to 100 thousand we'll need another order of
magnitude to do that. Now, as the lifetime for all unchanged entries is the
time to parse, stat the file on disk, and then immediately discard, the
overhead of object creation becomes a significant cost.

Figures: Creating a tuple from 3 elements was profiled at 0.0625
microseconds, whereas creating an object which is subclassed from tuple was
0.500 microseconds, and creating an object with 3 elements and slots was 3
microseconds long. 0.1 milliseconds is 100 microseconds, and ideally we'll get
down to 10 microseconds for the total processing - having 33% of that be object
creation is a huge overhead. There is a potential cost in using tuples within
each row which is that the conditional code to do comparisons may be slower
than method invocation, but method invocation is known to be slow due to stack
frame creation, so avoiding methods in these tight inner loops is unfortunately
desirable. We can consider a pyrex version of this with objects in future if
necessary.
"""

import bisect
import binascii
import os
import stat
from stat import S_IEXEC
import struct
import sys
import time

from bzrlib import (
    cache_utf8,
    debug,
    errors,
    osutils,
    trace,
    )


# This is the Windows equivalent of ENOTDIR
# It is defined in pywin32.winerror, but we don't want a strong dependency for
# just an error code.
ERROR_PATH_NOT_FOUND = 3
ERROR_DIRECTORY = 267

if not getattr(struct, '_compile', None):
    # Cannot pre-compile the dirstate pack_stat
    def pack_stat(st, _encode=binascii.b2a_base64, _pack=struct.pack):
        """Convert stat values into a packed representation."""
        return _encode(_pack('>LLLLLL', st.st_size, int(st.st_mtime),
            int(st.st_ctime), st.st_dev, st.st_ino & 0xFFFFFFFF,
            st.st_mode))[:-1]
else:
    # compile the struct compiler we need, so as to only do it once
    from _struct import Struct
    _compiled_pack = Struct('>LLLLLL').pack
    def pack_stat(st, _encode=binascii.b2a_base64, _pack=_compiled_pack):
        """Convert stat values into a packed representation."""
        # jam 20060614 it isn't really worth removing more entries if we
        # are going to leave it in packed form.
        # With only st_mtime and st_mode filesize is 5.5M and read time is 275ms
        # With all entries, filesize is 5.9M and read time is maybe 280ms
        # well within the noise margin
        # base64 encoding always adds a final newline, so strip it off
        # The current version
        return _encode(_pack(st.st_size, int(st.st_mtime), int(st.st_ctime),
            st.st_dev, st.st_ino & 0xFFFFFFFF, st.st_mode))[:-1]
        # This is 0.060s / 1.520s faster by not encoding as much information
        # return _encode(_pack('>LL', int(st.st_mtime), st.st_mode))[:-1]
        # This is not strictly faster than _encode(_pack())[:-1]
        # return '%X.%X.%X.%X.%X.%X' % (
        #      st.st_size, int(st.st_mtime), int(st.st_ctime),
        #      st.st_dev, st.st_ino, st.st_mode)
        # Similar to the _encode(_pack('>LL'))
        # return '%X.%X' % (int(st.st_mtime), st.st_mode)
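
# A minimal usage sketch for pack_stat (illustrative only):
#   st = os.lstat('some-file')
#   packed = pack_stat(st)
#   # six 32-bit fields pack to 24 bytes, which base64-encode to 32
#   # characters; the [:-1] above strips b2a_base64's trailing newline.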


class DirState(object):
    """Record directory and metadata state for fast access.

    A dirstate is a specialised data structure for managing local working
    tree state information. It's not yet well defined whether it is platform
    specific, and if it is how we detect/parameterize that.

    Dirstates use the usual lock_write, lock_read and unlock mechanisms.
    Unlike most bzr disk formats, DirStates must be locked for reading, using
    lock_read. (This is an os file lock internally.) This is necessary
    because the file can be rewritten in place.

    DirStates must be explicitly written with save() to commit changes; just
    unlocking them does not write the changes to disk.
    """

    _kind_to_minikind = {
            'absent': 'a',
            'file': 'f',
            'directory': 'd',
            'relocated': 'r',
            'symlink': 'l',
            'tree-reference': 't',
        }
    _minikind_to_kind = {
            'a': 'absent',
            'f': 'file',
            'd': 'directory',
            'l': 'symlink',
            'r': 'relocated',
            't': 'tree-reference',
        }
    _stat_to_minikind = {
        stat.S_IFDIR: 'd',
        stat.S_IFREG: 'f',
        stat.S_IFLNK: 'l',
    }
    _to_yesno = {True: 'y', False: 'n'} # TODO profile the performance gain
    # of using int conversion rather than a dict here. AND BLAME ANDREW IF
    # YOU LOOK UP!

    # TODO: jam 20070221 Figure out what to do if we have a record that exceeds
    #       the BISECT_PAGE_SIZE. For now, we just have to make it large enough
    #       that we are sure a single record will always fit.
    BISECT_PAGE_SIZE = 4096

    NOT_IN_MEMORY = 0
    IN_MEMORY_UNMODIFIED = 1
    IN_MEMORY_MODIFIED = 2

    # A pack_stat (the x's) that is just noise and will never match the output
    # of base64 encode of a stat value.
    NULLSTAT = 'x' * 32
    NULL_PARENT_DETAILS = ('a', '', 0, False, '')

    HEADER_FORMAT_2 = '#bazaar dirstate flat format 2\n'
    HEADER_FORMAT_3 = '#bazaar dirstate flat format 3\n'

    def __init__(self, path):
        """Create a DirState object.

        :param path: The path at which the dirstate file on disk should live.
        """
        # _header_state and _dirblock_state represent the current state
        # of the dirstate metadata and the per-row data respectively.
        # NOT_IN_MEMORY indicates that no data is in memory
        # IN_MEMORY_UNMODIFIED indicates that what we have in memory
        #   is the same as is on disk
        # IN_MEMORY_MODIFIED indicates that we have a modified version
        #   of what is on disk.
        # In future we will add more granularity, for instance _dirblock_state
        # will probably support partially-in-memory as a separate variable,
        # allowing for partially-in-memory unmodified and partially-in-memory
        # modified states.
        self._header_state = DirState.NOT_IN_MEMORY
        self._dirblock_state = DirState.NOT_IN_MEMORY
        # If true, an error has been detected while updating the dirstate, and
        # for safety we're not going to commit to disk.
        self._changes_aborted = False
        self._dirblocks = []
        self._ghosts = []
        self._parents = []
        self._state_file = None
        self._filename = path
        self._lock_token = None
        self._lock_state = None
        self._id_index = None
        # a map from packed_stat to sha's.
        self._packed_stat_index = None
        self._end_of_header = None
        self._cutoff_time = None
        self._split_path_cache = {}
        self._bisect_page_size = DirState.BISECT_PAGE_SIZE
        if 'hashcache' in debug.debug_flags:
            self._sha1_file = self._sha1_file_and_mutter
        else:
            self._sha1_file = osutils.sha_file_by_name
        # These two attributes provide a simple cache for lookups into the
        # dirstate in-memory vectors. By probing respectively for the last
        # block, and for the next entry, we save nearly 2 bisections per path
        # during commit.
        self._last_block_index = None
        self._last_entry_index = None

    def __repr__(self):
        return "%s(%r)" % \
            (self.__class__.__name__, self._filename)

    def add(self, path, file_id, kind, stat, fingerprint):
        """Add a path to be tracked.

        :param path: The path within the dirstate - '' is the root, 'foo' is the
            path foo within the root, 'foo/bar' is the path bar within foo
            within the root.
        :param file_id: The file id of the path being added.
        :param kind: The kind of the path, as a string like 'file',
            'directory', etc.
        :param stat: The output of os.lstat for the path.
        :param fingerprint: The sha value of the file,
            or the target of a symlink,
            or the referenced revision id for tree-references,
            or '' for directories.
        """
        # find the block it's in.
        # find the location in the block.
        # check it's not there
        # add it.
        #------- copied from inventory.ensure_normalized_name - keep synced.
        # --- normalized_filename wants a unicode basename only, so get one.
        dirname, basename = osutils.split(path)
        # we don't import normalized_filename directly because we want to be
        # able to change the implementation at runtime for tests.
        norm_name, can_access = osutils.normalized_filename(basename)
        if norm_name != basename:
            if can_access:
                basename = norm_name
            else:
                raise errors.InvalidNormalization(path)
        # you should never have files called . or ..; just add the directory
        # in the parent, or according to the special treatment for the root
        if basename == '.' or basename == '..':
            raise errors.InvalidEntryName(path)
        # now that we've normalised, we need the correct utf8 path and
        # dirname and basename elements. This single encode and split should be
        # faster than three separate encodes.
        utf8path = (dirname + '/' + basename).strip('/').encode('utf8')
        dirname, basename = osutils.split(utf8path)
        # uses __class__ for speed; the check is needed for safety
        if file_id.__class__ is not str:
            raise AssertionError(
                "must be a utf8 file_id not %s" % (type(file_id), ))
        # Make sure the file_id does not exist in this tree
        rename_from = None
        file_id_entry = self._get_entry(0, fileid_utf8=file_id, include_deleted=True)
        if file_id_entry != (None, None):
            if file_id_entry[1][0][0] == 'a':
                if file_id_entry[0] != (dirname, basename, file_id):
                    # set the old name's current operation to rename
                    self.update_minimal(file_id_entry[0],
                        'r',
                        path_utf8='',
                        packed_stat='',
                        fingerprint=utf8path
                    )
                    rename_from = file_id_entry[0][0:2]
            else:
                path = osutils.pathjoin(file_id_entry[0][0], file_id_entry[0][1])
                kind = DirState._minikind_to_kind[file_id_entry[1][0][0]]
                info = '%s:%s' % (kind, path)
                raise errors.DuplicateFileId(file_id, info)
        first_key = (dirname, basename, '')
        block_index, present = self._find_block_index_from_key(first_key)
        if present:
            # check the path is not in the tree
            block = self._dirblocks[block_index][1]
            entry_index, _ = self._find_entry_index(first_key, block)
            while (entry_index < len(block) and
                block[entry_index][0][0:2] == first_key[0:2]):
                if block[entry_index][1][0][0] not in 'ar':
                    # this path is in the dirstate in the current tree.
                    raise Exception("adding already added path!")
                entry_index += 1
        else:
            # The block where we want to put the file is not present. But it
            # might be because the directory was empty, or not loaded yet. Look
            # for a parent entry, if not found, raise NotVersionedError
            parent_dir, parent_base = osutils.split(dirname)
            parent_block_idx, parent_entry_idx, _, parent_present = \
                self._get_block_entry_index(parent_dir, parent_base, 0)
            if not parent_present:
                raise errors.NotVersionedError(path, str(self))
            self._ensure_block(parent_block_idx, parent_entry_idx, dirname)
        block = self._dirblocks[block_index][1]
        entry_key = (dirname, basename, file_id)
        if stat is None:
            size = 0
            packed_stat = DirState.NULLSTAT
        else:
            size = stat.st_size
            packed_stat = pack_stat(stat)
        parent_info = self._empty_parent_info()
        minikind = DirState._kind_to_minikind[kind]
        if rename_from is not None:
            if rename_from[0]:
                old_path_utf8 = '%s/%s' % rename_from
            else:
                old_path_utf8 = rename_from[1]
            parent_info[0] = ('r', old_path_utf8, 0, False, '')
        if kind == 'file':
            entry_data = entry_key, [
                (minikind, fingerprint, size, False, packed_stat),
                ] + parent_info
        elif kind == 'directory':
            entry_data = entry_key, [
                (minikind, '', 0, False, packed_stat),
                ] + parent_info
        elif kind == 'symlink':
            entry_data = entry_key, [
                (minikind, fingerprint, size, False, packed_stat),
                ] + parent_info
        elif kind == 'tree-reference':
            entry_data = entry_key, [
                (minikind, fingerprint, 0, False, packed_stat),
                ] + parent_info
        else:
            raise errors.BzrError('unknown kind %r' % kind)
        entry_index, present = self._find_entry_index(entry_key, block)
        if not present:
            block.insert(entry_index, entry_data)
        else:
            if block[entry_index][1][0][0] != 'a':
                raise AssertionError(" %r(%r) already added" % (basename, file_id))
            block[entry_index][1][0] = entry_data[1][0]

        if kind == 'directory':
            # insert a new dirblock
            self._ensure_block(block_index, entry_index, utf8path)
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        if self._id_index:
            self._id_index.setdefault(entry_key[2], set()).add(entry_key)
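
    # A minimal usage sketch for add() (illustrative values; assumes the
    # dirstate is locked for writing and the parent directory is already
    # versioned):
    #   st = os.lstat('subdir/file')
    #   state.add('subdir/file', 'file-id-1', 'file', st,
    #             osutils.sha_file_by_name('subdir/file'))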

    def _bisect(self, paths):
        """Bisect through the disk structure for specific rows.

        :param paths: A list of paths to find
        :return: A dict mapping path => entries for found entries. Missing
            entries will not be in the map.
            The list is not sorted, and entries will be populated
            based on when they were read.
        """
        self._requires_lock()
        # We need the file pointer to be right after the initial header block
        self._read_header_if_needed()
        # If _dirblock_state was in memory, we should just return info from
        # there, this function is only meant to handle when we want to read
        # from disk.
        if self._dirblock_state != DirState.NOT_IN_MEMORY:
            raise AssertionError("bad dirblock state %r" % self._dirblock_state)

        # The disk representation is generally info + '\0\n\0' at the end. But
        # for bisecting, it is easier to treat this as '\0' + info + '\0\n'
        # Because it means we can sync on the '\n'
        state_file = self._state_file
        file_size = os.fstat(state_file.fileno()).st_size
        # We end up with 2 extra fields, we should have a trailing '\n' to
        # ensure that we read the whole record, and we should have a precursor
        # '' which ensures that we start after the previous '\n'
        entry_field_count = self._fields_per_entry() + 1

        low = self._end_of_header
        high = file_size - 1 # Ignore the final '\0'
        # Map from (dir, name) => entry
        found = {}

        # Avoid infinite seeking
        max_count = 30*len(paths)
        count = 0
        # pending is a list of places to look.
        # each entry is a tuple of low, high, dir_names
        #   low -> the first byte offset to read (inclusive)
        #   high -> the last byte offset (inclusive)
        #   dir_names -> The list of (dir, name) pairs that should be found in
        #                the [low, high] range
        pending = [(low, high, paths)]

        page_size = self._bisect_page_size

        fields_to_entry = self._get_fields_to_entry()

        while pending:
            low, high, cur_files = pending.pop()

            if not cur_files or low >= high:
                # Nothing to find
                continue

            count += 1
            if count > max_count:
                raise errors.BzrError('Too many seeks, most likely a bug.')

            mid = max(low, (low+high-page_size)/2)

            state_file.seek(mid)
            # limit the read size, so we don't end up reading data that we
            # have already read.
            read_size = min(page_size, (high-mid)+1)
            block = state_file.read(read_size)

            start = mid
            entries = block.split('\n')

            if len(entries) < 2:
                # We didn't find a '\n', so we cannot have found any records.
                # So put this range back and try again. But we know we have to
                # increase the page size, because a single read did not contain
                # a record break (so records must be larger than page_size)
                page_size *= 2
                pending.append((low, high, cur_files))
                continue

            # Check the first and last entries, in case they are partial, or if
            # we don't care about the rest of this page
            first_entry_num = 0
            first_fields = entries[0].split('\0')
            if len(first_fields) < entry_field_count:
                # We didn't get the complete first entry
                # so move start, and grab the next, which
                # should be a full entry
                start += len(entries[0])+1
                first_fields = entries[1].split('\0')
                first_entry_num = 1

            if len(first_fields) <= 2:
                # We didn't even get a filename here... what do we do?
                # Try a large page size and repeat this query
                page_size *= 2
                pending.append((low, high, cur_files))
                continue
            else:
                # Find what entries we are looking for, which occur before and
                # after this first record.
                after = start
                if first_fields[1]:
                    first_path = first_fields[1] + '/' + first_fields[2]
                else:
                    first_path = first_fields[2]
                first_loc = _bisect_path_left(cur_files, first_path)

                # These exist before the current location
                pre = cur_files[:first_loc]
                # These occur after the current location, which may be in the
                # data we read, or might be after the last entry
                post = cur_files[first_loc:]

            if post and len(first_fields) >= entry_field_count:
                # We have files after the first entry

                # Parse the last entry
                last_entry_num = len(entries)-1
                last_fields = entries[last_entry_num].split('\0')
                if len(last_fields) < entry_field_count:
                    # The very last hunk was not complete,
                    # read the previous hunk
                    after = mid + len(block) - len(entries[-1])
                    last_entry_num -= 1
                    last_fields = entries[last_entry_num].split('\0')
                else:
                    after = mid + len(block)

                if last_fields[1]:
                    last_path = last_fields[1] + '/' + last_fields[2]
                else:
                    last_path = last_fields[2]
                last_loc = _bisect_path_right(post, last_path)

                middle_files = post[:last_loc]
                post = post[last_loc:]

                if middle_files:
                    # We have files that should occur in this block
                    # (>= first, <= last)
                    # Either we will find them here, or we can mark them as
                    # missing.

                    if middle_files[0] == first_path:
                        # We might need to go before this location
                        pre.append(first_path)
                    if middle_files[-1] == last_path:
                        post.insert(0, last_path)

                    # Find out what paths we have
                    paths = {first_path:[first_fields]}
                    # last_path might == first_path so we need to be
                    # careful if we should append rather than overwrite
                    if last_entry_num != first_entry_num:
                        paths.setdefault(last_path, []).append(last_fields)
                    for num in xrange(first_entry_num+1, last_entry_num):
                        # TODO: jam 20070223 We are already splitting here, so
                        #       shouldn't we just split the whole thing rather
                        #       than doing the split again in add_one_record?
                        fields = entries[num].split('\0')
                        if fields[1]:
                            path = fields[1] + '/' + fields[2]
                        else:
                            path = fields[2]
                        paths.setdefault(path, []).append(fields)

                    for path in middle_files:
                        for fields in paths.get(path, []):
                            # offset by 1 because of the opening '\0'
                            # consider changing fields_to_entry to avoid the
                            # extra list slice
                            entry = fields_to_entry(fields[1:])
                            found.setdefault(path, []).append(entry)

            # Now we have split up everything into pre, middle, and post, and
            # we have handled everything that fell in 'middle'.
            # We add 'post' first, so that we prefer to seek towards the
            # beginning, so that we will tend to go as early as we need, and
            # then only seek forward after that.
            if post:
                pending.append((after, high, post))
            if pre:
                pending.append((low, start-1, pre))

        # Consider that we may want to return the directory entries in sorted
        # order. For now, we just return them in whatever order we found them,
        # and leave it up to the caller if they care if it is ordered or not.
        return found

    def _bisect_dirblocks(self, dir_list):
        """Bisect through the disk structure to find entries in given dirs.

        _bisect_dirblocks is meant to find the contents of directories, which
        differs from _bisect, which only finds individual entries.

        :param dir_list: A sorted list of directory names ['', 'dir', 'foo'].
        :return: A map from dir => entries_for_dir
        """
        # TODO: jam 20070223 A lot of the bisecting logic could be shared
        #       between this and _bisect. It would require parameterizing the
        #       inner loop with a function, though. We should evaluate the
        #       performance difference.
        self._requires_lock()
        # We need the file pointer to be right after the initial header block
        self._read_header_if_needed()
        # If _dirblock_state was in memory, we should just return info from
        # there, this function is only meant to handle when we want to read
        # from disk.
        if self._dirblock_state != DirState.NOT_IN_MEMORY:
            raise AssertionError("bad dirblock state %r" % self._dirblock_state)
        # The disk representation is generally info + '\0\n\0' at the end. But
        # for bisecting, it is easier to treat this as '\0' + info + '\0\n'
        # Because it means we can sync on the '\n'
        state_file = self._state_file
        file_size = os.fstat(state_file.fileno()).st_size
        # We end up with 2 extra fields, we should have a trailing '\n' to
        # ensure that we read the whole record, and we should have a precursor
        # '' which ensures that we start after the previous '\n'
        entry_field_count = self._fields_per_entry() + 1

        low = self._end_of_header
        high = file_size - 1 # Ignore the final '\0'
        # Map from dir => entry
        found = {}

        # Avoid infinite seeking
        max_count = 30*len(dir_list)
        count = 0
        # pending is a list of places to look.
        # each entry is a tuple of low, high, dir_names
        #   low -> the first byte offset to read (inclusive)
        #   high -> the last byte offset (inclusive)
        #   dirs -> The list of directories that should be found in
        #           the [low, high] range
        pending = [(low, high, dir_list)]

        page_size = self._bisect_page_size

        fields_to_entry = self._get_fields_to_entry()

        while pending:
            low, high, cur_dirs = pending.pop()

            if not cur_dirs or low >= high:
                # Nothing to find
                continue

            count += 1
            if count > max_count:
                raise errors.BzrError('Too many seeks, most likely a bug.')

            mid = max(low, (low+high-page_size)/2)

            state_file.seek(mid)
            # limit the read size, so we don't end up reading data that we
            # have already read.
            read_size = min(page_size, (high-mid)+1)
            block = state_file.read(read_size)

            start = mid
            entries = block.split('\n')

            if len(entries) < 2:
                # We didn't find a '\n', so we cannot have found any records.
                # So put this range back and try again. But we know we have to
                # increase the page size, because a single read did not contain
                # a record break (so records must be larger than page_size)
                page_size *= 2
                pending.append((low, high, cur_dirs))
                continue

            # Check the first and last entries, in case they are partial, or if
            # we don't care about the rest of this page
            first_entry_num = 0
            first_fields = entries[0].split('\0')
            if len(first_fields) < entry_field_count:
                # We didn't get the complete first entry
                # so move start, and grab the next, which
                # should be a full entry
                start += len(entries[0])+1
                first_fields = entries[1].split('\0')
                first_entry_num = 1

            if len(first_fields) <= 1:
                # We didn't even get a dirname here... what do we do?
                # Try a large page size and repeat this query
                page_size *= 2
                pending.append((low, high, cur_dirs))
                continue
            else:
                # Find what entries we are looking for, which occur before and
                # after this first record.
                after = start
                first_dir = first_fields[1]
                first_loc = bisect.bisect_left(cur_dirs, first_dir)

                # These exist before the current location
                pre = cur_dirs[:first_loc]
                # These occur after the current location, which may be in the
                # data we read, or might be after the last entry
                post = cur_dirs[first_loc:]

            if post and len(first_fields) >= entry_field_count:
                # We have records to look at after the first entry

                # Parse the last entry
                last_entry_num = len(entries)-1
                last_fields = entries[last_entry_num].split('\0')
                if len(last_fields) < entry_field_count:
                    # The very last hunk was not complete,
                    # read the previous hunk
                    after = mid + len(block) - len(entries[-1])
                    last_entry_num -= 1
                    last_fields = entries[last_entry_num].split('\0')
                else:
                    after = mid + len(block)

                last_dir = last_fields[1]
                last_loc = bisect.bisect_right(post, last_dir)

                middle_files = post[:last_loc]
                post = post[last_loc:]

                if middle_files:
                    # We have files that should occur in this block
                    # (>= first, <= last)
                    # Either we will find them here, or we can mark them as
                    # missing.

                    if middle_files[0] == first_dir:
                        # We might need to go before this location
                        pre.append(first_dir)
                    if middle_files[-1] == last_dir:
                        post.insert(0, last_dir)

                    # Find out what paths we have
                    paths = {first_dir:[first_fields]}
                    # last_dir might == first_dir so we need to be
                    # careful if we should append rather than overwrite
                    if last_entry_num != first_entry_num:
                        paths.setdefault(last_dir, []).append(last_fields)
                    for num in xrange(first_entry_num+1, last_entry_num):
                        # TODO: jam 20070223 We are already splitting here, so
                        #       shouldn't we just split the whole thing rather
                        #       than doing the split again in add_one_record?
                        fields = entries[num].split('\0')
                        paths.setdefault(fields[1], []).append(fields)

                    for cur_dir in middle_files:
                        for fields in paths.get(cur_dir, []):
                            # offset by 1 because of the opening '\0'
                            # consider changing fields_to_entry to avoid the
                            # extra list slice
                            entry = fields_to_entry(fields[1:])
                            found.setdefault(cur_dir, []).append(entry)

            # Now we have split up everything into pre, middle, and post, and
            # we have handled everything that fell in 'middle'.
            # We add 'post' first, so that we prefer to seek towards the
            # beginning, so that we will tend to go as early as we need, and
            # then only seek forward after that.
            if post:
                pending.append((after, high, post))
            if pre:
                pending.append((low, start-1, pre))

        return found

    def _bisect_recursive(self, paths):
        """Bisect for entries for all paths and their children.

        This will use bisect to find all records for the supplied paths. It
        will then continue to bisect for any records which are marked as
        directories. (and renames?)

        :param paths: A sorted list of (dir, name) pairs
            eg: [('', 'a'), ('', 'f'), ('a/b', 'c')]
        :return: A dictionary mapping (dir, name, file_id) => [tree_info]
        """
        # Map from (dir, name, file_id) => [tree_info]
        found = {}
        found_dir_names = set()

        # Directories that have been read
        processed_dirs = set()
        # Get the ball rolling with the first bisect for all entries.
        newly_found = self._bisect(paths)

        while newly_found:
            # Directories that need to be read
            pending_dirs = set()
            paths_to_search = set()
            for entry_list in newly_found.itervalues():
                for dir_name_id, trees_info in entry_list:
                    found[dir_name_id] = trees_info
                    found_dir_names.add(dir_name_id[:2])
                    for tree_info in trees_info:
                        minikind = tree_info[0]
                        if minikind == 'd':
                            subdir, name, file_id = dir_name_id
                            path = osutils.pathjoin(subdir, name)
                            if path in processed_dirs:
                                # We already processed this one as a directory,
                                # we don't need to do the extra work again.
                                continue
                            pending_dirs.add(path)
                        elif minikind == 'r':
                            # Rename, we need to directly search the target
                            # which is contained in the fingerprint column
                            dir_name = osutils.split(tree_info[1])
                            if dir_name[0] in pending_dirs:
                                # This entry will be found in the dir search
                                continue
                            if dir_name not in found_dir_names:
                                paths_to_search.add(tree_info[1])
            # Now we have a list of paths to look for directly, and
            # directory blocks that need to be read.
            # newly_found is mixing the keys between (dir, name) and path
            # entries, but that is okay, because we only really care about the
            # file_id matching anyway.
            newly_found = self._bisect(sorted(paths_to_search))
            newly_found.update(self._bisect_dirblocks(sorted(pending_dirs)))
            processed_dirs.update(pending_dirs)

        return found

    def _discard_merge_parents(self):
        """Discard any parent trees beyond the first.

        Note that if this fails the dirstate is corrupted.

        After this function returns the dirstate contains 2 trees, neither of
        which is ghosted.
        """
        self._read_header_if_needed()
        parents = self.get_parent_ids()
        if len(parents) < 1:
            return
        # only require all dirblocks if we are doing a full-pass removal.
        self._read_dirblocks_if_needed()
        dead_patterns = set([('a', 'r'), ('a', 'a'), ('r', 'r'), ('r', 'a')])
        def iter_entries_removable():
            for block in self._dirblocks:
                deleted_positions = []
                for pos, entry in enumerate(block[1]):
                    yield entry
                    if (entry[1][0][0], entry[1][1][0]) in dead_patterns:
                        deleted_positions.append(pos)
                if deleted_positions:
                    if len(deleted_positions) == len(block[1]):
                        del block[1][:]
                    else:
                        for pos in reversed(deleted_positions):
                            del block[1][pos]
        # if the first parent is a ghost:
        if parents[0] in self.get_ghosts():
            empty_parent = [DirState.NULL_PARENT_DETAILS]
            for entry in iter_entries_removable():
                entry[1][1:] = empty_parent
        else:
            for entry in iter_entries_removable():
                del entry[1][2:]

        self._parents = [parents[0]]
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        self._header_state = DirState.IN_MEMORY_MODIFIED

    def _empty_parent_info(self):
        return [DirState.NULL_PARENT_DETAILS] * (len(self._parents) -
                                                 len(self._ghosts))

    def _ensure_block(self, parent_block_index, parent_row_index, dirname):
        """Ensure a block for dirname exists.

        This function exists to let callers which know that there is a
        directory dirname ensure that the block for it exists. This block can
        fail to exist because of demand loading, or because a directory had no
        children. In either case it is not an error. It is however an error to
        call this if there is no parent entry for the directory, and thus the
        function requires the coordinates of such an entry to be provided.

        The root row is special cased and can be indicated with a parent block
        and row of -1.

        :param parent_block_index: The index of the block in which dirname's
            row exists.
        :param parent_row_index: The index in the parent block where the row
            exists.
        :param dirname: The utf8 dirname to ensure there is a block for.
        :return: The index for the block.
        """
        if dirname == '' and parent_row_index == 0 and parent_block_index == 0:
            # This is the signature of the root row, and the
            # contents-of-root row is always index 1
            return 1
        # the basename of the directory must be the end of its full name.
        if not (parent_block_index == -1 and
            parent_row_index == -1 and dirname == ''):
            if not dirname.endswith(
                    self._dirblocks[parent_block_index][1][parent_row_index][0][1]):
                raise AssertionError("bad dirname %r" % dirname)
        block_index, present = self._find_block_index_from_key((dirname, '', ''))
        if not present:
            ## In future, when doing partial parsing, this should load and
            # populate the entire block.
            self._dirblocks.insert(block_index, (dirname, []))
        return block_index

    def _entries_to_current_state(self, new_entries):
        """Load new_entries into self._dirblocks.

        Process new_entries into the current state object, making them the active
        state. The entries are grouped together by directory to form dirblocks.

        :param new_entries: A sorted list of entries. This function does not sort
            to prevent unneeded overhead when callers have a sorted list already.
        :return: Nothing.
        """
        if new_entries[0][0][0:2] != ('', ''):
            raise AssertionError(
                "Missing root row %r" % (new_entries[0][0],))
        # The two blocks here are deliberate: the root block and the
        # contents-of-root block.
        self._dirblocks = [('', []), ('', [])]
        current_block = self._dirblocks[0][1]
        current_dirname = ''
        append_entry = current_block.append
        for entry in new_entries:
            if entry[0][0] != current_dirname:
                # new block - different dirname
                current_block = []
                current_dirname = entry[0][0]
                self._dirblocks.append((current_dirname, current_block))
                append_entry = current_block.append
            # append the entry to the current block
            append_entry(entry)
        self._split_root_dirblock_into_contents()

    def _split_root_dirblock_into_contents(self):
        """Split the root dirblocks into root and contents-of-root.

        After parsing by path, we end up with root entries and contents-of-root
        entries in the same block. This loop splits them out again.
        """
        # The above loop leaves the "root block" entries mixed with the
        # "contents-of-root block". But we don't want an if check on
        # all entries, so instead we just fix it up here.
        if self._dirblocks[1] != ('', []):
            raise ValueError("bad dirblock start %r" % (self._dirblocks[1],))
        root_block = []
        contents_of_root_block = []
        for entry in self._dirblocks[0][1]:
            if not entry[0][1]: # This is a root entry
                root_block.append(entry)
            else:
                contents_of_root_block.append(entry)
        self._dirblocks[0] = ('', root_block)
        self._dirblocks[1] = ('', contents_of_root_block)

    def _entries_for_path(self, path):
        """Return a list with all the entries that match path for all ids."""
        dirname, basename = os.path.split(path)
        key = (dirname, basename, '')
        block_index, present = self._find_block_index_from_key(key)
        if not present:
            # the block which should contain path is absent.
            return []
        result = []
        block = self._dirblocks[block_index][1]
        entry_index, _ = self._find_entry_index(key, block)
        # we may need to look at multiple entries at this path: walk while the
        # specific_files match.
        while (entry_index < len(block) and
            block[entry_index][0][0:2] == key[0:2]):
            result.append(block[entry_index])
            entry_index += 1
        return result

    def _entry_to_line(self, entry):
        """Serialize entry to a NULL delimited line ready for _get_output_lines.

        :param entry: An entry_tuple as defined in the module docstring.
        """
        entire_entry = list(entry[0])
        for tree_number, tree_data in enumerate(entry[1]):
            # (minikind, fingerprint, size, executable, tree_specific_string)
            entire_entry.extend(tree_data)
            # 3 for the key, 5 for the fields per tree.
            tree_offset = 3 + tree_number * 5
            # minikind
            entire_entry[tree_offset + 0] = tree_data[0]
            # size
            entire_entry[tree_offset + 2] = str(tree_data[2])
            # executable
            entire_entry[tree_offset + 3] = DirState._to_yesno[tree_data[3]]
        return '\0'.join(entire_entry)

    def _fields_per_entry(self):
        """How many null separated fields should be in each entry row.

        Each line now has an extra '\n' field which is not used
        so we just skip over it.

        entry size:
            3 fields for the key
            + number of fields per tree_data (5) * tree count
            + newline
        """
        tree_count = 1 + self._num_present_parents()
        return 3 + 5 * tree_count + 1
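        # For example, with one parent tree present tree_count is 2, so each
        # row carries 3 + 5 * 2 + 1 = 14 fields, the final field absorbing
        # the line's trailing '\n'.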

    def _find_block(self, key, add_if_missing=False):
        """Return the block that key should be present in.

        :param key: A dirstate entry key.
        :return: The block tuple.
        """
        block_index, present = self._find_block_index_from_key(key)
        if not present:
            if not add_if_missing:
                # check to see if key is versioned itself - we might want to
                # add it anyway, because dirs with no entries dont get a
                # dirblock at parse time.
                # This is an uncommon branch to take: most dirs have children,
                # and most code works with versioned paths.
                parent_base, parent_name = osutils.split(key[0])
                if not self._get_block_entry_index(parent_base, parent_name, 0)[3]:
                    # some parent path has not been added - it's an error to
                    # add this child.
                    raise errors.NotVersionedError(key[0:2], str(self))
            self._dirblocks.insert(block_index, (key[0], []))
        return self._dirblocks[block_index]

    def _find_block_index_from_key(self, key):
        """Find the dirblock index for a key.

        :return: The block index, True if the block for the key is present.
        """
        if key[0:2] == ('', ''):
            return 0, True
        try:
            if (self._last_block_index is not None and
                self._dirblocks[self._last_block_index][0] == key[0]):
                return self._last_block_index, True
        except IndexError:
            pass
        block_index = bisect_dirblock(self._dirblocks, key[0], 1,
                                      cache=self._split_path_cache)
        # _right returns one-past-where-key is so we have to subtract
        # one to use it. we use _right here because there are two
        # '' blocks - the root, and the contents of root
        # we always have a minimum of 2 in self._dirblocks: root and
        # root-contents, and for '', we get 2 back, so this is
        # simple and correct:
        present = (block_index < len(self._dirblocks) and
            self._dirblocks[block_index][0] == key[0])
        self._last_block_index = block_index
        # Reset the entry index cache to the beginning of the block.
        self._last_entry_index = -1
        return block_index, present

    def _find_entry_index(self, key, block):
        """Find the entry index for a key in a block.

        :return: The entry index, True if the entry for the key is present.
        """
        len_block = len(block)
        try:
            if self._last_entry_index is not None:
                entry_index = self._last_entry_index + 1
                # A hit is when the key is after the last slot, and before or
                # equal to the next slot.
                if ((entry_index > 0 and block[entry_index - 1][0] < key) and
                    key <= block[entry_index][0]):
                    self._last_entry_index = entry_index
                    present = (block[entry_index][0] == key)
                    return entry_index, present
        except IndexError:
            pass
        entry_index = bisect.bisect_left(block, (key, []))
        present = (entry_index < len_block and
            block[entry_index][0] == key)
        self._last_entry_index = entry_index
        return entry_index, present

    @staticmethod
    def from_tree(tree, dir_state_filename):
        """Create a dirstate from a bzr Tree.

        :param tree: The tree which should provide parent information and
            inventory data.
        :return: a DirState object which is currently locked for writing.
            (it was locked by DirState.initialize)
        """
        result = DirState.initialize(dir_state_filename)
        try:
            parent_trees = []
            try:
                parent_ids = tree.get_parent_ids()
                num_parents = len(parent_ids)
                for parent_id in parent_ids:
                    parent_tree = tree.branch.repository.revision_tree(parent_id)
                    parent_trees.append((parent_id, parent_tree))
                    parent_tree.lock_read()
                result.set_parent_trees(parent_trees, [])
                result.set_state_from_inventory(tree.inventory)
            finally:
                for revid, parent_tree in parent_trees:
                    parent_tree.unlock()
        except:
            # The caller won't have a chance to unlock this, so make sure we
            # cleanup ourselves
            result.unlock()
            raise
        return result

    def update_by_delta(self, delta):
        """Apply an inventory delta to the dirstate for tree 0.

        :param delta: An inventory delta. See Inventory.apply_delta for
            details.
        """
        self._read_dirblocks_if_needed()
        insertions = {}
        removals = {}
        for old_path, new_path, file_id, inv_entry in sorted(delta, reverse=True):
            if (file_id in insertions) or (file_id in removals):
                raise AssertionError("repeated file id in delta %r" % (file_id,))
            if old_path is not None:
                old_path = old_path.encode('utf-8')
                removals[file_id] = old_path
            if new_path is not None:
                new_path = new_path.encode('utf-8')
                dirname, basename = osutils.split(new_path)
                key = (dirname, basename, file_id)
                minikind = DirState._kind_to_minikind[inv_entry.kind]
                if minikind == 't':
                    fingerprint = inv_entry.reference_revision
                else:
                    fingerprint = ''
                insertions[file_id] = (key, minikind, inv_entry.executable,
                                       fingerprint, new_path)
            # Transform moves into delete+add pairs
            if None not in (old_path, new_path):
                for child in self._iter_child_entries(0, old_path):
                    if child[0][2] in insertions or child[0][2] in removals:
                        continue
                    child_dirname = child[0][0]
                    child_basename = child[0][1]
                    minikind = child[1][0][0]
                    fingerprint = child[1][0][4]
                    executable = child[1][0][3]
                    old_child_path = osutils.pathjoin(child[0][0],
                                                      child[0][1])
                    removals[child[0][2]] = old_child_path
                    child_suffix = child_dirname[len(old_path):]
                    new_child_dirname = (new_path + child_suffix)
                    key = (new_child_dirname, child_basename, child[0][2])
                    new_child_path = os.path.join(new_child_dirname,
                                                  child_basename)
                    insertions[child[0][2]] = (key, minikind, executable,
                                               fingerprint, new_child_path)
        self._apply_removals(removals.values())
        self._apply_insertions(insertions.values())

    def _apply_removals(self, removals):
        for path in sorted(removals, reverse=True):
            dirname, basename = osutils.split(path)
            block_i, entry_i, d_present, f_present = \
                self._get_block_entry_index(dirname, basename, 0)
            entry = self._dirblocks[block_i][1][entry_i]
            self._make_absent(entry)
            # See if we have a malformed delta: deleting a directory must not
            # leave crud behind. This increases the number of bisects needed
            # substantially, but deletion or renames of large numbers of paths
            # is rare enough it shouldn't be an issue (famous last words?) RBC
            block_i, entry_i, d_present, f_present = \
                self._get_block_entry_index(path, '', 0)
            if d_present:
                # The dir block is still present in the dirstate; this could
                # be due to it being in a parent tree, or a corrupt delta.
                for child_entry in self._dirblocks[block_i][1]:
                    if child_entry[1][0][0] not in ('r', 'a'):
                        raise errors.InconsistentDelta(path, entry[0][2],
                            "The file id was deleted but its children were "
                            "not deleted.")

    def _apply_insertions(self, adds):
        for key, minikind, executable, fingerprint, path_utf8 in sorted(adds):
            self.update_minimal(key, minikind, executable, fingerprint,
                                path_utf8=path_utf8)

    def update_basis_by_delta(self, delta, new_revid):
        """Update the parents of this tree after a commit.

        This gives the tree one parent, with revision id new_revid. The
        inventory delta is applied to the current basis tree to generate the
        inventory for the parent new_revid, and all other parent trees are
        discarded.

        Note that an exception during the operation of this method will leave
        the dirstate in a corrupt state where it should not be saved.

        Finally, we expect all changes to be synchronising the basis tree with
        the working tree.

        :param new_revid: The new revision id for the tree's parent.
        :param delta: An inventory delta (see apply_inventory_delta) describing
            the changes from the current left most parent revision to new_revid.
        """
        self._read_dirblocks_if_needed()
        self._discard_merge_parents()
        if self._ghosts != []:
            raise NotImplementedError(self.update_basis_by_delta)
        if len(self._parents) == 0:
            # setup a blank tree, the most simple way.
            empty_parent = DirState.NULL_PARENT_DETAILS
            for entry in self._iter_entries():
                entry[1].append(empty_parent)
            self._parents.append(new_revid)
        else:
            self._parents[0] = new_revid

        delta = sorted(delta, reverse=True)
        adds = []
        changes = []
        deletes = []
        # The paths this function accepts are unicode and must be encoded as we
        # go.
        encode = cache_utf8.encode
        inv_to_entry = self._inv_entry_to_details
        # delta is now (deletes, changes), (adds) in reverse lexicographical
        # order.
        # deletes in reverse lexicographic order are safe to process in situ.
        # renames are not, as a rename from any path could go to a path
        # lexicographically lower, so we transform renames into delete, add
        # pairs, expanding them recursively as needed.
        # At the same time, to reduce interface friction we convert the input
        # inventory entries to dirstate.
        root_only = ('', '')
        for old_path, new_path, file_id, inv_entry in delta:
            if old_path is None:
                adds.append((None, encode(new_path), file_id,
                    inv_to_entry(inv_entry), True))
            elif new_path is None:
                deletes.append((encode(old_path), None, file_id, None, True))
            elif (old_path, new_path) != root_only:
                # Because renames must preserve their children we must have
                # processed all relocations and removes before hand. The sort
                # order ensures we've examined the child paths, but we also
                # have to execute the removals, or the split to an add/delete
                # pair will result in the deleted item being reinserted, or
                # renamed items being reinserted twice - and possibly at the
                # wrong place. Splitting into a delete/add pair also simplifies
                # the handling of entries with ('f', ...), ('r' ...) because
                # the target of the 'r' is old_path here, and we add that to
                # deletes, meaning that the add handler does not need to check
                # for 'r' items on every pass.
                self._update_basis_apply_deletes(deletes)
                deletes = []
                new_path_utf8 = encode(new_path)
                # Split into an add/delete pair recursively.
                adds.append((None, new_path_utf8, file_id,
                    inv_to_entry(inv_entry), False))
                # Expunge deletes that we've seen so that deleted/renamed
                # children of a rename directory are handled correctly.
                new_deletes = reversed(list(self._iter_child_entries(1,
                    encode(old_path))))
                # Remove the current contents of the tree at orig_path, and
                # reinsert at the correct new path.
                for entry in new_deletes:
                    if entry[0][0]:
                        source_path = entry[0][0] + '/' + entry[0][1]
                    else:
                        source_path = entry[0][1]
                    if new_path_utf8:
                        target_path = new_path_utf8 + source_path[len(old_path):]
                    else:
                        if old_path == '':
                            raise AssertionError("cannot rename directory to"
                                " itself")
                        target_path = source_path[len(old_path) + 1:]
                    adds.append((None, target_path, entry[0][2], entry[1][1], False))
                    deletes.append(
                        (source_path, target_path, entry[0][2], None, False))
                deletes.append(
                    (encode(old_path), new_path, file_id, None, False))
            else:
                # changes to just the root should not require remove/insertion
                # of everything.
                changes.append((encode(old_path), encode(new_path), file_id,
                    inv_to_entry(inv_entry)))

        # Finish expunging deletes/first half of renames.
        self._update_basis_apply_deletes(deletes)
        # Reinstate second half of renames and new paths.
        self._update_basis_apply_adds(adds)
        # Apply in-situ changes.
        self._update_basis_apply_changes(changes)

        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        self._header_state = DirState.IN_MEMORY_MODIFIED
        self._id_index = None
        return

    def _update_basis_apply_adds(self, adds):
        """Apply a sequence of adds to tree 1 during update_basis_by_delta.

        They may be adds, or renames that have been split into add/delete
        pairs.

        :param adds: A sequence of adds. Each add is a tuple:
            (None, new_path_utf8, file_id, (entry_details), real_add). real_add
            is False when the add is the second half of a remove-and-reinsert
            pair created to handle renames and deletes.
        """
        # Adds are accumulated partly from renames, so can be in any input
        # order - sort it.
        adds.sort()
        # adds is now in lexicographic order, which places all parents before
        # their children, so we can process it linearly.
        absent = 'ar'
        for old_path, new_path, file_id, new_details, real_add in adds:
            # the entry for this file_id must be in tree 0.
            entry = self._get_entry(0, file_id, new_path)
            if entry[0] is None or entry[0][2] != file_id:
                self._changes_aborted = True
                raise errors.InconsistentDelta(new_path, file_id,
                    'working tree does not contain new entry')
            if real_add and entry[1][1][0] not in absent:
                self._changes_aborted = True
                raise errors.InconsistentDelta(new_path, file_id,
                    'The entry was considered to be a genuinely new record,'
                    ' but there was already an old record for it.')
            # We don't need to update the target of an 'r' because the handling
            # of renames turns all 'r' situations into a delete at the original
            # location.
            entry[1][1] = new_details

    def _update_basis_apply_changes(self, changes):
        """Apply a sequence of changes to tree 1 during update_basis_by_delta.

        :param changes: A sequence of changes. Each change is a tuple:
            (path_utf8, path_utf8, file_id, (entry_details))
        """
        absent = 'ar'
        for old_path, new_path, file_id, new_details in changes:
            # the entry for this file_id must be in tree 0.
            entry = self._get_entry(0, file_id, new_path)
            if entry[0] is None or entry[0][2] != file_id:
                self._changes_aborted = True
                raise errors.InconsistentDelta(new_path, file_id,
                    'working tree does not contain new entry')
            if (entry[1][0][0] in absent or
                entry[1][1][0] in absent):
                self._changes_aborted = True
                raise errors.InconsistentDelta(new_path, file_id,
                    'changed entry considered absent')
            entry[1][1] = new_details

    def _update_basis_apply_deletes(self, deletes):
        """Apply a sequence of deletes to tree 1 during update_basis_by_delta.

        They may be deletes, or renames that have been split into add/delete
        pairs.

        :param deletes: A sequence of deletes. Each delete is a tuple:
            (old_path_utf8, new_path_utf8, file_id, None, real_delete).
            real_delete is True when the desired outcome is an actual deletion
            rather than the rename handling logic temporarily deleting a path
            during the replacement of a parent.
        """
        null = DirState.NULL_PARENT_DETAILS
        for old_path, new_path, file_id, _, real_delete in deletes:
            if real_delete != (new_path is None):
                raise AssertionError("bad delete delta")
            # the entry for this file_id must be in tree 1.
            dirname, basename = osutils.split(old_path)
            block_index, entry_index, dir_present, file_present = \
                self._get_block_entry_index(dirname, basename, 1)
            if not file_present:
                self._changes_aborted = True
                raise errors.InconsistentDelta(old_path, file_id,
                    'basis tree does not contain removed entry')
            entry = self._dirblocks[block_index][1][entry_index]
            if entry[0][2] != file_id:
                self._changes_aborted = True
                raise errors.InconsistentDelta(old_path, file_id,
                    'mismatched file_id in tree 1')
            if real_delete:
                if entry[1][0][0] != 'a':
                    self._changes_aborted = True
                    raise errors.InconsistentDelta(old_path, file_id,
                        'This was marked as a real delete, but the WT state'
                        ' claims that it still exists and is versioned.')
                del self._dirblocks[block_index][1][entry_index]
            else:
                if entry[1][0][0] == 'a':
                    self._changes_aborted = True
                    raise errors.InconsistentDelta(old_path, file_id,
                        'The entry was considered a rename, but the source path'
                        ' is marked as absent.')
                    # For whatever reason, we were asked to rename an entry
                    # that was originally marked as deleted. This could be
                    # because we are renaming the parent directory, and the WT
                    # current state has the file marked as deleted.
                elif entry[1][0][0] == 'r':
                    # implement the rename
                    del self._dirblocks[block_index][1][entry_index]
                else:
                    # it is being resurrected here, so blank it out temporarily.
                    self._dirblocks[block_index][1][entry_index][1][1] = null

    def _observed_sha1(self, entry, sha1, stat_value,
        _stat_to_minikind=_stat_to_minikind, _pack_stat=pack_stat):
        """Note the sha1 of a file.

        :param entry: The entry the sha1 is for.
        :param sha1: The observed sha1.
        :param stat_value: The os.lstat for the file.
        """
        try:
            minikind = _stat_to_minikind[stat_value.st_mode & 0170000]
        except KeyError:
            # Unhandled kind
            return None
        packed_stat = _pack_stat(stat_value)
        if minikind == 'f':
            if self._cutoff_time is None:
                self._sha_cutoff_time()
            if (stat_value.st_mtime < self._cutoff_time
                and stat_value.st_ctime < self._cutoff_time):
                entry[1][0] = ('f', sha1, entry[1][0][2], entry[1][0][3],
                               packed_stat)
                self._dirblock_state = DirState.IN_MEMORY_MODIFIED

    def _sha_cutoff_time(self):
        """Return cutoff time.

        Files modified more recently than this time are at risk of being
        undetectably modified and so can't be cached.
        """
        # Cache the cutoff time as long as we hold a lock.
        # time.time() isn't super expensive (approx 3.38us), but
        # when you call it 50,000 times it adds up.
        # For comparison, os.lstat() costs 7.2us if it is hot.
        self._cutoff_time = int(time.time()) - 3
        return self._cutoff_time

    def _lstat(self, abspath, entry):
        """Return the os.lstat value for this path."""
        return os.lstat(abspath)

    def _sha1_file_and_mutter(self, abspath):
        # when -Dhashcache is turned on, this is monkey-patched in to log
        # file reads
        trace.mutter("dirstate sha1 " + abspath)
        return osutils.sha_file_by_name(abspath)

    def _is_executable(self, mode, old_executable):
        """Is this file executable?"""
        return bool(S_IEXEC & mode)

    def _is_executable_win32(self, mode, old_executable):
        """On win32 the executable bit is stored in the dirstate."""
        return old_executable

    if sys.platform == 'win32':
        _is_executable = _is_executable_win32

    def _read_link(self, abspath, old_link):
        """Read the target of a symlink."""
        # TODO: jam 200700301 On Win32, this could just return the value
        #       already in memory. However, this really needs to be done at a
        #       higher level, because there either won't be anything on disk,
        #       or the thing on disk will be a file.
        fs_encoding = osutils._fs_enc
        if isinstance(abspath, unicode):
            # abspath is defined as the path to pass to lstat. readlink is
            # buggy in python < 2.6 (it doesn't encode unicode path into FS
            # encoding), so we need to encode ourselves knowing that unicode
            # paths are produced by UnicodeDirReader on purpose.
            abspath = abspath.encode(fs_encoding)
        target = os.readlink(abspath)
        if fs_encoding not in ('UTF-8', 'US-ASCII', 'ANSI_X3.4-1968'):
            # Change encoding if needed
            target = target.decode(fs_encoding).encode('UTF-8')
        return target

    def get_ghosts(self):
        """Return a list of the parent tree revision ids that are ghosts."""
        self._read_header_if_needed()
        return self._ghosts

    def get_lines(self):
        """Serialise the entire dirstate to a sequence of lines."""
        if (self._header_state == DirState.IN_MEMORY_UNMODIFIED and
            self._dirblock_state == DirState.IN_MEMORY_UNMODIFIED):
            # read what's on disk.
            self._state_file.seek(0)
            return self._state_file.readlines()
        lines = []
        lines.append(self._get_parents_line(self.get_parent_ids()))
        lines.append(self._get_ghosts_line(self._ghosts))
        # append the root line which is special cased
        lines.extend(map(self._entry_to_line, self._iter_entries()))
        return self._get_output_lines(lines)

    def _get_ghosts_line(self, ghost_ids):
        """Create a line for the state file for ghost information."""
        return '\0'.join([str(len(ghost_ids))] + ghost_ids)

    def _get_parents_line(self, parent_ids):
        """Create a line for the state file for parents information."""
        return '\0'.join([str(len(parent_ids))] + parent_ids)
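
    # Illustrative example (assumed revision ids, not from the original
    # source):
    #   _get_parents_line(['rev-1']) -> '1\x00rev-1'
    #   _get_ghosts_line([])         -> '0'
    # i.e. a decimal count followed by NUL-separated revision ids.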

    def _get_fields_to_entry(self):
        """Get a function which converts entry fields into an entry record.

        This handles size and executable, as well as parent records.

        :return: A function which takes a list of fields, and returns an
            appropriate record for storing in memory.
        """
        # This is intentionally unrolled for performance
        num_present_parents = self._num_present_parents()
        if num_present_parents == 0:
            def fields_to_entry_0_parents(fields, _int=int):
                path_name_file_id_key = (fields[0], fields[1], fields[2])
                return (path_name_file_id_key, [
                    ( # Current tree
                    fields[3],                # minikind
                    fields[4],                # fingerprint
                    _int(fields[5]),          # size
                    fields[6] == 'y',         # executable
                    fields[7],                # packed_stat or revision_id
                    )])
            return fields_to_entry_0_parents
        elif num_present_parents == 1:
            def fields_to_entry_1_parent(fields, _int=int):
                path_name_file_id_key = (fields[0], fields[1], fields[2])
                return (path_name_file_id_key, [
                    ( # Current tree
                    fields[3],                # minikind
                    fields[4],                # fingerprint
                    _int(fields[5]),          # size
                    fields[6] == 'y',         # executable
                    fields[7],                # packed_stat or revision_id
                    ),
                    ( # Parent 1
                    fields[8],                # minikind
                    fields[9],                # fingerprint
                    _int(fields[10]),         # size
                    fields[11] == 'y',        # executable
                    fields[12],               # packed_stat or revision_id
                    ),
                    ])
            return fields_to_entry_1_parent
        elif num_present_parents == 2:
            def fields_to_entry_2_parents(fields, _int=int):
                path_name_file_id_key = (fields[0], fields[1], fields[2])
                return (path_name_file_id_key, [
                    ( # Current tree
                    fields[3],                # minikind
                    fields[4],                # fingerprint
                    _int(fields[5]),          # size
                    fields[6] == 'y',         # executable
                    fields[7],                # packed_stat or revision_id
                    ),
                    ( # Parent 1
                    fields[8],                # minikind
                    fields[9],                # fingerprint
                    _int(fields[10]),         # size
                    fields[11] == 'y',        # executable
                    fields[12],               # packed_stat or revision_id
                    ),
                    ( # Parent 2
                    fields[13],               # minikind
                    fields[14],               # fingerprint
                    _int(fields[15]),         # size
                    fields[16] == 'y',        # executable
                    fields[17],               # packed_stat or revision_id
                    ),
                    ])
            return fields_to_entry_2_parents
        def fields_to_entry_n_parents(fields, _int=int):
            path_name_file_id_key = (fields[0], fields[1], fields[2])
            trees = [(fields[cur],                # minikind
                      fields[cur+1],              # fingerprint
                      _int(fields[cur+2]),        # size
                      fields[cur+3] == 'y',       # executable
                      fields[cur+4],              # stat or revision_id
                     ) for cur in xrange(3, len(fields)-1, 5)]
            return path_name_file_id_key, trees
        return fields_to_entry_n_parents
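
    # Illustrative example (assumed field values, not from the original
    # source): with no parents, a fields list like
    #   ['src', 'foo.c', 'file-id', 'f', 'sha1...', '12', 'n', 'packed...']
    # becomes
    #   (('src', 'foo.c', 'file-id'),
    #    [('f', 'sha1...', 12, False, 'packed...')])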

    def get_parent_ids(self):
        """Return a list of the parent tree ids for the directory state."""
        self._read_header_if_needed()
        return list(self._parents)

    def _get_block_entry_index(self, dirname, basename, tree_index):
        """Get the coordinates for a path in the state structure.

        :param dirname: The utf8 dirname to lookup.
        :param basename: The utf8 basename to lookup.
        :param tree_index: The index of the tree for which this lookup should
            be attempted.
        :return: A tuple describing where the path is located, or should be
            inserted. The tuple contains four fields: the block index, the row
            index, the directory is present (boolean), the entire path is
            present (boolean). There is no guarantee that either
            coordinate is currently reachable unless the found field for it is
            True. For instance, a directory not present in the searched tree
            may be returned with a value one greater than the current highest
            block offset. The directory present field will always be True when
            the path present field is True. The directory present field does
            NOT indicate that the directory is present in the searched tree,
            rather it indicates that there are at least some files in some
            tree present there.
        """
        self._read_dirblocks_if_needed()
        key = dirname, basename, ''
        block_index, present = self._find_block_index_from_key(key)
        if not present:
            # no such directory - return the dir index and 0 for the row.
            return block_index, 0, False, False
        block = self._dirblocks[block_index][1] # access the entries only
        entry_index, present = self._find_entry_index(key, block)
        # linear search through entries at this path to find the one
        # requested.
        while entry_index < len(block) and block[entry_index][0][1] == basename:
            if block[entry_index][1][tree_index][0] not in 'ar':
                # neither absent nor relocated
                return block_index, entry_index, True, True
            entry_index += 1
        return block_index, entry_index, True, False
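
    # Usage sketch (hypothetical values, not from the original source):
    #   block_index, entry_index, dir_present, file_present = \
    #       state._get_block_entry_index('src', 'foo.c', 0)
    # looks up 'src/foo.c' in tree 0; file_present is only True when a row
    # exists there whose minikind is not 'a' (absent) or 'r' (relocated).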

    def _get_entry(self, tree_index, fileid_utf8=None, path_utf8=None, include_deleted=False):
        """Get the dirstate entry for path in tree tree_index.

        If either file_id or path is supplied, it is used as the key to lookup.
        If both are supplied, the fastest lookup is used, and an error is
        raised if they do not both point at the same row.

        :param tree_index: The index of the tree we wish to locate this path
            in. If the path is present in that tree, the entry containing its
            details is returned, otherwise (None, None) is returned.
            0 is the working tree, higher indexes are successive parent
            trees.
        :param fileid_utf8: A utf8 file_id to look up.
        :param path_utf8: An utf8 path to be looked up.
        :param include_deleted: If True, and performing a lookup via
            fileid_utf8 rather than path_utf8, return an entry for deleted
            (absent) paths.
        :return: The dirstate entry tuple for path, or (None, None)
        """
        self._read_dirblocks_if_needed()
        if path_utf8 is not None:
            if type(path_utf8) is not str:
                raise AssertionError('path_utf8 is not a str: %s %s'
                    % (type(path_utf8), path_utf8))
            # path lookups are faster
            dirname, basename = osutils.split(path_utf8)
            block_index, entry_index, dir_present, file_present = \
                self._get_block_entry_index(dirname, basename, tree_index)
            if not file_present:
                return None, None
            entry = self._dirblocks[block_index][1][entry_index]
            if not (entry[0][2] and entry[1][tree_index][0] not in ('a', 'r')):
                raise AssertionError('unversioned entry?')
            if fileid_utf8:
                if entry[0][2] != fileid_utf8:
                    self._changes_aborted = True
                    raise errors.BzrError('integrity error ? : mismatching'
                        ' tree_index, file_id and path')
            return entry
        else:
            possible_keys = self._get_id_index().get(fileid_utf8, None)
            if not possible_keys:
                return None, None
            for key in possible_keys:
                block_index, present = \
                    self._find_block_index_from_key(key)
                # strange, probably indicates an out of date
                # id index - for now, allow this.
                if not present:
                    continue
                # WARNING: DO not change this code to use _get_block_entry_index
                # as that function is not suitable: it does not use the key
                # to lookup, and thus the wrong coordinates are returned.
                block = self._dirblocks[block_index][1]
                entry_index, present = self._find_entry_index(key, block)
                if present:
                    entry = self._dirblocks[block_index][1][entry_index]
                    if entry[1][tree_index][0] in 'fdlt':
                        # this is the result we are looking for: the
                        # real home of this file_id in this tree.
                        return entry
                    if entry[1][tree_index][0] == 'a':
                        # there is no home for this entry in this tree
                        if include_deleted:
                            return entry
                        return None, None
                    if entry[1][tree_index][0] != 'r':
                        raise AssertionError(
                            "entry %r has invalid minikind %r for tree %r" \
                            % (entry,
                               entry[1][tree_index][0],
                               tree_index))
                    real_path = entry[1][tree_index][1]
                    return self._get_entry(tree_index, fileid_utf8=fileid_utf8,
                        path_utf8=real_path)
            return None, None

    @classmethod
    def initialize(cls, path):
        """Create a new dirstate on path.

        The new dirstate will be an empty tree - that is it has no parents,
        and only a root node - which has id ROOT_ID.

        :param path: The name of the file for the dirstate.
        :return: A write-locked DirState object.
        """
        # This constructs a new DirState object on a path, sets the _state_file
        # to a new empty file for that path. It then calls _set_data() with our
        # stock empty dirstate information - a root with ROOT_ID, no children,
        # and no parents. Finally it calls save() to ensure that this data will
        # persist.
        result = cls(path)
        # root dir and root dir contents with no children.
        empty_tree_dirblocks = [('', []), ('', [])]
        # a new root directory, with a NULLSTAT.
        empty_tree_dirblocks[0][1].append(
            (('', '', inventory.ROOT_ID), [
                ('d', '', 0, False, DirState.NULLSTAT),
            ]))
        result.lock_write()
        try:
            result._set_data([], empty_tree_dirblocks)
            result.save()
        except:
            result.unlock()
            raise
        return result

    @staticmethod
    def _inv_entry_to_details(inv_entry):
        """Convert an inventory entry (from a revision tree) to state details.

        :param inv_entry: An inventory entry whose sha1 and link targets can be
            relied upon, and which has a revision set.
        :return: A details tuple - the details for a single tree at a path +
            id.
        """
        kind = inv_entry.kind
        minikind = DirState._kind_to_minikind[kind]
        tree_data = inv_entry.revision
        if kind == 'directory':
            fingerprint = ''
            size = 0
            executable = False
        elif kind == 'symlink':
            if inv_entry.symlink_target is None:
                fingerprint = ''
            else:
                fingerprint = inv_entry.symlink_target.encode('utf8')
            size = 0
            executable = False
        elif kind == 'file':
            fingerprint = inv_entry.text_sha1 or ''
            size = inv_entry.text_size or 0
            executable = inv_entry.executable
        elif kind == 'tree-reference':
            fingerprint = inv_entry.reference_revision or ''
            size = 0
            executable = False
        else:
            raise Exception("can't pack %s" % inv_entry)
        return (minikind, fingerprint, size, executable, tree_data)
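
    # Illustrative results (assumed values, not from the original source):
    # a file entry converts to ('f', <text_sha1>, <text_size>, <executable>,
    # <revision>), a directory to ('d', '', 0, False, <revision>), and a
    # symlink to ('l', <utf8 target>, 0, False, <revision>).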

    def _iter_child_entries(self, tree_index, path_utf8):
        """Iterate over all the entries that are children of path_utf8.

        This only returns entries that are present (not in 'a', 'r') in
        tree_index. tree_index data is not refreshed, so if tree 0 is used,
        results may differ from that obtained if paths were statted to
        determine what ones were directories.

        Asking for the children of a non-directory will return an empty
        iterator.
        """
        pending_dirs = []
        next_pending_dirs = [path_utf8]
        absent = 'ar'
        while next_pending_dirs:
            pending_dirs = next_pending_dirs
            next_pending_dirs = []
            for path in pending_dirs:
                block_index, present = self._find_block_index_from_key(
                    (path, '', ''))
                if block_index == 0:
                    block_index = 1
                    if len(self._dirblocks) == 1:
                        # asked for the children of the root with no other
                        # contents.
                        return
                if not present:
                    # children of a non-directory asked for.
                    continue
                block = self._dirblocks[block_index]
                for entry in block[1]:
                    kind = entry[1][tree_index][0]
                    if kind not in absent:
                        yield entry
                    if kind == 'd':
                        if entry[0][0]:
                            path = entry[0][0] + '/' + entry[0][1]
                        else:
                            path = entry[0][1]
                        next_pending_dirs.append(path)

    def _iter_entries(self):
        """Iterate over all the entries in the dirstate.

        Each yielded item is an entry in the standard format described in the
        docstring of bzrlib.dirstate.
        """
        self._read_dirblocks_if_needed()
        for directory in self._dirblocks:
            for entry in directory[1]:
                yield entry

    def _get_id_index(self):
        """Get an id index of self._dirblocks."""
        if self._id_index is None:
            id_index = {}
            for key, tree_details in self._iter_entries():
                id_index.setdefault(key[2], set()).add(key)
            self._id_index = id_index
        return self._id_index

    def _get_output_lines(self, lines):
        """Format lines for final output.

        :param lines: A sequence of lines containing the parents list and the
            path lines.
        """
        output_lines = [DirState.HEADER_FORMAT_3]
        lines.append('') # a final newline
        inventory_text = '\0\n\0'.join(lines)
        output_lines.append('crc32: %s\n' % (zlib.crc32(inventory_text),))
        # -3, 1 for num parents, 1 for ghosts, 1 for final newline
        num_entries = len(lines)-3
        output_lines.append('num_entries: %s\n' % (num_entries,))
        output_lines.append(inventory_text)
        return output_lines
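
    # Illustrative layout (values assumed, not from the original source):
    # the serialised file therefore begins with the HEADER_FORMAT_3 line,
    # then 'crc32: <signed int>\n' and 'num_entries: <count>\n', followed
    # by the NUL-and-newline delimited entry data built above.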

    def _make_deleted_row(self, fileid_utf8, parents):
        """Return a deleted row for fileid_utf8."""
        return ('/', 'RECYCLED.BIN', 'file', fileid_utf8, 0, DirState.NULLSTAT,
            ''), parents

    def _num_present_parents(self):
        """The number of parent entries in each record row."""
        return len(self._parents) - len(self._ghosts)

    @staticmethod
    def on_file(path):
        """Construct a DirState on the file at path path.

        :return: An unlocked DirState object, associated with the given path.
        """
        result = DirState(path)
        return result

    def _read_dirblocks_if_needed(self):
        """Read in all the dirblocks from the file if they are not in memory.

        This populates self._dirblocks, and sets self._dirblock_state to
        IN_MEMORY_UNMODIFIED. It is not currently ready for incremental block
        loading.
        """
        self._read_header_if_needed()
        if self._dirblock_state == DirState.NOT_IN_MEMORY:
            _read_dirblocks(self)

    def _read_header(self):
        """This reads in the metadata header, and the parent ids.

        After reading in, the file should be positioned at the null
        just before the start of the first record in the file.

        :return: (expected crc checksum, number of entries, parent list)
        """
        self._read_prelude()
        parent_line = self._state_file.readline()
        info = parent_line.split('\0')
        num_parents = int(info[0])
        self._parents = info[1:-1]
        ghost_line = self._state_file.readline()
        info = ghost_line.split('\0')
        num_ghosts = int(info[1])
        self._ghosts = info[2:-1]
        self._header_state = DirState.IN_MEMORY_UNMODIFIED
        self._end_of_header = self._state_file.tell()

    def _read_header_if_needed(self):
        """Read the header of the dirstate file if needed."""
        # inline this as it will be called a lot
        if not self._lock_token:
            raise errors.ObjectNotLocked(self)
        if self._header_state == DirState.NOT_IN_MEMORY:
            self._read_header()

    def _read_prelude(self):
        """Read in the prelude header of the dirstate file.

        This only reads in the stuff that is not connected to the crc
        checksum. The position will be correct to read in the rest of
        the file and check the checksum after this point.
        The next entry in the file should be the number of parents,
        and their ids. Followed by a newline.
        """
        header = self._state_file.readline()
        if header != DirState.HEADER_FORMAT_3:
            raise errors.BzrError(
                'invalid header line: %r' % (header,))
        crc_line = self._state_file.readline()
        if not crc_line.startswith('crc32: '):
            raise errors.BzrError('missing crc32 checksum: %r' % crc_line)
        self.crc_expected = int(crc_line[len('crc32: '):-1])
        num_entries_line = self._state_file.readline()
        if not num_entries_line.startswith('num_entries: '):
            raise errors.BzrError('missing num_entries line')
        self._num_entries = int(num_entries_line[len('num_entries: '):-1])

    def sha1_from_stat(self, path, stat_result, _pack_stat=pack_stat):
        """Find a sha1 given a stat lookup."""
        return self._get_packed_stat_index().get(_pack_stat(stat_result), None)

    def _get_packed_stat_index(self):
        """Get a packed_stat index of self._dirblocks."""
        if self._packed_stat_index is None:
            index = {}
            for key, tree_details in self._iter_entries():
                if tree_details[0][0] == 'f':
                    index[tree_details[0][4]] = tree_details[0][1]
            self._packed_stat_index = index
        return self._packed_stat_index
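
    # Usage sketch (hypothetical call, not from the original source):
    #   sha1 = state.sha1_from_stat('src/foo.c', os.lstat('src/foo.c'))
    # only returns a cached sha1 when the packed stat of the file on disk
    # matches one recorded for a file ('f') row; otherwise None.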

    def save(self):
        """Save any pending changes created during this session.

        We reuse the existing file, because that prevents race conditions with
        file creation, and use oslocks on it to prevent concurrent modification
        and reads - because dirstate's incremental data aggregation is not
        compatible with reading a modified file, and replacing a file in use by
        another process is impossible on Windows.

        A dirstate in read only mode should be smart enough though to validate
        that the file has not changed, and otherwise discard its cache and
        start over, to allow for fine grained read lock duration, so 'status'
        won't block 'commit' - for example.
        """
        if self._changes_aborted:
            # Should this be a warning? For now, I'm expecting that places that
            # mark it inconsistent will warn, making a warning here redundant.
            trace.mutter('Not saving DirState because '
                    '_changes_aborted is set.')
            return
        if (self._header_state == DirState.IN_MEMORY_MODIFIED or
            self._dirblock_state == DirState.IN_MEMORY_MODIFIED):

            grabbed_write_lock = False
            if self._lock_state != 'w':
                grabbed_write_lock, new_lock = self._lock_token.temporary_write_lock()
                # Switch over to the new lock, as the old one may be closed.
                # TODO: jam 20070315 We should validate the disk file has
                #       not changed contents. Since temporary_write_lock may
                #       not be an atomic operation.
                self._lock_token = new_lock
                self._state_file = new_lock.f
                if not grabbed_write_lock:
                    # We couldn't grab a write lock, so we switch back to a read one
                    return
            try:
                self._state_file.seek(0)
                self._state_file.writelines(self.get_lines())
                self._state_file.truncate()
                self._state_file.flush()
                self._header_state = DirState.IN_MEMORY_UNMODIFIED
                self._dirblock_state = DirState.IN_MEMORY_UNMODIFIED
            finally:
                if grabbed_write_lock:
                    self._lock_token = self._lock_token.restore_read_lock()
                    self._state_file = self._lock_token.f
                    # TODO: jam 20070315 We should validate the disk file has
                    #       not changed contents. Since restore_read_lock may
                    #       not be an atomic operation.

    def _set_data(self, parent_ids, dirblocks):
        """Set the full dirstate data in memory.

        This is an internal function used to completely replace the objects
        in memory state. It puts the dirstate into state 'full-dirty'.

        :param parent_ids: A list of parent tree revision ids.
        :param dirblocks: A list containing one tuple for each directory in the
            tree. Each tuple contains the directory path and a list of entries
            found in that directory.
        """
        # our memory copy is now authoritative.
        self._dirblocks = dirblocks
        self._header_state = DirState.IN_MEMORY_MODIFIED
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        self._parents = list(parent_ids)
        self._id_index = None
        self._packed_stat_index = None

    def set_path_id(self, path, new_id):
        """Change the id of path to new_id in the current working tree.

        :param path: The path inside the tree to set - '' is the root, 'foo'
            is the path foo in the root.
        :param new_id: The new id to assign to the path. This must be a utf8
            file id (not unicode, and not None).
        """
        self._read_dirblocks_if_needed()
        if len(path):
            # TODO: logic not written
            raise NotImplementedError(self.set_path_id)
        # TODO: check new id is unique
        entry = self._get_entry(0, path_utf8=path)
        if entry[0][2] == new_id:
            # Nothing to change.
            return
        # mark the old path absent, and insert a new root path
        self._make_absent(entry)
        self.update_minimal(('', '', new_id), 'd',
            path_utf8='', packed_stat=entry[1][0][4])
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        if self._id_index is not None:
            self._id_index.setdefault(new_id, set()).add(entry[0])

    def set_parent_trees(self, trees, ghosts):
        """Set the parent trees for the dirstate.

        :param trees: A list of revision_id, tree tuples. tree must be provided
            even if the revision_id refers to a ghost: supply an empty tree in
            this case.
        :param ghosts: A list of the revision_ids that are ghosts at the time
            of setting.
        """
        # TODO: generate a list of parent indexes to preserve to save
        # processing specific parent trees. In the common case one tree will
        # be preserved - the left most parent.
        # TODO: if the parent tree is a dirstate, we might want to walk them
        # all by path in parallel for 'optimal' common-case performance.
        # generate new root row.
        self._read_dirblocks_if_needed()
        # TODO future sketch: Examine the existing parents to generate a change
        # map and then walk the new parent trees only, mapping them into the
        # dirstate. Walk the dirstate at the same time to remove unreferenced
        # entries.
        # sketch: loop over all entries in the dirstate, cherry picking
        # entries from the parent trees, if they are not ghost trees.
        # after we finish walking the dirstate, all entries not in the dirstate
        # are deletes, so we want to append them to the end as per the design
        # discussions. So do a set difference on ids with the parents to
        # get deletes, and add them to the end.
        # During the update process we need to answer the following questions:
        # - find other keys containing a fileid in order to create cross-path
        #   links. We don't trivially use the inventory from other trees
        #   because this leads to either double touching, or to accessing
        #   missing keys,
        # - find other keys containing a path
        # We accumulate each entry via this dictionary, including the root
        by_path = {}
        id_index = {}
        # we could do parallel iterators, but because file id data may be
        # scattered throughout, we don't save on index overhead: we have to look
        # at everything anyway. We can probably save cycles by reusing parent
        # data and doing an incremental update when adding an additional
        # parent, but for now the common cases are adding a new parent (merge),
        # and replacing completely (commit), and commit is more common: so
        # optimise merge later.

        # ---- start generation of full tree mapping data
        # what trees should we use?
        parent_trees = [tree for rev_id, tree in trees if rev_id not in ghosts]
        # how many trees do we end up with
        parent_count = len(parent_trees)

        # one: the current tree
        for entry in self._iter_entries():
            # skip entries not in the current tree
            if entry[1][0][0] in 'ar': # absent, relocated
                continue
            by_path[entry[0]] = [entry[1][0]] + \
                [DirState.NULL_PARENT_DETAILS] * parent_count
            id_index[entry[0][2]] = set([entry[0]])

        # now the parent trees:
        for tree_index, tree in enumerate(parent_trees):
            # the index is off by one, adjust it.
            tree_index = tree_index + 1
            # when we add new locations for a fileid we need these ranges for
            # any fileid in this tree as we set the by_path[id] to:
            # already_processed_tree_details + new_details + new_location_suffix
            # the suffix is from tree_index+1:parent_count+1.
            new_location_suffix = [DirState.NULL_PARENT_DETAILS] * (parent_count - tree_index)
            # now stitch in all the entries from this tree
            for path, entry in tree.inventory.iter_entries_by_dir():
                # here we process each trees details for each item in the tree.
                # we first update any existing entries for the id at other paths,
                # then we either create or update the entry for the id at the
                # right path, and finally we add (if needed) a mapping from
                # file_id to this path. We do it in this order to allow us to
                # avoid checking all known paths for the id when generating a
                # new entry at this path: by adding the id->path mapping last,
                # all the mappings are valid and have correct relocation
                # records where needed.
                file_id = entry.file_id
                path_utf8 = path.encode('utf8')
                dirname, basename = osutils.split(path_utf8)
                new_entry_key = (dirname, basename, file_id)
                # tree index consistency: All other paths for this id in this tree
                # index must point to the correct path.
                for entry_key in id_index.setdefault(file_id, set()):
                    # TODO:PROFILING: It might be faster to just update
                    # rather than checking if we need to, and then overwrite
                    # the one we are located at.
                    if entry_key != new_entry_key:
                        # this file id is at a different path in one of the
                        # other trees, so put absent pointers there
                        # This is the vertical axis in the matrix, all pointing
                        # to the real path.
                        by_path[entry_key][tree_index] = ('r', path_utf8, 0, False, '')
                # by path consistency: Insert into an existing path record (trivial), or
                # add a new one with relocation pointers for the other tree indexes.
                if new_entry_key in id_index[file_id]:
                    # there is already an entry where this data belongs, just insert it.
                    by_path[new_entry_key][tree_index] = \
                        self._inv_entry_to_details(entry)
                else:
                    # add relocated entries to the horizontal axis - this row
                    # mapping from path,id. We need to look up the correct path
                    # for the indexes from 0 to tree_index -1
                    new_details = []
                    for lookup_index in xrange(tree_index):
                        # boundary case: this is the first occurrence of file_id
                        # so there are no id_indexes, possibly take this out of
                        # the loop?
                        if not len(id_index[file_id]):
                            new_details.append(DirState.NULL_PARENT_DETAILS)
                        else:
                            # grab any one entry, use it to find the right path.
                            # TODO: optimise this to reduce memory use in highly
                            # fragmented situations by reusing the relocation
                            # pointers.
                            a_key = iter(id_index[file_id]).next()
                            if by_path[a_key][lookup_index][0] in ('r', 'a'):
                                # it's a pointer or missing statement, use it as is.
                                new_details.append(by_path[a_key][lookup_index])
                            else:
                                # we have the right key, make a pointer to it.
                                real_path = ('/'.join(a_key[0:2])).strip('/')
                                new_details.append(('r', real_path, 0, False, ''))
                    new_details.append(self._inv_entry_to_details(entry))
                    new_details.extend(new_location_suffix)
                    by_path[new_entry_key] = new_details
                    id_index[file_id].add(new_entry_key)
        # --- end generation of full tree mappings

        # sort and output all the entries
        new_entries = self._sort_entries(by_path.items())
        self._entries_to_current_state(new_entries)
        self._parents = [rev_id for rev_id, tree in trees]
        self._ghosts = list(ghosts)
        self._header_state = DirState.IN_MEMORY_MODIFIED
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        self._id_index = id_index

    def _sort_entries(self, entry_list):
        """Given a list of entries, sort them into the right order.

        This is done when constructing a new dirstate from trees - normally we
        try to keep everything in sorted blocks all the time, but sometimes
        it's easier to sort after the fact.
        """
        def _key(entry):
            # sort by: directory parts, file name, file id
            return entry[0][0].split('/'), entry[0][1], entry[0][2]
        return sorted(entry_list, key=_key)
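
    # Illustrative example (assumed paths, not from the original source):
    # splitting the dirname means an entry under 'a/' sorts before one under
    # 'a-b/', because ['a'] < ['a-b'] componentwise, even though the raw
    # string 'a-b/...' would sort before 'a/...'.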

    def set_state_from_inventory(self, new_inv):
        """Set new_inv as the current state.

        This API is called by tree transform, and will usually occur with
        existing parent trees.

        :param new_inv: The inventory object to set current state from.
        """
        if 'evil' in debug.debug_flags:
            trace.mutter_callsite(1,
                "set_state_from_inventory called; please mutate the tree instead")
        self._read_dirblocks_if_needed()
        # Two iterators: current data and new data, both in dirblock order.
        # We zip them together, which tells about entries that are new in the
        # inventory, or removed in the inventory, or present in both and
        # possibly changed.
        #
        # You might think we could just synthesize a new dirstate directly
        # since we're processing it in the right order. However, we need to
        # also consider there may be any number of parent trees and relocation
        # pointers, and we don't want to duplicate that here.
        new_iterator = new_inv.iter_entries_by_dir()
        # we will be modifying the dirstate, so we need a stable iterator. In
        # future we might write one, for now we just clone the state into a
        # list - which is a shallow copy.
        old_iterator = iter(list(self._iter_entries()))
        # both must have roots so this is safe:
        current_new = new_iterator.next()
        current_old = old_iterator.next()
        def advance(iterator):
            try:
                return iterator.next()
            except StopIteration:
                return None
        while current_new or current_old:
            # skip entries in old that are not really there
            if current_old and current_old[1][0][0] in 'ar':
                # relocated or absent
                current_old = advance(old_iterator)
                continue
            if current_new:
                # convert new into dirblock style
                new_path_utf8 = current_new[0].encode('utf8')
                new_dirname, new_basename = osutils.split(new_path_utf8)
                new_id = current_new[1].file_id
                new_entry_key = (new_dirname, new_basename, new_id)
                current_new_minikind = \
                    DirState._kind_to_minikind[current_new[1].kind]
                if current_new_minikind == 't':
                    fingerprint = current_new[1].reference_revision or ''
                else:
                    # We normally only insert or remove records, or update
                    # them when it has significantly changed. Then we want to
                    # erase its fingerprint. Unaffected records should
                    # normally not be updated at all.
                    fingerprint = ''
            else:
                # for safety disable variables
                new_path_utf8 = new_dirname = new_basename = new_id = \
                    new_entry_key = None
            # 5 cases, we don't have a value that is strictly greater than
            # everything, so we make both end conditions explicit
            if not current_old:
                # old is finished: insert current_new into the state.
                self.update_minimal(new_entry_key, current_new_minikind,
                    executable=current_new[1].executable,
                    path_utf8=new_path_utf8, fingerprint=fingerprint)
                current_new = advance(new_iterator)
            elif not current_new:
                # new is finished
                self._make_absent(current_old)
                current_old = advance(old_iterator)
            elif new_entry_key == current_old[0]:
                # same - common case
                # We're looking at the same path and id in both the dirstate
                # and inventory, so just need to update the fields in the
                # dirstate from the one in the inventory.
                # TODO: update the record if anything significant has changed.
                # the minimal required trigger is if the execute bit or cached
                # kind has changed.
                if (current_old[1][0][3] != current_new[1].executable or
                    current_old[1][0][0] != current_new_minikind):
                    self.update_minimal(current_old[0], current_new_minikind,
                        executable=current_new[1].executable,
                        path_utf8=new_path_utf8, fingerprint=fingerprint)
                # both sides are dealt with, move on
                current_old = advance(old_iterator)
                current_new = advance(new_iterator)
            elif (cmp_by_dirs(new_dirname, current_old[0][0]) < 0
                  or (new_dirname == current_old[0][0]
                      and new_entry_key[1:] < current_old[0][1:])):
                # new comes before current_old
                # add an entry for this and advance new
                self.update_minimal(new_entry_key, current_new_minikind,
                    executable=current_new[1].executable,
                    path_utf8=new_path_utf8, fingerprint=fingerprint)
                current_new = advance(new_iterator)
            else:
                # we've advanced past the place where the old key would be,
                # without seeing it in the new list. so it must be gone.
                self._make_absent(current_old)
                current_old = advance(old_iterator)
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        self._id_index = None
        self._packed_stat_index = None

    def _make_absent(self, current_old):
        """Mark current_old - an entry - as absent for tree 0.

        :return: True if this was the last details entry for the entry key:
            that is, if the underlying block has had the entry removed, thus
            shrinking in length.
        """
        # build up paths that this id will be left at after the change is made,
        # so we can update their cross references in tree 0
        all_remaining_keys = set()
        # Don't check the working tree, because it's going.
        for details in current_old[1][1:]:
            if details[0] not in 'ar': # absent, relocated
                all_remaining_keys.add(current_old[0])
            elif details[0] == 'r': # relocated
                # record the key for the real path.
                all_remaining_keys.add(tuple(osutils.split(details[1])) + (current_old[0][2],))
            # absent rows are not present at any path.
        last_reference = current_old[0] not in all_remaining_keys
        if last_reference:
            # the current row consists entirely of the current item (being
            # marked absent), and relocated or absent entries for the other
            # trees: Remove it, it's meaningless.
            block = self._find_block(current_old[0])
            entry_index, present = self._find_entry_index(current_old[0], block[1])
            if not present:
                raise AssertionError('could not find entry for %s' % (current_old,))
            block[1].pop(entry_index)
            # if we have an id_index in use, remove this key from it for this id.
            if self._id_index is not None:
                self._id_index[current_old[0][2]].remove(current_old[0])
        # update all remaining keys for this id to record it as absent. The
        # existing details may either be the record we are marking as deleted
        # (if there were other trees with the id present at this path), or may
        # be relocations.
        for update_key in all_remaining_keys:
            update_block_index, present = \
                self._find_block_index_from_key(update_key)
            if not present:
                raise AssertionError('could not find block for %s' % (update_key,))
            update_entry_index, present = \
                self._find_entry_index(update_key, self._dirblocks[update_block_index][1])
            if not present:
                raise AssertionError('could not find entry for %s' % (update_key,))
            update_tree_details = self._dirblocks[update_block_index][1][update_entry_index][1]
            # it must not be absent at the moment
            if update_tree_details[0][0] == 'a': # absent
                raise AssertionError('bad row %r' % (update_tree_details,))
            update_tree_details[0] = DirState.NULL_PARENT_DETAILS
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        return last_reference

    def update_minimal(self, key, minikind, executable=False, fingerprint='',
                       packed_stat=None, size=0, path_utf8=None):
        """Update an entry to the state in tree 0.

        This will either create a new entry at 'key' or update an existing one.
        It also makes sure that any other records which might mention this are
        updated as well.

        :param key: (dir, name, file_id) for the new entry
        :param minikind: The type for the entry ('f' == 'file', 'd' ==
            'directory'), etc.
        :param executable: Should the executable bit be set?
        :param fingerprint: Simple fingerprint for new entry: sha1 for files,
            referenced revision id for subtrees, etc.
        :param packed_stat: Packed stat value for new entry.
        :param size: Size information for new entry
        :param path_utf8: key[0] + '/' + key[1], just passed in to avoid doing
            the path join again.

        If packed_stat and fingerprint are not given, they're invalidated in
        the entry.
        """
        block = self._find_block(key)[1]
        if packed_stat is None:
            packed_stat = DirState.NULLSTAT
        # XXX: Some callers pass '' as the packed_stat, and it seems to be
        # sometimes present in the dirstate - this seems oddly inconsistent.
        entry_index, present = self._find_entry_index(key, block)
        new_details = (minikind, fingerprint, size, executable, packed_stat)
        id_index = self._get_id_index()
        if not present:
            # new entry, synthesis cross reference here,
            existing_keys = id_index.setdefault(key[2], set())
            if not existing_keys:
                # not currently in the state, simplest case
                new_entry = key, [new_details] + self._empty_parent_info()
            else:
                # present at one or more existing other paths.
                # grab one of them and use it to generate parent
                # relocation/absent entries.
                new_entry = key, [new_details]
                for other_key in existing_keys:
                    # change the record at other to be a pointer to this new
                    # record. The loop looks similar to the change to
                    # relocations when updating an existing record but it's not:
                    # the test for existing kinds is different: this can be
                    # factored out to a helper though.
                    other_block_index, present = self._find_block_index_from_key(other_key)
                    if not present:
                        raise AssertionError('could not find block for %s' % (other_key,))
                    other_entry_index, present = self._find_entry_index(other_key,
                        self._dirblocks[other_block_index][1])
                    if not present:
                        raise AssertionError('could not find entry for %s' % (other_key,))
                    if path_utf8 is None:
                        raise AssertionError('no path')
                    self._dirblocks[other_block_index][1][other_entry_index][1][0] = \
                        ('r', path_utf8, 0, False, '')

                num_present_parents = self._num_present_parents()
                for lookup_index in xrange(1, num_present_parents + 1):
                    # grab any one entry, use it to find the right path.
                    # TODO: optimise this to reduce memory use in highly
                    # fragmented situations by reusing the relocation
                    # pointers.
                    update_block_index, present = \
                        self._find_block_index_from_key(other_key)
                    if not present:
                        raise AssertionError('could not find block for %s' % (other_key,))
                    update_entry_index, present = \
                        self._find_entry_index(other_key, self._dirblocks[update_block_index][1])
                    if not present:
                        raise AssertionError('could not find entry for %s' % (other_key,))
                    update_details = self._dirblocks[update_block_index][1][update_entry_index][1][lookup_index]
                    if update_details[0] in 'ar': # relocated, absent
                        # it's a pointer or absent in lookup_index's tree, use
                        # it as is.
                        new_entry[1].append(update_details)
                    else:
                        # we have the right key, make a pointer to it.
                        pointer_path = osutils.pathjoin(*other_key[0:2])
                        new_entry[1].append(('r', pointer_path, 0, False, ''))
            block.insert(entry_index, new_entry)
            existing_keys.add(key)
        else:
            # Does the new state matter?
            block[entry_index][1][0] = new_details
            # parents cannot be affected by what we do.
            # other occurrences of this id can be found
            # from the id index.

            # tree index consistency: All other paths for this id in this tree
            # index must point to the correct path. We have to loop here because
            # we may have passed entries in the state with this file id already
            # that were absent - where parent entries are - and they need to be
            # converted to relocated.
            if path_utf8 is None:
                raise AssertionError('no path')
            for entry_key in id_index.setdefault(key[2], set()):
                # TODO:PROFILING: It might be faster to just update
                # rather than checking if we need to, and then overwrite
                # the one we are located at.
                if entry_key != key:
                    # this file id is at a different path in one of the
                    # other trees, so put absent pointers there
                    # This is the vertical axis in the matrix, all pointing
                    # to the real path.
                    block_index, present = self._find_block_index_from_key(entry_key)
                    if not present:
                        raise AssertionError('not present: %r', entry_key)
                    entry_index, present = self._find_entry_index(entry_key, self._dirblocks[block_index][1])
                    if not present:
                        raise AssertionError('not present: %r', entry_key)
                    self._dirblocks[block_index][1][entry_index][1][0] = \
                        ('r', path_utf8, 0, False, '')
        # add a containing dirblock if needed.
        if new_details[0] == 'd':
            subdir_key = (osutils.pathjoin(*key[0:2]), '', '')
            block_index, present = self._find_block_index_from_key(subdir_key)
            if not present:
                self._dirblocks.insert(block_index, (subdir_key[0], []))

        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
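
    # Usage sketch (hypothetical values, not from the original source):
    #   state.update_minimal(('', 'foo.c', 'foo-id'), 'f',
    #       fingerprint=sha1, size=12, path_utf8='foo.c')
    # inserts or rewrites the tree-0 row for foo.c, and repoints any other
    # rows carrying 'foo-id' at this path via 'r' (relocation) details.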

    def _validate(self):
        """Check that invariants on the dirblock are correct.

        This can be useful in debugging; it shouldn't be necessary in
        normal code.

        This must be called with a lock held.
        """
        # NOTE: This must always raise AssertionError not just assert,
        # otherwise it may not behave properly under python -O
        #
        # TODO: All entries must have some content that's not 'a' or 'r',
        # otherwise it could just be removed.
        #
        # TODO: All relocations must point directly to a real entry.
        #
        # TODO: No repeated keys.

        from pprint import pformat
        self._read_dirblocks_if_needed()
        if len(self._dirblocks) > 0:
            if not self._dirblocks[0][0] == '':
                raise AssertionError(
                    "dirblocks don't start with root block:\n" + \
                    pformat(self._dirblocks))
        if len(self._dirblocks) > 1:
            if not self._dirblocks[1][0] == '':
                raise AssertionError(
                    "dirblocks missing root directory:\n" + \
                    pformat(self._dirblocks))
        # the dirblocks are sorted by their path components, name, and dir id
        dir_names = [d[0].split('/')
            for d in self._dirblocks[1:]]
        if dir_names != sorted(dir_names):
            raise AssertionError(
                "dir names are not in sorted order:\n" + \
                pformat(self._dirblocks) + \
                "\nkeys:\n" +
                pformat(dir_names))
        for dirblock in self._dirblocks:
            # within each dirblock, the entries are sorted by filename and
            # then by id.
            for entry in dirblock[1]:
                if dirblock[0] != entry[0][0]:
                    raise AssertionError(
                        "entry key for %r"
                        "doesn't match directory name in\n%r" %
                        (entry, pformat(dirblock)))
            if dirblock[1] != sorted(dirblock[1]):
                raise AssertionError(
                    "dirblock for %r is not sorted:\n%s" % \
                    (dirblock[0], pformat(dirblock)))

        def check_valid_parent():
            """Check that the current entry has a valid parent.

            This makes sure that the parent has a record,
            and that the parent isn't marked as "absent" in the
            current tree. (It is invalid to have a non-absent file in an absent
            directory.)
            """
            if entry[0][0:2] == ('', ''):
                # There should be no parent for the root row
                return
            parent_entry = self._get_entry(tree_index, path_utf8=entry[0][0])
            if parent_entry == (None, None):
                raise AssertionError(
                    "no parent entry for: %s in tree %s"
                    % (this_path, tree_index))
            if parent_entry[1][tree_index][0] != 'd':
                raise AssertionError(
                    "Parent entry for %s is not marked as a valid"
                    " directory. %s" % (this_path, parent_entry,))

        # For each file id, for each tree: either
        # the file id is not present at all; all rows with that id in the
        # key have it marked as 'absent'
        # OR the file id is present under exactly one name; any other entries
        # that mention that id point to the correct name.
        #
        # We check this with a dict per tree pointing either to the present
        # name, or None if absent.
        tree_count = self._num_present_parents() + 1
        id_path_maps = [dict() for i in range(tree_count)]
        # Make sure that all renamed entries point to the correct location.
        for entry in self._iter_entries():
            file_id = entry[0][2]
            this_path = osutils.pathjoin(entry[0][0], entry[0][1])
            if len(entry[1]) != tree_count:
                raise AssertionError(
                    "wrong number of entry details for row\n%s" \
                    ",\nexpected %d" % \
                    (pformat(entry), tree_count))
            absent_positions = 0
            for tree_index, tree_state in enumerate(entry[1]):
                this_tree_map = id_path_maps[tree_index]
                minikind = tree_state[0]
                if minikind in 'ar':
                    absent_positions += 1
                # have we seen this id before in this column?
                if file_id in this_tree_map:
                    previous_path, previous_loc = this_tree_map[file_id]
                    # any later mention of this file must be consistent with
                    # what was said before
                    if minikind == 'a':
                        if previous_path is not None:
                            raise AssertionError(
                                "file %s is absent in row %r but also present " \
                                "at %r" % \
                                (file_id, entry, previous_path))
                    elif minikind == 'r':
                        target_location = tree_state[1]
                        if previous_path != target_location:
                            raise AssertionError(
                                "file %s relocation in row %r but also at %r" \
                                % (file_id, entry, previous_path))
                    else:
                        # a file, directory, etc - may have been previously
                        # pointed to by a relocation, which must point here
                        if previous_path != this_path:
                            raise AssertionError(
                                "entry %r inconsistent with previous path %r "
                                "seen at %r" %
                                (entry, previous_path, previous_loc))
                        check_valid_parent()
                else:
                    if minikind == 'a':
                        # absent; should not occur anywhere else
                        this_tree_map[file_id] = None, this_path
                    elif minikind == 'r':
                        # relocation, must occur at expected location
                        this_tree_map[file_id] = tree_state[1], this_path
                    else:
                        this_tree_map[file_id] = this_path, this_path
                        check_valid_parent()
            if absent_positions == tree_count:
                raise AssertionError(
                    "entry %r has no data for any tree." % (entry,))

    def _wipe_state(self):
        """Forget all state information about the dirstate."""
        self._header_state = DirState.NOT_IN_MEMORY
        self._dirblock_state = DirState.NOT_IN_MEMORY
        self._changes_aborted = False
        self._parents = []
        self._ghosts = []
        self._dirblocks = []
        self._id_index = None
        self._packed_stat_index = None
        self._end_of_header = None
        self._cutoff_time = None
        self._split_path_cache = {}

    def lock_read(self):
        """Acquire a read lock on the dirstate."""
        if self._lock_token is not None:
            raise errors.LockContention(self._lock_token)
        # TODO: jam 20070301 Rather than wiping completely, if the blocks are
        #       already in memory, we could read just the header and check for
        #       any modification. If not modified, we can just leave things
        #       alone.
        self._lock_token = lock.ReadLock(self._filename)
        self._lock_state = 'r'
        self._state_file = self._lock_token.f
        self._wipe_state()

    def lock_write(self):
        """Acquire a write lock on the dirstate."""
        if self._lock_token is not None:
            raise errors.LockContention(self._lock_token)
        # TODO: jam 20070301 Rather than wiping completely, if the blocks are
        #       already in memory, we could read just the header and check for
        #       any modification. If not modified, we can just leave things
        #       alone.
        self._lock_token = lock.WriteLock(self._filename)
        self._lock_state = 'w'
        self._state_file = self._lock_token.f
        self._wipe_state()

    def unlock(self):
        """Drop any locks held on the dirstate."""
        if self._lock_token is None:
            raise errors.LockNotHeld(self)
        # TODO: jam 20070301 Rather than wiping completely, if the blocks are
        #       already in memory, we could read just the header and check for
        #       any modification. If not modified, we can just leave things
        #       alone.
        self._state_file = None
        self._lock_state = None
        self._lock_token.unlock()
        self._lock_token = None
        self._split_path_cache = {}

    def _requires_lock(self):
        """Check that a lock is currently held by someone on the dirstate."""
        if not self._lock_token:
            raise errors.ObjectNotLocked(self)
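

# Usage sketch (hypothetical path, not from the original source):
#   state = DirState.on_file('.bzr/checkout/dirstate')
#   state.lock_read()
#   try:
#       parents = state.get_parent_ids()
#   finally:
#       state.unlock()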

def py_update_entry(state, entry, abspath, stat_value,
                 _stat_to_minikind=DirState._stat_to_minikind,
                 _pack_stat=pack_stat):
    """Update the entry based on what is actually on disk.

    This function only calculates the sha if it needs to - if the entry is
    uncachable, or clearly different to the first parent's entry, no sha
    is calculated, and None is returned.

    :param state: The dirstate this entry is in.
    :param entry: This is the dirblock entry for the file in question.
    :param abspath: The path on disk for this file.
    :param stat_value: The stat value done on the path.
    :return: None, or the sha1 hexdigest of the file (40 bytes) or link
        target of a symlink.
    """
    try:
        minikind = _stat_to_minikind[stat_value.st_mode & 0170000]
    except KeyError:
        # Unhandled kind
        return None
    packed_stat = _pack_stat(stat_value)
    (saved_minikind, saved_link_or_sha1, saved_file_size,
     saved_executable, saved_packed_stat) = entry[1][0]

    if minikind == 'd' and saved_minikind == 't':
        minikind = 't'
    if (minikind == saved_minikind
        and packed_stat == saved_packed_stat):
        # The stat hasn't changed since we saved, so we can re-use the
        # saved sha hash.
        if minikind == 'd':
            return None
        # size should also be in packed_stat
        if saved_file_size == stat_value.st_size:
            return saved_link_or_sha1

    # If we have gotten this far, that means that we need to actually
    # process this entry.
    link_or_sha1 = None
    if minikind == 'f':
        executable = state._is_executable(stat_value.st_mode,
                                          saved_executable)
        if state._cutoff_time is None:
            state._sha_cutoff_time()
        if (stat_value.st_mtime < state._cutoff_time
            and stat_value.st_ctime < state._cutoff_time
            and len(entry[1]) > 1
            and entry[1][1][0] != 'a'):
            # Could check for size changes for further optimised
            # avoidance of sha1's. However the most prominent case of
            # over-shaing is during initial add, which this catches.
            link_or_sha1 = state._sha1_file(abspath)
            entry[1][0] = ('f', link_or_sha1, stat_value.st_size,
                           executable, packed_stat)
        else:
            entry[1][0] = ('f', '', stat_value.st_size,
                           executable, DirState.NULLSTAT)
    elif minikind == 'd':
        link_or_sha1 = None
        entry[1][0] = ('d', '', 0, False, packed_stat)
        if saved_minikind != 'd':
            # This changed from something into a directory. Make sure we
            # have a directory block for it. This doesn't happen very
            # often, so this doesn't have to be super fast.
            block_index, entry_index, dir_present, file_present = \
                state._get_block_entry_index(entry[0][0], entry[0][1], 0)
            state._ensure_block(block_index, entry_index,
                                osutils.pathjoin(entry[0][0], entry[0][1]))
    elif minikind == 'l':
        link_or_sha1 = state._read_link(abspath, saved_link_or_sha1)
        if state._cutoff_time is None:
            state._sha_cutoff_time()
        if (stat_value.st_mtime < state._cutoff_time
            and stat_value.st_ctime < state._cutoff_time):
            entry[1][0] = ('l', link_or_sha1, stat_value.st_size,
                           False, packed_stat)
        else:
            entry[1][0] = ('l', '', stat_value.st_size,
                           False, DirState.NULLSTAT)
    state._dirblock_state = DirState.IN_MEMORY_MODIFIED
    return link_or_sha1

update_entry = py_update_entry
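
# Usage sketch (hypothetical values, not from the original source):
#   link_or_sha1 = update_entry(state, entry, abspath, os.lstat(abspath))
# None means the entry was unchanged, uncachable (modified too recently),
# or not a file/symlink; otherwise the sha1 (or link target) is returned
# and recorded in the entry's tree-0 details.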


class ProcessEntryPython(object):

    __slots__ = ["old_dirname_to_file_id", "new_dirname_to_file_id", "uninteresting",
        "last_source_parent", "last_target_parent", "include_unchanged",
        "use_filesystem_for_exec", "utf8_decode", "searched_specific_files",
        "search_specific_files", "state", "source_index", "target_index",
        "want_unversioned", "tree"]

    def __init__(self, include_unchanged, use_filesystem_for_exec,
        search_specific_files, state, source_index, target_index,
        want_unversioned, tree):
        self.old_dirname_to_file_id = {}
        self.new_dirname_to_file_id = {}
        # Just a sentry, so that _process_entry can say that this
        # record is handled, but isn't interesting to process (unchanged)
        self.uninteresting = object()
        # Using a list so that we can access the values and change them in
        # nested scope. Each one is [path, file_id, entry]
        self.last_source_parent = [None, None]
        self.last_target_parent = [None, None]
        self.include_unchanged = include_unchanged
        self.use_filesystem_for_exec = use_filesystem_for_exec
        self.utf8_decode = cache_utf8._utf8_decode
        # for all search_indexes in each path at or under each element of
        # search_specific_files, if the detail is relocated: add the id, and
        # add the relocated path as one to search if it's not searched already.
        # If the detail is not relocated, add the id.
        self.searched_specific_files = set()
        self.search_specific_files = search_specific_files
        self.state = state
        self.source_index = source_index
        self.target_index = target_index
        self.want_unversioned = want_unversioned
        self.tree = tree
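
    # Note (summary added for clarity, not in the original source):
    # _process_entry below returns iter_changes-style tuples:
    # (file_id, (old_path, new_path), content_changed, versioned_pair,
    #  parent_pair, name_pair, kind_pair, executable_pair).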

    def _process_entry(self, entry, path_info, pathjoin=osutils.pathjoin):
        """Compare an entry and real disk to generate delta information.

        :param path_info: top_relpath, basename, kind, lstat, abspath for
            the path of entry. If None, then the path is considered absent.
            (Perhaps we should pass in a concrete entry for this ?)
            Basename is returned as a utf8 string because we expect this
            tuple will be ignored, and don't want to take the time to
            decode.
        :return: None if these don't match,
            a tuple of information about the change, or
            the object 'uninteresting' if these match, but are
            basically identical.
        """
        if self.source_index is None:
            source_details = DirState.NULL_PARENT_DETAILS
        else:
            source_details = entry[1][self.source_index]
        target_details = entry[1][self.target_index]
        target_minikind = target_details[0]
        if path_info is not None and target_minikind in 'fdlt':
            if not (self.target_index == 0):
                raise AssertionError()
            link_or_sha1 = update_entry(self.state, entry,
                abspath=path_info[4], stat_value=path_info[3])
            # The entry may have been modified by update_entry
            target_details = entry[1][self.target_index]
            target_minikind = target_details[0]
        else:
            link_or_sha1 = None
        file_id = entry[0][2]
        source_minikind = source_details[0]
        if source_minikind in 'fdltr' and target_minikind in 'fdlt':
            # claimed content in both: diff
            #   r    | fdlt   |      | add source to search, add id path move and perform
            #        |        |      | diff check on source-target
            #   r    | fdlt   |  a   | dangling file that was present in the basis.
            #        |        |      | ???
            if source_minikind in 'r':
                # add the source to the search path to find any children it
                # has. TODO ? : only add if it is a container ?
                if not osutils.is_inside_any(self.searched_specific_files,
                                             source_details[1]):
                    self.search_specific_files.add(source_details[1])
                # generate the old path; this is needed for stating later
                # as well.
                old_path = source_details[1]
                old_dirname, old_basename = os.path.split(old_path)
                path = pathjoin(entry[0][0], entry[0][1])
                old_entry = self.state._get_entry(self.source_index,
                                                  path_utf8=old_path)
                # update the source details variable to be the real
                # location.
                if old_entry == (None, None):
                    raise errors.CorruptDirstate(self.state._filename,
                        "entry '%s/%s' is considered renamed from %r"
                        " but source does not exist\n"
                        "entry: %s" % (entry[0][0], entry[0][1], old_path, entry))
                source_details = old_entry[1][self.source_index]
                source_minikind = source_details[0]
            else:
                old_dirname = entry[0][0]
                old_basename = entry[0][1]
                old_path = path = None
            if path_info is None:
                # the file is missing on disk, show as removed.
                content_change = True
                target_kind = None
                target_exec = False
            else:
                # source and target are both versioned and disk file is present.
                target_kind = path_info[2]
                if target_kind == 'directory':
                    if path is None:
                        old_path = path = pathjoin(old_dirname, old_basename)
                    self.new_dirname_to_file_id[path] = file_id
                    if source_minikind != 'd':
                        content_change = True
                    else:
                        # directories have no fingerprint
                        content_change = False
                    target_exec = False
                elif target_kind == 'file':
                    if source_minikind != 'f':
                        content_change = True
                    else:
                        # If the size is the same, check the sha:
                        if target_details[2] == source_details[2]:
                            if link_or_sha1 is None:
                                # Stat cache miss:
                                file_obj = file(path_info[4], 'rb')
                                try:
                                    statvalue = os.fstat(file_obj.fileno())
                                    link_or_sha1 = osutils.sha_file(file_obj)
                                finally:
                                    file_obj.close()
                                self.state._observed_sha1(entry, link_or_sha1,
                                                          statvalue)
                            content_change = (link_or_sha1 != source_details[1])
                        else:
                            # Size changed, so must be different
                            content_change = True
                    # Target details is updated at update_entry time
                    if self.use_filesystem_for_exec:
                        # We don't need S_ISREG here, because we are sure
                        # we are dealing with a file.
                        target_exec = bool(stat.S_IEXEC & path_info[3].st_mode)
                    else:
                        target_exec = target_details[3]
                elif target_kind == 'symlink':
                    if source_minikind != 'l':
                        content_change = True
                    else:
                        content_change = (link_or_sha1 != source_details[1])
                    target_exec = False
                elif target_kind == 'tree-reference':
                    if source_minikind != 't':
                        content_change = True
                    else:
                        content_change = False
                    target_exec = False
                else:
                    raise Exception("unknown kind %s" % path_info[2])
3042
if source_minikind == 'd':
3044
old_path = path = pathjoin(old_dirname, old_basename)
3045
self.old_dirname_to_file_id[old_path] = file_id
3046
# parent id is the entry for the path in the target tree
3047
if old_dirname == self.last_source_parent[0]:
3048
source_parent_id = self.last_source_parent[1]
3051
source_parent_id = self.old_dirname_to_file_id[old_dirname]
3053
source_parent_entry = self.state._get_entry(self.source_index,
3054
path_utf8=old_dirname)
3055
source_parent_id = source_parent_entry[0][2]
3056
if source_parent_id == entry[0][2]:
3057
# This is the root, so the parent is None
3058
source_parent_id = None
3060
self.last_source_parent[0] = old_dirname
3061
self.last_source_parent[1] = source_parent_id
3062
            new_dirname = entry[0][0]
            if new_dirname == self.last_target_parent[0]:
                target_parent_id = self.last_target_parent[1]
            else:
                try:
                    target_parent_id = self.new_dirname_to_file_id[new_dirname]
                except KeyError:
                    # TODO: We don't always need to do the lookup, because the
                    #       parent entry will be the same as the source entry.
                    target_parent_entry = self.state._get_entry(self.target_index,
                                                                path_utf8=new_dirname)
                    if target_parent_entry == (None, None):
                        raise AssertionError(
                            "Could not find target parent in wt: %s\nparent of: %s"
                            % (new_dirname, entry))
                    target_parent_id = target_parent_entry[0][2]
                if target_parent_id == entry[0][2]:
                    # This is the root, so the parent is None
                    target_parent_id = None
                else:
                    self.last_target_parent[0] = new_dirname
                    self.last_target_parent[1] = target_parent_id
            source_exec = source_details[3]
            if (self.include_unchanged
                or content_change
                or source_parent_id != target_parent_id
                or old_basename != entry[0][1]
                or source_exec != target_exec
                ):
                if old_path is None:
                    old_path = path = pathjoin(old_dirname, old_basename)
                    old_path_u = self.utf8_decode(old_path)[0]
                    path_u = old_path_u
                else:
                    old_path_u = self.utf8_decode(old_path)[0]
                    if old_path == path:
                        path_u = old_path_u
                    else:
                        path_u = self.utf8_decode(path)[0]
                source_kind = DirState._minikind_to_kind[source_minikind]
                return (entry[0][2],
                       (old_path_u, path_u),
                       content_change,
                       (True, True),
                       (source_parent_id, target_parent_id),
                       (self.utf8_decode(old_basename)[0], self.utf8_decode(entry[0][1])[0]),
                       (source_kind, target_kind),
                       (source_exec, target_exec))
            else:
                return self.uninteresting
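        # Illustrative sketch only (hypothetical ids and paths, not produced
        # by this module): an in-place text edit of 'a/b' would come back
        # from the branch above roughly as:
        #   ('b-id', (u'a/b', u'a/b'), True, (True, True),
        #    ('a-id', 'a-id'), (u'b', u'b'), ('file', 'file'),
        #    (False, False))
        # i.e. (file_id, (old_path, new_path), content_change, versioned,
        # parents, names, kinds, executables).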
        elif source_minikind in 'a' and target_minikind in 'fdlt':
            # looks like a new file
            path = pathjoin(entry[0][0], entry[0][1])
            # parent id is the entry for the path in the target tree
            # TODO: these are the same for an entire directory: cache em.
            parent_id = self.state._get_entry(self.target_index,
                                              path_utf8=entry[0][0])[0][2]
            if parent_id == entry[0][2]:
                parent_id = None
            if path_info is not None:
                # Present on disk:
                if self.use_filesystem_for_exec:
                    # We need S_ISREG here, because we aren't sure if this
                    # is a file or not.
                    target_exec = bool(
                        stat.S_ISREG(path_info[3].st_mode)
                        and stat.S_IEXEC & path_info[3].st_mode)
                else:
                    target_exec = target_details[3]
                return (entry[0][2],
                       (None, self.utf8_decode(path)[0]),
                       True,
                       (False, True),
                       (None, parent_id),
                       (None, self.utf8_decode(entry[0][1])[0]),
                       (None, path_info[2]),
                       (None, target_exec))
            else:
                # It's a missing file, report it as such.
                return (entry[0][2],
                       (None, self.utf8_decode(path)[0]),
                       True,
                       (False, True),
                       (None, parent_id),
                       (None, self.utf8_decode(entry[0][1])[0]),
                       (None, None),
                       (None, False))
        elif source_minikind in 'fdlt' and target_minikind in 'a':
            # unversioned, possibly, or possibly not deleted: we don't care.
            # if it's still on disk, *and* there's no other entry at this
            # path [we don't know this in this routine at the moment -
            # perhaps we should change this] - then it would be an unknown.
            old_path = pathjoin(entry[0][0], entry[0][1])
            # parent id is the entry for the path in the target tree
            parent_id = self.state._get_entry(self.source_index, path_utf8=entry[0][0])[0][2]
            if parent_id == entry[0][2]:
                parent_id = None
            return (entry[0][2],
                   (self.utf8_decode(old_path)[0], None),
                   True,
                   (True, False),
                   (parent_id, None),
                   (self.utf8_decode(entry[0][1])[0], None),
                   (DirState._minikind_to_kind[source_minikind], None),
                   (source_details[3], None))
        elif source_minikind in 'fdlt' and target_minikind in 'r':
            # a rename; could be a true rename, or a rename inherited from
            # a renamed parent. TODO: handle this efficiently. It's not a
            # common case to rename dirs though, so a correct but slow
            # implementation will do.
            if not osutils.is_inside_any(self.searched_specific_files, target_details[1]):
                self.search_specific_files.add(target_details[1])
        elif source_minikind in 'ra' and target_minikind in 'ra':
            # neither of the selected trees contain this file,
            # so skip over it. This is not currently directly tested, but
            # is indirectly via test_too_much.TestCommands.test_conflicts.
            pass
        else:
            raise AssertionError("don't know how to compare "
                "source_minikind=%r, target_minikind=%r"
                % (source_minikind, target_minikind))
        return None
    def iter_changes(self):
        """Iterate over the changes."""
        utf8_decode = cache_utf8._utf8_decode
        _cmp_by_dirs = cmp_by_dirs
        _process_entry = self._process_entry
        uninteresting = self.uninteresting
        search_specific_files = self.search_specific_files
        searched_specific_files = self.searched_specific_files
        splitpath = osutils.splitpath
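        # Note: the local rebindings above are a deliberate CPython
        # micro-optimisation: the hot loop below touches these names many
        # times, and a LOAD_FAST of a local is cheaper than a repeated
        # global or attribute lookup.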
        # compare source_index and target_index at or under each element of search_specific_files.
        # follow the following comparison table. Note that we only want to do diff operations when
        # the target is fdlt because that's when the walkdirs logic will have exposed the pathinfo
        # for the target.
        # cases:
        #
        # Source | Target | disk | action
        #   r    | fdlt   |      | add source to search, add id path move and perform
        #        |        |      | diff check on source-target
        #   r    | fdlt   |  a   | dangling file that was present in the basis.
        #        |        |      | ???
        #   r    |  a     |      | add source to search
        #   r    |  a     |  a   |
        #   r    |  r     |      | this path is present in a non-examined tree, skip.
        #   r    |  r     |  a   | this path is present in a non-examined tree, skip.
        #   a    | fdlt   |      | add new id
        #   a    | fdlt   |  a   | dangling locally added file, skip
        #   a    |  a     |      | not present in either tree, skip
        #   a    |  a     |  a   | not present in any tree, skip
        #   a    |  r     |      | not present in either tree at this path, skip as it
        #        |        |      | may not be selected by the user's list of paths.
        #   a    |  r     |  a   | not present in either tree at this path, skip as it
        #        |        |      | may not be selected by the user's list of paths.
        #  fdlt  | fdlt   |      | content in both: diff them
        #  fdlt  | fdlt   |  a   | deleted locally, but not unversioned - show as deleted ?
        #  fdlt  |  a     |      | unversioned: output deleted id for now
        #  fdlt  |  a     |  a   | unversioned and deleted: output deleted id
        #  fdlt  |  r     |      | relocated in this tree, so add target to search.
        #        |        |      | Don't diff, we will see an r,fd; pair when we reach
        #        |        |      | this id at the other path.
        #  fdlt  |  r     |  a   | relocated in this tree, so add target to search.
        #        |        |      | Don't diff, we will see an r,fd; pair when we reach
        #        |        |      | this id at the other path.

        # TODO: jam 20070516 - Avoid the _get_entry lookup overhead by
        #       keeping a cache of directories that we have seen.
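        # Worked example of the table (hypothetical paths/ids): renaming
        # 'a' to 'b' in the working tree leaves two rows for the one file
        # id. At path 'a' the pair is (fdlt, r): add 'b' to the search and
        # emit nothing. At path 'b' the pair is (r, fdlt): resolve the
        # source through its relocation and diff it against the target, so
        # the rename is reported exactly once.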
        while search_specific_files:
            # TODO: the pending list should be lexically sorted?  the
            # interface doesn't require it.
            current_root = search_specific_files.pop()
            current_root_unicode = current_root.decode('utf8')
            searched_specific_files.add(current_root)
            # process the entries for this containing directory: the rest will be
            # found by their parents recursively.
            root_entries = self.state._entries_for_path(current_root)
            root_abspath = self.tree.abspath(current_root_unicode)
            try:
                root_stat = os.lstat(root_abspath)
            except OSError, e:
                if e.errno == errno.ENOENT:
                    # the path does not exist: let _process_entry know that.
                    root_dir_info = None
                else:
                    # some other random error: hand it up.
                    raise
            else:
                root_dir_info = ('', current_root,
                    osutils.file_kind_from_stat_mode(root_stat.st_mode), root_stat,
                    root_abspath)
                if root_dir_info[2] == 'directory':
                    if self.tree._directory_is_tree_reference(
                        current_root.decode('utf8')):
                        root_dir_info = root_dir_info[:2] + \
                            ('tree-reference',) + root_dir_info[3:]
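            # root_dir_info now has the same 5-tuple shape as the entries
            # produced by osutils._walkdirs_utf8:
            #   (relpath_utf8, basename_utf8, kind, stat_value, abspath)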
            if not root_entries and not root_dir_info:
                # this specified path is not present at all, skip it.
                continue
            path_handled = False
            for entry in root_entries:
                result = _process_entry(entry, root_dir_info)
                if result is not None:
                    path_handled = True
                    if result is not uninteresting:
                        yield result
            if self.want_unversioned and not path_handled and root_dir_info:
                new_executable = bool(
                    stat.S_ISREG(root_dir_info[3].st_mode)
                    and stat.S_IEXEC & root_dir_info[3].st_mode)
                yield (None,
                       (None, current_root_unicode),
                       True,
                       (False, False),
                       (None, None),
                       (None, splitpath(current_root_unicode)[-1]),
                       (None, root_dir_info[2]),
                       (None, new_executable)
                      )
            initial_key = (current_root, '', '')
            block_index, _ = self.state._find_block_index_from_key(initial_key)
            if block_index == 0:
                # we have processed the total root already, but because the
                # initial key matched it we should skip it here.
                block_index += 1
            if root_dir_info and root_dir_info[2] == 'tree-reference':
                current_dir_info = None
            else:
                dir_iterator = osutils._walkdirs_utf8(root_abspath, prefix=current_root)
                try:
                    current_dir_info = dir_iterator.next()
                except OSError, e:
                    # on win32, python2.4 has e.errno == ERROR_DIRECTORY, but
                    # python 2.5 has e.errno == EINVAL,
                    #            and e.winerror == ERROR_DIRECTORY
                    e_winerror = getattr(e, 'winerror', None)
                    win_errors = (ERROR_DIRECTORY, ERROR_PATH_NOT_FOUND)
                    # there may be directories in the inventory even though
                    # this path is not a file on disk: so mark it as end of
                    # iterator
                    if e.errno in (errno.ENOENT, errno.ENOTDIR, errno.EINVAL):
                        current_dir_info = None
                    elif (sys.platform == 'win32'
                          and (e.errno in win_errors
                               or e_winerror in win_errors)):
                        current_dir_info = None
                    else:
                        raise
                else:
                    if current_dir_info[0][0] == '':
                        # remove .bzr from iteration
                        bzr_index = bisect.bisect_left(current_dir_info[1], ('.bzr',))
                        if current_dir_info[1][bzr_index][0] != '.bzr':
                            raise AssertionError()
                        del current_dir_info[1][bzr_index]
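                        # The directory listing is sorted by the utf8 relpath
                        # in each 5-tuple, so bisect_left with the bare key
                        # ('.bzr',) lands on the '.bzr' entry whenever the
                        # walk starts at the tree root.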
            # walk until both the directory listing and the versioned metadata
            # are exhausted.
            if (block_index < len(self.state._dirblocks) and
                osutils.is_inside(current_root, self.state._dirblocks[block_index][0])):
                current_block = self.state._dirblocks[block_index]
            else:
                current_block = None
            while (current_dir_info is not None or
                   current_block is not None):
                if (current_dir_info and current_block
                    and current_dir_info[0][0] != current_block[0]):
                    if _cmp_by_dirs(current_dir_info[0][0], current_block[0]) < 0:
                        # filesystem data refers to paths not covered by the dirblock.
                        # this has two possibilities:
                        # A) it is versioned but empty, so there is no block for it
                        # B) it is not versioned.

                        # if (A) then we need to recurse into it to check for
                        # new unknown files or directories.
                        # if (B) then we should ignore it, because we don't
                        # recurse into unknown directories.
                        path_index = 0
                        while path_index < len(current_dir_info[1]):
                            current_path_info = current_dir_info[1][path_index]
                            if self.want_unversioned:
                                if current_path_info[2] == 'directory':
                                    if self.tree._directory_is_tree_reference(
                                        current_path_info[0].decode('utf8')):
                                        current_path_info = current_path_info[:2] + \
                                            ('tree-reference',) + current_path_info[3:]
                                new_executable = bool(
                                    stat.S_ISREG(current_path_info[3].st_mode)
                                    and stat.S_IEXEC & current_path_info[3].st_mode)
                                yield (None,
                                    (None, utf8_decode(current_path_info[0])[0]),
                                    True,
                                    (False, False),
                                    (None, None),
                                    (None, utf8_decode(current_path_info[1])[0]),
                                    (None, current_path_info[2]),
                                    (None, new_executable))
                            # don't descend into this unversioned path if it is
                            # a dir
                            if current_path_info[2] in ('directory',
                                                        'tree-reference'):
                                del current_dir_info[1][path_index]
                                path_index -= 1
                            path_index += 1

                        # This dir info has been handled, go to the next
                        try:
                            current_dir_info = dir_iterator.next()
                        except StopIteration:
                            current_dir_info = None
                    else:
                        # We have a dirblock entry for this location, but there
                        # is no filesystem path for this. This is most likely
                        # because a directory was removed from the disk.
                        # We don't have to report the missing directory,
                        # because that should have already been handled, but we
                        # need to handle all of the files that are contained
                        # within.
                        for current_entry in current_block[1]:
                            # entry referring to file not present on disk.
                            # advance the entry only, after processing.
                            result = _process_entry(current_entry, None)
                            if result is not None:
                                if result is not uninteresting:
                                    yield result
                        block_index += 1
                        if (block_index < len(self.state._dirblocks) and
                            osutils.is_inside(current_root,
                                              self.state._dirblocks[block_index][0])):
                            current_block = self.state._dirblocks[block_index]
                        else:
                            current_block = None
                    continue
                entry_index = 0
                if current_block and entry_index < len(current_block[1]):
                    current_entry = current_block[1][entry_index]
                else:
                    current_entry = None
                advance_entry = True
                path_index = 0
                if current_dir_info and path_index < len(current_dir_info[1]):
                    current_path_info = current_dir_info[1][path_index]
                    if current_path_info[2] == 'directory':
                        if self.tree._directory_is_tree_reference(
                            current_path_info[0].decode('utf8')):
                            current_path_info = current_path_info[:2] + \
                                ('tree-reference',) + current_path_info[3:]
                else:
                    current_path_info = None
                advance_path = True
                path_handled = False
                while (current_entry is not None or
                       current_path_info is not None):
                    if current_entry is None:
                        # the check for path_handled when the path is advanced
                        # will yield this path if needed.
                        pass
                    elif current_path_info is None:
                        # no path is fine: the per entry code will handle it.
                        result = _process_entry(current_entry, current_path_info)
                        if result is not None:
                            if result is not uninteresting:
                                yield result
                    elif (current_entry[0][1] != current_path_info[1]
                          or current_entry[1][self.target_index][0] in 'ar'):
                        # The current path on disk doesn't match the dirblock
                        # record. Either the dirblock is marked as absent, or
                        # the file on disk is not present at all in the
                        # dirblock. Either way, report about the dirblock
                        # entry, and let other code handle the filesystem one.

                        # Compare the basename for these files to determine
                        # which comes first
                        if current_path_info[1] < current_entry[0][1]:
                            # extra file on disk: pass for now, but only
                            # increment the path, not the entry
                            advance_entry = False
                        else:
                            # entry referring to file not present on disk.
                            # advance the entry only, after processing.
                            result = _process_entry(current_entry, None)
                            if result is not None:
                                if result is not uninteresting:
                                    yield result
                            advance_path = False
                    else:
                        result = _process_entry(current_entry, current_path_info)
                        if result is not None:
                            path_handled = True
                            if result is not uninteresting:
                                yield result
                    if advance_entry and current_entry is not None:
                        entry_index += 1
                        if entry_index < len(current_block[1]):
                            current_entry = current_block[1][entry_index]
                        else:
                            current_entry = None
                    else:
                        advance_entry = True # reset the advance flag
                    if advance_path and current_path_info is not None:
                        if not path_handled:
                            # unversioned in all regards
                            if self.want_unversioned:
                                new_executable = bool(
                                    stat.S_ISREG(current_path_info[3].st_mode)
                                    and stat.S_IEXEC & current_path_info[3].st_mode)
                                try:
                                    relpath_unicode = utf8_decode(current_path_info[0])[0]
                                except UnicodeDecodeError:
                                    raise errors.BadFilenameEncoding(
                                        current_path_info[0], osutils._fs_enc)
                                yield (None,
                                    (None, relpath_unicode),
                                    True,
                                    (False, False),
                                    (None, None),
                                    (None, utf8_decode(current_path_info[1])[0]),
                                    (None, current_path_info[2]),
                                    (None, new_executable))
                            # don't descend into this unversioned path if it is
                            # a dir
                            if current_path_info[2] in ('directory',):
                                del current_dir_info[1][path_index]
                                path_index -= 1
                        # don't descend the disk iterator into any tree
                        # paths.
                        if current_path_info[2] == 'tree-reference':
                            del current_dir_info[1][path_index]
                            path_index -= 1
                        path_index += 1
                        if path_index < len(current_dir_info[1]):
                            current_path_info = current_dir_info[1][path_index]
                            if current_path_info[2] == 'directory':
                                if self.tree._directory_is_tree_reference(
                                    current_path_info[0].decode('utf8')):
                                    current_path_info = current_path_info[:2] + \
                                        ('tree-reference',) + current_path_info[3:]
                        else:
                            current_path_info = None
                        path_handled = False
                    else:
                        advance_path = True # reset the advance flag.
                if current_block is not None:
                    block_index += 1
                    if (block_index < len(self.state._dirblocks) and
                        osutils.is_inside(current_root, self.state._dirblocks[block_index][0])):
                        current_block = self.state._dirblocks[block_index]
                    else:
                        current_block = None
                if current_dir_info is not None:
                    try:
                        current_dir_info = dir_iterator.next()
                    except StopIteration:
                        current_dir_info = None


_process_entry = ProcessEntryPython
# Try to load the compiled form if possible
try:
    from bzrlib._dirstate_helpers_c import (
        _read_dirblocks_c as _read_dirblocks,
        bisect_dirblock_c as bisect_dirblock,
        _bisect_path_left_c as _bisect_path_left,
        _bisect_path_right_c as _bisect_path_right,
        cmp_by_dirs_c as cmp_by_dirs,
        ProcessEntryC as _process_entry,
        update_entry as update_entry,
        )
except ImportError:
    from bzrlib._dirstate_helpers_py import (
        _read_dirblocks_py as _read_dirblocks,
        bisect_dirblock_py as bisect_dirblock,
        _bisect_path_left_py as _bisect_path_left,
        _bisect_path_right_py as _bisect_path_right,
        cmp_by_dirs_py as cmp_by_dirs,
        )
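
# Illustrative note (not part of the original source): whichever
# implementation is imported above, cmp_by_dirs compares paths
# section-by-section rather than as raw strings, matching the dirblock
# sort order, e.g. (assuming the documented semantics):
#   cmp_by_dirs('a/b', 'a-b')  # negative: 'a/b' sorts first by sections,
#                              # though 'a-b' < 'a/b' as plain byte strings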