# Copyright (C) 2006, 2007 Canonical Ltd
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
"""DirState objects record the state of a directory and its bzr metadata.

Pseudo EBNF grammar for the state file. Fields are separated by NULLs, and
lines by NL. The field delimiters are omitted in the grammar, line delimiters
are not - this is done for clarity of reading. All string data is in utf8.

MINIKIND = "f" | "d" | "l" | "a" | "r" | "t";
WHOLE_NUMBER = {digit}, digit;
REVISION_ID = a non-empty utf8 string;

dirstate format = header line, full checksum, row count, parent details,
    ghost_details, entries;
header line = "#bazaar dirstate flat format 3", NL;
full checksum = "crc32: ", ["-"], WHOLE_NUMBER, NL;
row count = "num_entries: ", WHOLE_NUMBER, NL;
parent_details = WHOLE_NUMBER, {REVISION_ID}, NL;
ghost_details = WHOLE_NUMBER, {REVISION_ID}, NL;

entry = entry_key, current_entry_details, {parent_entry_details};
entry_key = dirname, basename, fileid;
current_entry_details = common_entry_details, working_entry_details;
parent_entry_details = common_entry_details, history_entry_details;
common_entry_details = MINIKIND, fingerprint, size, executable;
working_entry_details = packed_stat;
history_entry_details = REVISION_ID;

fingerprint = a nonempty utf8 sequence with meaning defined by minikind.
Given this definition, the following is useful to know:
entry (aka row) - all the data for a given key.
entry[0]: The key (dirname, basename, fileid)
entry[0][0]: dirname
entry[0][1]: basename
entry[0][2]: fileid
entry[1]: The tree(s) data for this path and id combination.
entry[1][0]: The current tree
entry[1][1]: The second tree

For an entry for a tree, we have (using tree 0 - current tree) to demonstrate:
entry[1][0][0]: minikind
entry[1][0][1]: fingerprint
entry[1][0][2]: size
entry[1][0][3]: executable
entry[1][0][4]: packed_stat
OR (for a parent tree):
entry[1][1][4]: revision_id

There may be multiple rows at the root, one per id present in the root, so the
in memory root row is now:
self._dirblocks[0] -> ('', [entry ...]),
and the entries in there are
entries[0][2]: file_id
entries[1][0]: The tree data for the current tree for this fileid at /
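
As an illustration (hypothetical values, not taken from a real tree), a file
'foo' with file id 'foo-id' in the root of a dirstate that has one parent tree
could be held in memory as:

    ('', 'foo', 'foo-id'),                       # entry[0], the key
    [('f', sha1, size, False, packed_stat),      # entry[1][0], current tree details
     ('f', sha1, size, False, 'parent-rev-id')]  # entry[1][1], details in the parent tree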
'r' is a relocated entry: This path is not present in this tree with this id,
but the id can be found at another location. The fingerprint is used to
point to the target location.
'a' is an absent entry: In that tree the id is not present at this path.
'd' is a directory entry: This path in this tree is a directory with the
current file id. There is no fingerprint for directories.
'f' is a file entry: As for directory, but it's a file. The fingerprint is the
sha1 of the file's content.
'l' is a symlink entry: As for directory, but a symlink. The fingerprint is the
target of the symlink.
't' is a reference to a nested subtree; the fingerprint is the referenced
revision id.
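
As an example of relocation (hypothetical ids and paths): if 'foo-id' lives at
'a/foo' in the current tree but lived at 'b/foo' in the basis, the row keyed
('b', 'foo', 'foo-id') carries something like ('r', 'a/foo', 0, False, '') in
its current-tree column; the fingerprint 'a/foo' points readers at the row
that holds the real data for this id.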
The entries on disk and in memory are ordered according to the following keys:

    directory, as a list of components
    filename
    file-id

--- Format 1 had the following different definition: ---
rows = dirname, NULL, basename, NULL, MINIKIND, NULL, fileid_utf8, NULL,
    WHOLE NUMBER (* size *), NULL, packed stat, NULL, sha1|symlink target,
PARENT ROW = NULL, revision_utf8, NULL, MINIKIND, NULL, dirname, NULL,
    basename, NULL, WHOLE NUMBER (* size *), NULL, "y" | "n", NULL,

PARENT ROWs are emitted for every parent that is not in the ghosts details
line. That is, if the parents are foo, bar, baz, and the ghosts are bar, then
each row will have a PARENT ROW for foo and baz, but not for bar.

In any tree, a kind of 'moved' indicates that the fingerprint field
(which we treat as opaque data specific to the 'kind' anyway) has the
details for the id of this row in that tree.

I'm strongly tempted to add an id->path index as well, but I think that
where we need an id->path mapping we also usually read the whole file, so
I'm going to skip that for the moment, as we have the ability to locate
via bisect any path in any tree, and if we lookup things by path, we can
accumulate an id->path mapping as we go, which will tend to match what we
looked for.

I plan to implement this asap, so please speak up now to alter/tweak the
design - and once we stabilise on this, I'll update the wiki page for it.

The rationale for all this is that we want fast operations for the
common case (diff/status/commit/merge on all files) and extremely fast
operations for the less common, but still frequent, case of status/diff/commit
on specific files. Operations on specific files involve a scan for all
the children of a path, *in every involved tree*, which the current
format did not accommodate.
1) Fast end to end use for bzr's top 5 use cases. (commit/diff/status/merge/???)
2) fall back to the current object model as needed.
3) scale usably to the largest trees known today - say 50K entries. (mozilla
   is an example of this)

Eventually reuse dirstate objects across locks IFF the dirstate file has not
been modified, but will require that we flush/ignore cached stat-hit data
because we won't want to restat all files on disk just because a lock was
acquired, yet we cannot trust the data after the previous lock was released.

Memory representation:
 vector of all directories, and vector of the children ?
   root_entry = (direntry for root, [parent_direntries_for_root]),
   ('', ['data for achild', 'data for bchild', 'data for cchild'])
   ('dir', ['achild', 'cchild', 'echild'])
   - single bisect to find N subtrees from a path spec
   - in-order for serialisation - this is 'dirblock' grouping.
   - insertion of a file '/a' affects only the '/' child-vector, that is, to
     insert 10K elements from scratch does not generate O(N^2) memmoves of a
     single vector, rather each individual child-vector, which tends to be
     limited to a manageable number. Will scale badly on trees with 10K entries
     in a single directory. Compare with Inventory.InventoryDirectory which has
     a dictionary for the children. No bisect capability, can only probe for
     exact matches, or grab all elements and sort.
   - What's the risk of error here? Once we have the base format being processed
     we should have a net win regardless of optimality. So we are going to
     go with what seems reasonable.
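
As a concrete (illustrative) shape for a tree containing 'a' and 'dir/b', the
in-memory form would be roughly:

    self._dirblocks = [
        ('', [row for the root directory itself]),
        ('', [row for 'a', row for 'dir']),
        ('dir', [row for 'dir/b']),
    ]

where the two '' blocks are the root block and the contents-of-root block that
_split_root_dirblock_into_contents maintains.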
Maybe we should do a test profile of the core structure - 10K simulated
searches/lookups/etc?

Objects for each row?
The lifetime of Dirstate objects is currently per lock, but see above for
possible extensions. The lifetime of a row from a dirstate is expected to be
very short in the optimistic case: which we are optimising for. For instance,
subtree status will determine from analysis of the disk data what rows need to
be examined at all, and will be able to determine from a single row whether
that file has altered or not, so we are aiming to process tens of thousands of
entries each second within the dirstate context, before exposing anything to
the larger codebase. This suggests we want the time for a single file
comparison to be < 0.1 milliseconds. That would give us 10000 paths per second
processed, and to scale to 100 thousand we'll need another order of magnitude
to do that. Now, as the lifetime for all unchanged entries is the time to
parse, stat the file on disk, and then immediately discard, the overhead of
object creation becomes a significant cost.

Figures: Creating a tuple from 3 elements was profiled at 0.0625
microseconds, whereas creating an object which is subclassed from tuple was
0.500 microseconds, and creating an object with 3 elements and slots was 3
microseconds. 0.1 milliseconds is 100 microseconds, and ideally we'll get
down to 10 microseconds for the total processing - having 33% of that be object
creation is a huge overhead. There is a potential cost in using tuples within
each row which is that the conditional code to do comparisons may be slower
than method invocation, but method invocation is known to be slow due to stack
frame creation, so avoiding methods in these tight inner loops is unfortunately
desirable. We can consider a pyrex version of this with objects in future if
needed.
"""
# NB: the original import block is only partially preserved here; the names
# below are reconstructed from what the surviving code actually uses.
import binascii
import bisect
import os
import struct
import sys
import time

from bzrlib import cache_utf8, debug, errors, osutils, trace
from stat import S_IEXEC


def pack_stat(st, _encode=binascii.b2a_base64, _pack=struct.pack):
    """Convert stat values into a packed representation."""
    # jam 20060614 it isn't really worth removing more entries if we
    # are going to leave it in packed form.
    # With only st_mtime and st_mode filesize is 5.5M and read time is 275ms
    # With all entries, filesize is 5.9M and read time is maybe 280ms
    # well within the noise margin

    # base64 encoding always adds a final newline, so strip it off
    # The current version
    return _encode(_pack('>LLLLLL'
        , st.st_size, int(st.st_mtime), int(st.st_ctime)
        , st.st_dev, st.st_ino & 0xFFFFFFFF, st.st_mode))[:-1]
    # This is 0.060s / 1.520s faster by not encoding as much information
    # return _encode(_pack('>LL', int(st.st_mtime), st.st_mode))[:-1]
    # This is not strictly faster than _encode(_pack())[:-1]
    # return '%X.%X.%X.%X.%X.%X' % (
    #     st.st_size, int(st.st_mtime), int(st.st_ctime),
    #     st.st_dev, st.st_ino, st.st_mode)
    # Similar to the _encode(_pack('>LL'))
    # return '%X.%X' % (int(st.st_mtime), st.st_mode)
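
# Illustrative usage (a sketch, not part of the module's API surface): the
# packed form is only ever compared for equality against a previously stored
# value, so the exact encoding matters less than being stable and compact.
#
#   st = os.lstat('some-file')          # hypothetical path
#   packed = pack_stat(st)              # a short base64 string
#   unchanged = (packed == previously_saved_packed_stat)
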
class DirState(object):
    """Record directory and metadata state for fast access.

    A dirstate is a specialised data structure for managing local working
    tree state information. It's not yet well defined whether it is platform
    specific, and if it is how we detect/parameterise that.

    Dirstates use the usual lock_write, lock_read and unlock mechanisms.
    Unlike most bzr disk formats, DirStates must be locked for reading, using
    lock_read. (This is an os file lock internally.) This is necessary
    because the file can be rewritten in place.

    DirStates must be explicitly written with save() to commit changes; just
    unlocking them does not write the changes to disk.
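
    A minimal usage sketch (illustrative; it relies only on behaviour
    documented in this class: initialize() returns a write-locked object, and
    save() must be called before unlock() for changes to persist):

        state = DirState.initialize('/path/to/dirstate')  # hypothetical path
        try:
            state.save()    # persist; unlocking alone would discard changes
        finally:
            state.unlock()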
    _kind_to_minikind = {
            'tree-reference': 't',
    _minikind_to_kind = {
            't': 'tree-reference',
    _stat_to_minikind = {
    _to_yesno = {True:'y', False: 'n'} # TODO profile the performance gain
        # of using int conversion rather than a dict here. AND BLAME ANDREW IF

    # TODO: jam 20070221 Figure out what to do if we have a record that exceeds
    #       the BISECT_PAGE_SIZE. For now, we just have to make it large enough
    #       that we are sure a single record will always fit.
    BISECT_PAGE_SIZE = 4096

    IN_MEMORY_UNMODIFIED = 1
    IN_MEMORY_MODIFIED = 2

    # A pack_stat (the x's) that is just noise and will never match the output
    NULL_PARENT_DETAILS = ('a', '', 0, False, '')

    HEADER_FORMAT_2 = '#bazaar dirstate flat format 2\n'
    HEADER_FORMAT_3 = '#bazaar dirstate flat format 3\n'
def __init__(self, path):
307
"""Create a DirState object.
309
:param path: The path at which the dirstate file on disk should live.
        """
        # _header_state and _dirblock_state represent the current state
        # of the dirstate metadata and the per-row data respectively.
        # NOT_IN_MEMORY indicates that no data is in memory
        # IN_MEMORY_UNMODIFIED indicates that what we have in memory
        # is the same as is on disk
        # IN_MEMORY_MODIFIED indicates that we have a modified version
        # of what is on disk.
        # In future we will add more granularity, for instance _dirblock_state
        # will probably support partially-in-memory as a separate variable,
        # allowing for partially-in-memory unmodified and partially-in-memory
        # modified states.
self._header_state = DirState.NOT_IN_MEMORY
323
self._dirblock_state = DirState.NOT_IN_MEMORY
327
self._state_file = None
328
self._filename = path
329
self._lock_token = None
330
self._lock_state = None
331
self._id_index = None
332
self._end_of_header = None
333
self._cutoff_time = None
334
self._split_path_cache = {}
335
self._bisect_page_size = DirState.BISECT_PAGE_SIZE
336
if 'hashcache' in debug.debug_flags:
337
self._sha1_file = self._sha1_file_and_mutter
339
self._sha1_file = osutils.sha_file_by_name
340
# These two attributes provide a simple cache for lookups into the
341
# dirstate in-memory vectors. By probing respectively for the last
342
# block, and for the next entry, we save nearly 2 bisections per path
344
self._last_block_index = None
345
self._last_entry_index = None
349
(self.__class__.__name__, self._filename)
    def add(self, path, file_id, kind, stat, fingerprint):
        """Add a path to be tracked.

        :param path: The path within the dirstate - '' is the root, 'foo' is the
            path foo within the root, 'foo/bar' is the path bar within foo
            within the root.
        :param file_id: The file id of the path being added.
        :param kind: The kind of the path, as a string like 'file',
            'directory', etc.
        :param stat: The output of os.lstat for the path.
        :param fingerprint: The sha value of the file,
            or the target of a symlink,
            or the referenced revision id for tree-references,
            or '' for directories.
        """
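        # Illustrative call (a sketch; the path, id and sha1 are hypothetical):
        #   state.add('dir/file.txt', 'file-id-1', 'file',
        #             os.lstat('dir/file.txt'), sha1_of_file)
        # 'dir' must already be versioned, otherwise NotVersionedError is
        # raised further down.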
        # find the block it's in.
        # find the location in the block.
        # check it's not there.
#------- copied from inventory.ensure_normalized_name - keep synced.
372
# --- normalized_filename wants a unicode basename only, so get one.
373
dirname, basename = osutils.split(path)
        # we don't import normalized_filename directly because we want to be
        # able to change the implementation at runtime for tests.
norm_name, can_access = osutils.normalized_filename(basename)
377
if norm_name != basename:
381
raise errors.InvalidNormalization(path)
382
# you should never have files called . or ..; just add the directory
383
# in the parent, or according to the special treatment for the root
384
if basename == '.' or basename == '..':
385
raise errors.InvalidEntryName(path)
386
# now that we've normalised, we need the correct utf8 path and
387
# dirname and basename elements. This single encode and split should be
388
# faster than three separate encodes.
389
utf8path = (dirname + '/' + basename).strip('/').encode('utf8')
390
dirname, basename = osutils.split(utf8path)
391
assert file_id.__class__ == str, \
392
"must be a utf8 file_id not %s" % (type(file_id))
393
# Make sure the file_id does not exist in this tree
394
file_id_entry = self._get_entry(0, fileid_utf8=file_id)
395
if file_id_entry != (None, None):
396
path = osutils.pathjoin(file_id_entry[0][0], file_id_entry[0][1])
397
kind = DirState._minikind_to_kind[file_id_entry[1][0][0]]
398
info = '%s:%s' % (kind, path)
399
raise errors.DuplicateFileId(file_id, info)
400
first_key = (dirname, basename, '')
401
block_index, present = self._find_block_index_from_key(first_key)
403
# check the path is not in the tree
404
block = self._dirblocks[block_index][1]
405
entry_index, _ = self._find_entry_index(first_key, block)
406
while (entry_index < len(block) and
407
block[entry_index][0][0:2] == first_key[0:2]):
408
if block[entry_index][1][0][0] not in 'ar':
409
# this path is in the dirstate in the current tree.
410
raise Exception, "adding already added path!"
413
# The block where we want to put the file is not present. But it
414
# might be because the directory was empty, or not loaded yet. Look
415
# for a parent entry, if not found, raise NotVersionedError
416
parent_dir, parent_base = osutils.split(dirname)
417
parent_block_idx, parent_entry_idx, _, parent_present = \
418
self._get_block_entry_index(parent_dir, parent_base, 0)
419
if not parent_present:
420
raise errors.NotVersionedError(path, str(self))
421
self._ensure_block(parent_block_idx, parent_entry_idx, dirname)
422
block = self._dirblocks[block_index][1]
423
entry_key = (dirname, basename, file_id)
426
packed_stat = DirState.NULLSTAT
429
packed_stat = pack_stat(stat)
430
parent_info = self._empty_parent_info()
431
minikind = DirState._kind_to_minikind[kind]
433
entry_data = entry_key, [
434
(minikind, fingerprint, size, False, packed_stat),
436
elif kind == 'directory':
437
entry_data = entry_key, [
438
(minikind, '', 0, False, packed_stat),
440
elif kind == 'symlink':
441
entry_data = entry_key, [
442
(minikind, fingerprint, size, False, packed_stat),
444
elif kind == 'tree-reference':
445
entry_data = entry_key, [
446
(minikind, fingerprint, 0, False, packed_stat),
449
raise errors.BzrError('unknown kind %r' % kind)
450
entry_index, present = self._find_entry_index(entry_key, block)
452
block.insert(entry_index, entry_data)
454
assert block[entry_index][1][0][0] == 'a', " %r(%r) already added" % (basename, file_id)
455
block[entry_index][1][0] = entry_data[1][0]
457
if kind == 'directory':
458
# insert a new dirblock
459
self._ensure_block(block_index, entry_index, utf8path)
460
self._dirblock_state = DirState.IN_MEMORY_MODIFIED
462
self._id_index.setdefault(entry_key[2], set()).add(entry_key)
464
def _bisect(self, paths):
465
"""Bisect through the disk structure for specific rows.
467
:param paths: A list of paths to find
468
:return: A dict mapping path => entries for found entries. Missing
469
entries will not be in the map.
470
The list is not sorted, and entries will be populated
471
based on when they were read.
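
        Example (illustrative): _bisect(['a/foo', 'not-there']) could return
        {'a/foo': [entry for 'a/foo']}, with 'not-there' simply absent from
        the result.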
473
self._requires_lock()
474
# We need the file pointer to be right after the initial header block
475
self._read_header_if_needed()
476
# If _dirblock_state was in memory, we should just return info from
477
# there, this function is only meant to handle when we want to read
479
assert self._dirblock_state == DirState.NOT_IN_MEMORY
481
# The disk representation is generally info + '\0\n\0' at the end. But
482
# for bisecting, it is easier to treat this as '\0' + info + '\0\n'
483
# Because it means we can sync on the '\n'
484
state_file = self._state_file
485
file_size = os.fstat(state_file.fileno()).st_size
        # We end up with 2 extra fields, we should have a trailing '\n' to
        # ensure that we read the whole record, and we should have a precursor
        # '' which ensures that we start after the previous '\n'
entry_field_count = self._fields_per_entry() + 1
491
low = self._end_of_header
492
high = file_size - 1 # Ignore the final '\0'
493
# Map from (dir, name) => entry
496
# Avoid infinite seeking
497
max_count = 30*len(paths)
499
# pending is a list of places to look.
500
# each entry is a tuple of low, high, dir_names
501
# low -> the first byte offset to read (inclusive)
502
# high -> the last byte offset (inclusive)
503
# dir_names -> The list of (dir, name) pairs that should be found in
504
# the [low, high] range
505
pending = [(low, high, paths)]
507
page_size = self._bisect_page_size
509
fields_to_entry = self._get_fields_to_entry()
512
low, high, cur_files = pending.pop()
514
if not cur_files or low >= high:
519
if count > max_count:
520
raise errors.BzrError('Too many seeks, most likely a bug.')
522
mid = max(low, (low+high-page_size)/2)
525
# limit the read size, so we don't end up reading data that we have
527
read_size = min(page_size, (high-mid)+1)
528
block = state_file.read(read_size)
531
entries = block.split('\n')
534
# We didn't find a '\n', so we cannot have found any records.
535
# So put this range back and try again. But we know we have to
536
# increase the page size, because a single read did not contain
537
# a record break (so records must be larger than page_size)
539
pending.append((low, high, cur_files))
542
# Check the first and last entries, in case they are partial, or if
543
# we don't care about the rest of this page
545
first_fields = entries[0].split('\0')
546
if len(first_fields) < entry_field_count:
547
# We didn't get the complete first entry
548
# so move start, and grab the next, which
549
# should be a full entry
550
start += len(entries[0])+1
551
first_fields = entries[1].split('\0')
554
if len(first_fields) <= 2:
555
# We didn't even get a filename here... what do we do?
556
# Try a large page size and repeat this query
558
pending.append((low, high, cur_files))
561
# Find what entries we are looking for, which occur before and
562
# after this first record.
565
first_path = first_fields[1] + '/' + first_fields[2]
567
first_path = first_fields[2]
568
first_loc = _bisect_path_left(cur_files, first_path)
570
# These exist before the current location
571
pre = cur_files[:first_loc]
572
# These occur after the current location, which may be in the
573
# data we read, or might be after the last entry
574
post = cur_files[first_loc:]
576
if post and len(first_fields) >= entry_field_count:
577
# We have files after the first entry
579
# Parse the last entry
580
last_entry_num = len(entries)-1
581
last_fields = entries[last_entry_num].split('\0')
582
if len(last_fields) < entry_field_count:
583
# The very last hunk was not complete,
584
# read the previous hunk
585
after = mid + len(block) - len(entries[-1])
587
last_fields = entries[last_entry_num].split('\0')
589
after = mid + len(block)
592
last_path = last_fields[1] + '/' + last_fields[2]
594
last_path = last_fields[2]
595
last_loc = _bisect_path_right(post, last_path)
597
middle_files = post[:last_loc]
598
post = post[last_loc:]
601
# We have files that should occur in this block
602
# (>= first, <= last)
603
# Either we will find them here, or we can mark them as
606
if middle_files[0] == first_path:
607
# We might need to go before this location
608
pre.append(first_path)
609
if middle_files[-1] == last_path:
610
post.insert(0, last_path)
612
# Find out what paths we have
613
paths = {first_path:[first_fields]}
614
# last_path might == first_path so we need to be
615
# careful if we should append rather than overwrite
616
if last_entry_num != first_entry_num:
617
paths.setdefault(last_path, []).append(last_fields)
618
for num in xrange(first_entry_num+1, last_entry_num):
619
# TODO: jam 20070223 We are already splitting here, so
620
# shouldn't we just split the whole thing rather
621
# than doing the split again in add_one_record?
622
fields = entries[num].split('\0')
624
path = fields[1] + '/' + fields[2]
627
paths.setdefault(path, []).append(fields)
629
for path in middle_files:
630
for fields in paths.get(path, []):
631
# offset by 1 because of the opening '\0'
632
# consider changing fields_to_entry to avoid the
634
entry = fields_to_entry(fields[1:])
635
found.setdefault(path, []).append(entry)
637
# Now we have split up everything into pre, middle, and post, and
638
# we have handled everything that fell in 'middle'.
639
# We add 'post' first, so that we prefer to seek towards the
640
# beginning, so that we will tend to go as early as we need, and
641
# then only seek forward after that.
643
pending.append((after, high, post))
645
pending.append((low, start-1, pre))
647
# Consider that we may want to return the directory entries in sorted
648
# order. For now, we just return them in whatever order we found them,
649
# and leave it up to the caller if they care if it is ordered or not.
652
def _bisect_dirblocks(self, dir_list):
653
"""Bisect through the disk structure to find entries in given dirs.
655
_bisect_dirblocks is meant to find the contents of directories, which
656
differs from _bisect, which only finds individual entries.
658
:param dir_list: A sorted list of directory names ['', 'dir', 'foo'].
659
:return: A map from dir => entries_for_dir
661
# TODO: jam 20070223 A lot of the bisecting logic could be shared
662
# between this and _bisect. It would require parameterizing the
663
# inner loop with a function, though. We should evaluate the
664
# performance difference.
665
self._requires_lock()
666
# We need the file pointer to be right after the initial header block
667
self._read_header_if_needed()
668
# If _dirblock_state was in memory, we should just return info from
669
# there, this function is only meant to handle when we want to read
671
assert self._dirblock_state == DirState.NOT_IN_MEMORY
673
# The disk representation is generally info + '\0\n\0' at the end. But
674
# for bisecting, it is easier to treat this as '\0' + info + '\0\n'
675
# Because it means we can sync on the '\n'
676
state_file = self._state_file
677
file_size = os.fstat(state_file.fileno()).st_size
        # We end up with 2 extra fields, we should have a trailing '\n' to
        # ensure that we read the whole record, and we should have a precursor
        # '' which ensures that we start after the previous '\n'
entry_field_count = self._fields_per_entry() + 1
683
low = self._end_of_header
684
high = file_size - 1 # Ignore the final '\0'
685
# Map from dir => entry
688
# Avoid infinite seeking
689
max_count = 30*len(dir_list)
691
# pending is a list of places to look.
692
# each entry is a tuple of low, high, dir_names
693
# low -> the first byte offset to read (inclusive)
694
# high -> the last byte offset (inclusive)
695
# dirs -> The list of directories that should be found in
696
# the [low, high] range
697
pending = [(low, high, dir_list)]
699
page_size = self._bisect_page_size
701
fields_to_entry = self._get_fields_to_entry()
704
low, high, cur_dirs = pending.pop()
706
if not cur_dirs or low >= high:
711
if count > max_count:
712
raise errors.BzrError('Too many seeks, most likely a bug.')
714
mid = max(low, (low+high-page_size)/2)
717
# limit the read size, so we don't end up reading data that we have
719
read_size = min(page_size, (high-mid)+1)
720
block = state_file.read(read_size)
723
entries = block.split('\n')
726
# We didn't find a '\n', so we cannot have found any records.
727
# So put this range back and try again. But we know we have to
728
# increase the page size, because a single read did not contain
729
# a record break (so records must be larger than page_size)
731
pending.append((low, high, cur_dirs))
734
# Check the first and last entries, in case they are partial, or if
735
# we don't care about the rest of this page
737
first_fields = entries[0].split('\0')
738
if len(first_fields) < entry_field_count:
739
# We didn't get the complete first entry
740
# so move start, and grab the next, which
741
# should be a full entry
742
start += len(entries[0])+1
743
first_fields = entries[1].split('\0')
746
if len(first_fields) <= 1:
747
# We didn't even get a dirname here... what do we do?
748
# Try a large page size and repeat this query
750
pending.append((low, high, cur_dirs))
753
# Find what entries we are looking for, which occur before and
754
# after this first record.
756
first_dir = first_fields[1]
757
first_loc = bisect.bisect_left(cur_dirs, first_dir)
759
# These exist before the current location
760
pre = cur_dirs[:first_loc]
761
# These occur after the current location, which may be in the
762
# data we read, or might be after the last entry
763
post = cur_dirs[first_loc:]
765
if post and len(first_fields) >= entry_field_count:
766
# We have records to look at after the first entry
768
# Parse the last entry
769
last_entry_num = len(entries)-1
770
last_fields = entries[last_entry_num].split('\0')
771
if len(last_fields) < entry_field_count:
772
# The very last hunk was not complete,
773
# read the previous hunk
774
after = mid + len(block) - len(entries[-1])
776
last_fields = entries[last_entry_num].split('\0')
778
after = mid + len(block)
780
last_dir = last_fields[1]
781
last_loc = bisect.bisect_right(post, last_dir)
783
middle_files = post[:last_loc]
784
post = post[last_loc:]
787
# We have files that should occur in this block
788
# (>= first, <= last)
789
# Either we will find them here, or we can mark them as
792
if middle_files[0] == first_dir:
793
# We might need to go before this location
794
pre.append(first_dir)
795
if middle_files[-1] == last_dir:
796
post.insert(0, last_dir)
798
# Find out what paths we have
799
paths = {first_dir:[first_fields]}
800
# last_dir might == first_dir so we need to be
801
# careful if we should append rather than overwrite
802
if last_entry_num != first_entry_num:
803
paths.setdefault(last_dir, []).append(last_fields)
804
for num in xrange(first_entry_num+1, last_entry_num):
805
# TODO: jam 20070223 We are already splitting here, so
806
# shouldn't we just split the whole thing rather
807
# than doing the split again in add_one_record?
808
fields = entries[num].split('\0')
809
paths.setdefault(fields[1], []).append(fields)
811
for cur_dir in middle_files:
812
for fields in paths.get(cur_dir, []):
813
# offset by 1 because of the opening '\0'
814
# consider changing fields_to_entry to avoid the
816
entry = fields_to_entry(fields[1:])
817
found.setdefault(cur_dir, []).append(entry)
819
# Now we have split up everything into pre, middle, and post, and
820
# we have handled everything that fell in 'middle'.
821
# We add 'post' first, so that we prefer to seek towards the
822
# beginning, so that we will tend to go as early as we need, and
823
# then only seek forward after that.
825
pending.append((after, high, post))
827
pending.append((low, start-1, pre))
831
def _bisect_recursive(self, paths):
832
"""Bisect for entries for all paths and their children.
834
This will use bisect to find all records for the supplied paths. It
835
will then continue to bisect for any records which are marked as
836
directories. (and renames?)
838
:param paths: A sorted list of (dir, name) pairs
839
eg: [('', 'a'), ('', 'f'), ('a/b', 'c')]
840
:return: A dictionary mapping (dir, name, file_id) => [tree_info]
842
# Map from (dir, name, file_id) => [tree_info]
845
found_dir_names = set()
847
# Directories that have been read
848
processed_dirs = set()
849
# Get the ball rolling with the first bisect for all entries.
850
newly_found = self._bisect(paths)
853
# Directories that need to be read
855
paths_to_search = set()
856
for entry_list in newly_found.itervalues():
857
for dir_name_id, trees_info in entry_list:
858
found[dir_name_id] = trees_info
859
found_dir_names.add(dir_name_id[:2])
861
for tree_info in trees_info:
862
minikind = tree_info[0]
865
# We already processed this one as a directory,
866
# we don't need to do the extra work again.
868
subdir, name, file_id = dir_name_id
869
path = osutils.pathjoin(subdir, name)
871
if path not in processed_dirs:
872
pending_dirs.add(path)
873
elif minikind == 'r':
874
# Rename, we need to directly search the target
875
# which is contained in the fingerprint column
876
dir_name = osutils.split(tree_info[1])
877
if dir_name[0] in pending_dirs:
878
# This entry will be found in the dir search
880
if dir_name not in found_dir_names:
881
paths_to_search.add(tree_info[1])
882
# Now we have a list of paths to look for directly, and
883
# directory blocks that need to be read.
884
# newly_found is mixing the keys between (dir, name) and path
885
# entries, but that is okay, because we only really care about the
887
newly_found = self._bisect(sorted(paths_to_search))
888
newly_found.update(self._bisect_dirblocks(sorted(pending_dirs)))
889
processed_dirs.update(pending_dirs)
892
def _discard_merge_parents(self):
        """Discard any parent trees beyond the first.
Note that if this fails the dirstate is corrupted.
897
After this function returns the dirstate contains 2 trees, neither of
900
self._read_header_if_needed()
901
parents = self.get_parent_ids()
904
# only require all dirblocks if we are doing a full-pass removal.
905
self._read_dirblocks_if_needed()
906
dead_patterns = set([('a', 'r'), ('a', 'a'), ('r', 'r'), ('r', 'a')])
907
def iter_entries_removable():
908
for block in self._dirblocks:
909
deleted_positions = []
910
for pos, entry in enumerate(block[1]):
912
if (entry[1][0][0], entry[1][1][0]) in dead_patterns:
913
deleted_positions.append(pos)
914
if deleted_positions:
915
if len(deleted_positions) == len(block):
918
for pos in reversed(deleted_positions):
920
# if the first parent is a ghost:
921
if parents[0] in self.get_ghosts():
922
empty_parent = [DirState.NULL_PARENT_DETAILS]
923
for entry in iter_entries_removable():
924
entry[1][1:] = empty_parent
926
for entry in iter_entries_removable():
930
self._parents = [parents[0]]
931
self._dirblock_state = DirState.IN_MEMORY_MODIFIED
932
self._header_state = DirState.IN_MEMORY_MODIFIED
934
def _empty_parent_info(self):
935
return [DirState.NULL_PARENT_DETAILS] * (len(self._parents) -
938
def _ensure_block(self, parent_block_index, parent_row_index, dirname):
939
"""Ensure a block for dirname exists.
941
This function exists to let callers which know that there is a
942
directory dirname ensure that the block for it exists. This block can
943
fail to exist because of demand loading, or because a directory had no
944
children. In either case it is not an error. It is however an error to
945
call this if there is no parent entry for the directory, and thus the
946
function requires the coordinates of such an entry to be provided.
948
The root row is special cased and can be indicated with a parent block
951
:param parent_block_index: The index of the block in which dirname's row
953
:param parent_row_index: The index in the parent block where the row
955
:param dirname: The utf8 dirname to ensure there is a block for.
956
:return: The index for the block.
958
if dirname == '' and parent_row_index == 0 and parent_block_index == 0:
959
# This is the signature of the root row, and the
960
# contents-of-root row is always index 1
962
# the basename of the directory must be the end of its full name.
963
if not (parent_block_index == -1 and
964
parent_block_index == -1 and dirname == ''):
965
assert dirname.endswith(
966
self._dirblocks[parent_block_index][1][parent_row_index][0][1])
967
block_index, present = self._find_block_index_from_key((dirname, '', ''))
969
## In future, when doing partial parsing, this should load and
970
# populate the entire block.
971
self._dirblocks.insert(block_index, (dirname, []))
974
def _entries_to_current_state(self, new_entries):
975
"""Load new_entries into self.dirblocks.
977
Process new_entries into the current state object, making them the active
978
state. The entries are grouped together by directory to form dirblocks.
980
:param new_entries: A sorted list of entries. This function does not sort
981
to prevent unneeded overhead when callers have a sorted list already.
984
assert new_entries[0][0][0:2] == ('', ''), \
985
"Missing root row %r" % (new_entries[0][0],)
986
# The two blocks here are deliberate: the root block and the
987
# contents-of-root block.
988
self._dirblocks = [('', []), ('', [])]
989
current_block = self._dirblocks[0][1]
992
append_entry = current_block.append
993
for entry in new_entries:
994
if entry[0][0] != current_dirname:
995
# new block - different dirname
997
current_dirname = entry[0][0]
998
self._dirblocks.append((current_dirname, current_block))
999
append_entry = current_block.append
1000
# append the entry to the current block
1002
self._split_root_dirblock_into_contents()
1004
def _split_root_dirblock_into_contents(self):
1005
"""Split the root dirblocks into root and contents-of-root.
1007
After parsing by path, we end up with root entries and contents-of-root
1008
entries in the same block. This loop splits them out again.
1010
# The above loop leaves the "root block" entries mixed with the
1011
# "contents-of-root block". But we don't want an if check on
1012
# all entries, so instead we just fix it up here.
1013
assert self._dirblocks[1] == ('', [])
1015
contents_of_root_block = []
1016
for entry in self._dirblocks[0][1]:
1017
if not entry[0][1]: # This is a root entry
1018
root_block.append(entry)
1020
contents_of_root_block.append(entry)
1021
self._dirblocks[0] = ('', root_block)
1022
self._dirblocks[1] = ('', contents_of_root_block)
1024
def _entry_to_line(self, entry):
1025
"""Serialize entry to a NULL delimited line ready for _get_output_lines.
1027
:param entry: An entry_tuple as defined in the module docstring.
1029
entire_entry = list(entry[0])
1030
for tree_number, tree_data in enumerate(entry[1]):
1031
# (minikind, fingerprint, size, executable, tree_specific_string)
1032
entire_entry.extend(tree_data)
1033
# 3 for the key, 5 for the fields per tree.
1034
tree_offset = 3 + tree_number * 5
1036
entire_entry[tree_offset + 0] = tree_data[0]
1038
entire_entry[tree_offset + 2] = str(tree_data[2])
1040
entire_entry[tree_offset + 3] = DirState._to_yesno[tree_data[3]]
1041
return '\0'.join(entire_entry)
1043
def _fields_per_entry(self):
1044
"""How many null separated fields should be in each entry row.
1046
Each line now has an extra '\n' field which is not used
1047
so we just skip over it
1049
3 fields for the key
1050
+ number of fields per tree_data (5) * tree count
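
        For example (illustrative), with the working tree plus one parent
        present, tree_count is 2, giving 3 + 5 * 2 + 1 = 14 fields per row.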
1053
tree_count = 1 + self._num_present_parents()
1054
return 3 + 5 * tree_count + 1
1056
def _find_block(self, key, add_if_missing=False):
1057
"""Return the block that key should be present in.
1059
:param key: A dirstate entry key.
1060
:return: The block tuple.
1062
block_index, present = self._find_block_index_from_key(key)
1064
if not add_if_missing:
1065
# check to see if key is versioned itself - we might want to
            # add it anyway, because dirs with no entries don't get a
# dirblock at parse time.
1068
# This is an uncommon branch to take: most dirs have children,
1069
# and most code works with versioned paths.
1070
parent_base, parent_name = osutils.split(key[0])
1071
if not self._get_block_entry_index(parent_base, parent_name, 0)[3]:
                # some parent path has not been added - it's an error to add
raise errors.NotVersionedError(key[0:2], str(self))
1075
self._dirblocks.insert(block_index, (key[0], []))
1076
return self._dirblocks[block_index]
1078
def _find_block_index_from_key(self, key):
1079
"""Find the dirblock index for a key.
1081
:return: The block index, True if the block for the key is present.
1083
if key[0:2] == ('', ''):
1086
if (self._last_block_index is not None and
1087
self._dirblocks[self._last_block_index][0] == key[0]):
1088
return self._last_block_index, True
1091
block_index = bisect_dirblock(self._dirblocks, key[0], 1,
1092
cache=self._split_path_cache)
1093
# _right returns one-past-where-key is so we have to subtract
1094
# one to use it. we use _right here because there are two
1095
# '' blocks - the root, and the contents of root
1096
# we always have a minimum of 2 in self._dirblocks: root and
1097
# root-contents, and for '', we get 2 back, so this is
1098
# simple and correct:
1099
present = (block_index < len(self._dirblocks) and
1100
self._dirblocks[block_index][0] == key[0])
1101
self._last_block_index = block_index
1102
# Reset the entry index cache to the beginning of the block.
1103
self._last_entry_index = -1
1104
return block_index, present
1106
def _find_entry_index(self, key, block):
1107
"""Find the entry index for a key in a block.
1109
:return: The entry index, True if the entry for the key is present.
1111
len_block = len(block)
1113
if self._last_entry_index is not None:
1115
entry_index = self._last_entry_index + 1
1116
# A hit is when the key is after the last slot, and before or
1117
# equal to the next slot.
1118
if ((entry_index > 0 and block[entry_index - 1][0] < key) and
1119
key <= block[entry_index][0]):
1120
self._last_entry_index = entry_index
1121
present = (block[entry_index][0] == key)
1122
return entry_index, present
1125
entry_index = bisect.bisect_left(block, (key, []))
1126
present = (entry_index < len_block and
1127
block[entry_index][0] == key)
1128
self._last_entry_index = entry_index
1129
return entry_index, present
1132
def from_tree(tree, dir_state_filename):
1133
"""Create a dirstate from a bzr Tree.
1135
:param tree: The tree which should provide parent information and
1137
:return: a DirState object which is currently locked for writing.
1138
(it was locked by DirState.initialize)
1140
result = DirState.initialize(dir_state_filename)
1144
parent_ids = tree.get_parent_ids()
1145
num_parents = len(parent_ids)
1147
for parent_id in parent_ids:
1148
parent_tree = tree.branch.repository.revision_tree(parent_id)
1149
parent_trees.append((parent_id, parent_tree))
1150
parent_tree.lock_read()
1151
result.set_parent_trees(parent_trees, [])
1152
result.set_state_from_inventory(tree.inventory)
1154
for revid, parent_tree in parent_trees:
1155
parent_tree.unlock()
1158
# The caller won't have a chance to unlock this, so make sure we
1164
def update_basis_by_delta(self, delta, new_revid):
1165
"""Update the parents of this tree after a commit.
1167
This gives the tree one parent, with revision id new_revid. The
1168
inventory delta is applied to the current basis tree to generate the
1169
inventory for the parent new_revid, and all other parent trees are
1172
Note that an exception during the operation of this method will leave
1173
the dirstate in a corrupt state where it should not be saved.
1175
Finally, we expect all changes to be synchronising the basis tree with
        :param new_revid: The new revision id for the tree's parent.
        :param delta: An inventory delta (see apply_inventory_delta) describing
            the changes from the current leftmost parent revision to new_revid.
        """
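        # An illustrative delta (hypothetical paths and ids), as consumed by
        # the loop below:
        #   [('old/name', 'new/name', 'file-id-1', inv_entry),  # rename
        #    (None, 'other/added', 'file-id-2', inv_entry),     # add
        #    ('gone/file', None, 'file-id-3', None)]            # delete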
1182
self._read_dirblocks_if_needed()
1183
self._discard_merge_parents()
1184
if self._ghosts != []:
1185
raise NotImplementedError(self.update_basis_by_delta)
1186
if len(self._parents) == 0:
1187
# setup a blank tree, the most simple way.
1188
empty_parent = DirState.NULL_PARENT_DETAILS
1189
for entry in self._iter_entries():
1190
entry[1].append(empty_parent)
1191
self._parents.append(new_revid)
1193
self._parents[0] = new_revid
1195
delta = sorted(delta, reverse=True)
1199
# The paths this function accepts are unicode and must be encoded as we
1201
encode = cache_utf8.encode
1202
inv_to_entry = self._inv_entry_to_details
        # delta is now (deletes, changes), (adds) in reverse lexicographical
        # order.
        # deletes in reverse lexicographic order are safe to process in situ.
        # renames are not, as a rename from any path could go to a path
        # lexicographically lower, so we transform renames into delete, add pairs,
# expanding them recursively as needed.
1209
# At the same time, to reduce interface friction we convert the input
1210
# inventory entries to dirstate.
1212
for old_path, new_path, file_id, inv_entry in delta:
1213
if old_path is None:
1214
adds.append((None, encode(new_path), file_id,
1215
inv_to_entry(inv_entry)))
1216
elif new_path is None:
1217
deletes.append((encode(old_path), None, file_id, None))
1218
elif old_path != new_path:
1220
# Because renames must preserve their children we must have
                # processed all relocations and removes beforehand. The sort
# order ensures we've examined the child paths, but we also
1223
# have to execute the removals, or the split to an add/delete
1224
# pair will result in the deleted item being reinserted, or
1225
# renamed items being reinserted twice - and possibly at the
1226
# wrong place. Splitting into a delete/add pair also simplifies
1227
# the handling of entries with ('f', ...), ('r' ...) because
1228
# the target of the 'r' is old_path here, and we add that to
1229
# deletes, meaning that the add handler does not need to check
1230
# for 'r' items on every pass.
1231
self._update_basis_apply_deletes(deletes)
1233
new_path_utf8 = encode(new_path)
1234
# Split into an add/delete pair recursively.
1235
adds.append((None, new_path_utf8, file_id,
1236
inv_to_entry(inv_entry)))
1237
# Remove the current contents of the tree at orig_path, and
1238
# reinsert at the correct new path.
1239
new_deletes = list(reversed(list(self._iter_child_entries(1,
1240
encode(old_path)))))
1241
for entry in new_deletes:
1242
source_path = '/'.join(entry[0][0:2])
1243
target_path = new_path_utf8 + source_path[len(old_path):]
1244
adds.append((None, target_path, entry[0][2], entry[1][1]))
1245
deletes.append((source_path, None, entry[0][2], None))
1246
deletes.append((encode(old_path), None, file_id, None))
1248
changes.append((encode(old_path), encode(new_path), file_id,
1249
inv_to_entry(inv_entry)))
1251
self._update_basis_apply_deletes(deletes)
1252
self._update_basis_apply_changes(changes)
1253
self._update_basis_apply_adds(adds)
1255
# remove all deletes
1256
self._dirblock_state = DirState.IN_MEMORY_MODIFIED
1257
self._header_state = DirState.IN_MEMORY_MODIFIED
1260
def _update_basis_apply_adds(self, adds):
1261
"""Apply a sequence of adds to tree 1 during update_basis_by_delta.
1263
They may be adds, or renames that have been split into add/delete
1266
:param adds: A sequence of adds. Each add is a tuple:
1267
(None, new_path_utf8, file_id, (entry_details))
1269
# Adds are accumulated partly from renames, so can be in any input
        # adds is now in lexicographic order, which places all parents before
# their children, so we can process it linearly.
1275
for old_path, new_path, file_id, new_details in adds:
1276
assert old_path is None
1277
# the entry for this file_id must be in tree 0.
1278
entry = self._get_entry(0, file_id, new_path)
1279
if entry[0][2] != file_id:
1280
raise errors.BzrError('dirstate: cannot apply delta, working'
1281
' tree does not contain new entry %r %r' %
1282
(new_path, file_id))
1283
if entry[1][1][0] not in absent:
1284
raise errors.BzrError('dirstate: inconsistent delta, with '
1285
'tree 0. %r %r' % (new_path, file_id))
1286
# We don't need to update the target of an 'r' because the handling
1287
# of renames turns all 'r' situations into a delete at the original
1289
entry[1][1] = new_details
1291
def _update_basis_apply_changes(self, changes):
1292
"""Apply a sequence of changes to tree 1 during update_basis_by_delta.
        :param changes: A sequence of changes. Each change is a tuple:
(path_utf8, path_utf8, file_id, (entry_details))
1298
for old_path, new_path, file_id, new_details in changes:
1299
assert old_path == new_path
1300
# the entry for this file_id must be in tree 0.
1301
entry = self._get_entry(0, file_id, new_path)
1302
if entry[0][2] != file_id:
1303
raise errors.BzrError('dirstate: cannot apply delta, working'
1304
' tree does not contain new entry %r %r' %
1305
(new_path, file_id))
1306
if (entry[1][0][0] in absent or
1307
entry[1][1][0] in absent):
1308
raise errors.BzrError('dirstate: inconsistent delta, with '
1309
'tree 0. %r %r' % (new_path, file_id))
1310
entry[1][1] = new_details
1312
def _update_basis_apply_deletes(self, deletes):
1313
"""Apply a sequence of deletes to tree 1 during update_basis_by_delta.
1315
They may be deletes, or renames that have been split into add/delete
        :param deletes: A sequence of deletes. Each delete is a tuple:
(old_path_utf8, None, file_id, None)
        # Deletes are accumulated in lexicographical order.
for old_path, new_path, file_id, _ in deletes:
1324
assert new_path is None
1325
# the entry for this file_id must be in tree 1.
1326
dirname, basename = osutils.split(old_path)
1327
block_index, entry_index, dir_present, file_present = \
1328
self._get_block_entry_index(dirname, basename, 1)
1329
if not file_present:
1330
raise errors.BzrError('dirstate: cannot apply delta, basis'
1331
' tree does not contain new entry %r %r' %
1332
(old_path, file_id))
1333
entry = self._dirblocks[block_index][1][entry_index]
1334
if entry[0][2] != file_id:
1335
raise errors.BzrError('mismatched file_id in tree 1 %r %r' %
1336
(old_path, file_id))
1337
if entry[1][0][0] not in absent:
1338
raise errors.BzrError('dirstate: inconsistent delta, with '
1339
'tree 0. %r %r' % (old_path, file_id))
1340
del self._dirblocks[block_index][1][entry_index]
1342
def update_entry(self, entry, abspath, stat_value,
1343
_stat_to_minikind=_stat_to_minikind,
1344
_pack_stat=pack_stat):
1345
"""Update the entry based on what is actually on disk.
1347
:param entry: This is the dirblock entry for the file in question.
1348
:param abspath: The path on disk for this file.
1349
:param stat_value: (optional) if we already have done a stat on the
1351
:return: The sha1 hexdigest of the file (40 bytes) or link target of a
1355
minikind = _stat_to_minikind[stat_value.st_mode & 0170000]
1359
packed_stat = _pack_stat(stat_value)
1360
(saved_minikind, saved_link_or_sha1, saved_file_size,
1361
saved_executable, saved_packed_stat) = entry[1][0]
1363
if (minikind == saved_minikind
1364
and packed_stat == saved_packed_stat):
1365
# The stat hasn't changed since we saved, so we can re-use the
1370
# size should also be in packed_stat
1371
if saved_file_size == stat_value.st_size:
1372
return saved_link_or_sha1
1374
# If we have gotten this far, that means that we need to actually
1375
# process this entry.
1378
link_or_sha1 = self._sha1_file(abspath)
1379
executable = self._is_executable(stat_value.st_mode,
1381
if self._cutoff_time is None:
1382
self._sha_cutoff_time()
1383
if (stat_value.st_mtime < self._cutoff_time
1384
and stat_value.st_ctime < self._cutoff_time):
1385
entry[1][0] = ('f', link_or_sha1, stat_value.st_size,
1386
executable, packed_stat)
1388
entry[1][0] = ('f', '', stat_value.st_size,
1389
executable, DirState.NULLSTAT)
1390
elif minikind == 'd':
1392
entry[1][0] = ('d', '', 0, False, packed_stat)
1393
if saved_minikind != 'd':
1394
# This changed from something into a directory. Make sure we
1395
# have a directory block for it. This doesn't happen very
1396
# often, so this doesn't have to be super fast.
1397
block_index, entry_index, dir_present, file_present = \
1398
self._get_block_entry_index(entry[0][0], entry[0][1], 0)
1399
self._ensure_block(block_index, entry_index,
1400
osutils.pathjoin(entry[0][0], entry[0][1]))
1401
elif minikind == 'l':
1402
link_or_sha1 = self._read_link(abspath, saved_link_or_sha1)
1403
if self._cutoff_time is None:
1404
self._sha_cutoff_time()
1405
if (stat_value.st_mtime < self._cutoff_time
1406
and stat_value.st_ctime < self._cutoff_time):
1407
entry[1][0] = ('l', link_or_sha1, stat_value.st_size,
1410
entry[1][0] = ('l', '', stat_value.st_size,
1411
False, DirState.NULLSTAT)
1412
self._dirblock_state = DirState.IN_MEMORY_MODIFIED
1415
def _sha_cutoff_time(self):
1416
"""Return cutoff time.
1418
Files modified more recently than this time are at risk of being
1419
undetectably modified and so can't be cached.
1421
# Cache the cutoff time as long as we hold a lock.
1422
# time.time() isn't super expensive (approx 3.38us), but
1423
# when you call it 50,000 times it adds up.
1424
# For comparison, os.lstat() costs 7.2us if it is hot.
1425
self._cutoff_time = int(time.time()) - 3
1426
return self._cutoff_time
1428
def _lstat(self, abspath, entry):
1429
"""Return the os.lstat value for this path."""
1430
return os.lstat(abspath)
1432
def _sha1_file_and_mutter(self, abspath):
1433
# when -Dhashcache is turned on, this is monkey-patched in to log
1435
trace.mutter("dirstate sha1 " + abspath)
1436
return osutils.sha_file_by_name(abspath)
1438
def _is_executable(self, mode, old_executable):
1439
"""Is this file executable?"""
1440
return bool(S_IEXEC & mode)
1442
def _is_executable_win32(self, mode, old_executable):
1443
"""On win32 the executable bit is stored in the dirstate."""
1444
return old_executable
1446
if sys.platform == 'win32':
1447
_is_executable = _is_executable_win32
1449
def _read_link(self, abspath, old_link):
1450
"""Read the target of a symlink"""
1451
# TODO: jam 200700301 On Win32, this could just return the value
1452
# already in memory. However, this really needs to be done at a
1453
# higher level, because there either won't be anything on disk,
1454
# or the thing on disk will be a file.
1455
return os.readlink(abspath)
1457
def get_ghosts(self):
1458
"""Return a list of the parent tree revision ids that are ghosts."""
1459
self._read_header_if_needed()
1462
def get_lines(self):
1463
"""Serialise the entire dirstate to a sequence of lines."""
1464
if (self._header_state == DirState.IN_MEMORY_UNMODIFIED and
1465
self._dirblock_state == DirState.IN_MEMORY_UNMODIFIED):
            # read what's on disk.
self._state_file.seek(0)
1468
return self._state_file.readlines()
1470
lines.append(self._get_parents_line(self.get_parent_ids()))
1471
lines.append(self._get_ghosts_line(self._ghosts))
1472
# append the root line which is special cased
1473
lines.extend(map(self._entry_to_line, self._iter_entries()))
1474
return self._get_output_lines(lines)
1476
def _get_ghosts_line(self, ghost_ids):
1477
"""Create a line for the state file for ghost information."""
1478
return '\0'.join([str(len(ghost_ids))] + ghost_ids)
1480
def _get_parents_line(self, parent_ids):
1481
"""Create a line for the state file for parents information."""
1482
return '\0'.join([str(len(parent_ids))] + parent_ids)
1484
def _get_fields_to_entry(self):
        """Get a function which converts entry fields into an entry record.
This handles size and executable, as well as parent records.
1489
:return: A function which takes a list of fields, and returns an
1490
appropriate record for storing in memory.
1492
# This is intentionally unrolled for performance
1493
num_present_parents = self._num_present_parents()
1494
if num_present_parents == 0:
1495
def fields_to_entry_0_parents(fields, _int=int):
1496
path_name_file_id_key = (fields[0], fields[1], fields[2])
1497
return (path_name_file_id_key, [
1499
fields[3], # minikind
1500
fields[4], # fingerprint
1501
_int(fields[5]), # size
1502
fields[6] == 'y', # executable
1503
fields[7], # packed_stat or revision_id
1505
return fields_to_entry_0_parents
1506
elif num_present_parents == 1:
1507
def fields_to_entry_1_parent(fields, _int=int):
1508
path_name_file_id_key = (fields[0], fields[1], fields[2])
1509
return (path_name_file_id_key, [
1511
fields[3], # minikind
1512
fields[4], # fingerprint
1513
_int(fields[5]), # size
1514
fields[6] == 'y', # executable
1515
fields[7], # packed_stat or revision_id
1518
fields[8], # minikind
1519
fields[9], # fingerprint
1520
_int(fields[10]), # size
1521
fields[11] == 'y', # executable
1522
fields[12], # packed_stat or revision_id
1525
return fields_to_entry_1_parent
1526
elif num_present_parents == 2:
1527
def fields_to_entry_2_parents(fields, _int=int):
1528
path_name_file_id_key = (fields[0], fields[1], fields[2])
1529
return (path_name_file_id_key, [
1531
fields[3], # minikind
1532
fields[4], # fingerprint
1533
_int(fields[5]), # size
1534
fields[6] == 'y', # executable
1535
fields[7], # packed_stat or revision_id
1538
fields[8], # minikind
1539
fields[9], # fingerprint
1540
_int(fields[10]), # size
1541
fields[11] == 'y', # executable
1542
fields[12], # packed_stat or revision_id
1545
fields[13], # minikind
1546
fields[14], # fingerprint
1547
_int(fields[15]), # size
1548
fields[16] == 'y', # executable
1549
fields[17], # packed_stat or revision_id
1552
return fields_to_entry_2_parents
1554
def fields_to_entry_n_parents(fields, _int=int):
1555
path_name_file_id_key = (fields[0], fields[1], fields[2])
1556
trees = [(fields[cur], # minikind
1557
fields[cur+1], # fingerprint
1558
_int(fields[cur+2]), # size
1559
fields[cur+3] == 'y', # executable
1560
fields[cur+4], # stat or revision_id
1561
) for cur in xrange(3, len(fields)-1, 5)]
1562
return path_name_file_id_key, trees
1563
return fields_to_entry_n_parents
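        # Illustrative behaviour (hypothetical values) for the zero-parent
        # variant above: the fields
        #   ['', 'foo', 'foo-id', 'f', sha1, '12', 'n', packed_stat]
        # become
        #   (('', 'foo', 'foo-id'), [('f', sha1, 12, False, packed_stat)]).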
1565
def get_parent_ids(self):
1566
"""Return a list of the parent tree ids for the directory state."""
1567
self._read_header_if_needed()
1568
return list(self._parents)
1570
    def _get_block_entry_index(self, dirname, basename, tree_index):
        """Get the coordinates for a path in the state structure.

        :param dirname: The utf8 dirname to lookup.
        :param basename: The utf8 basename to lookup.
        :param tree_index: The index of the tree for which this lookup should
            be attempted.
        :return: A tuple describing where the path is located, or should be
            inserted. The tuple contains four fields: the block index, the row
            index, the directory is present (boolean), the entire path is
            present (boolean). There is no guarantee that either
            coordinate is currently reachable unless the found field for it is
            True. For instance, a directory not present in the searched tree
            may be returned with a value one greater than the current highest
            block offset. The directory present field will always be True when
            the path present field is True. The directory present field does
            NOT indicate that the directory is present in the searched tree,
            rather it indicates that there are at least some files in some
            parent tree present there.
        """
        self._read_dirblocks_if_needed()
        key = dirname, basename, ''
        block_index, present = self._find_block_index_from_key(key)
        if not present:
            # no such directory - return the dir index and 0 for the row.
            return block_index, 0, False, False
        block = self._dirblocks[block_index][1] # access the entries only
        entry_index, present = self._find_entry_index(key, block)
        # linear search through entries at this path to find the one
        # requested.
        while entry_index < len(block) and block[entry_index][0][1] == basename:
            if block[entry_index][1][tree_index][0] not in \
                    ('a', 'r'): # absent, relocated
                return block_index, entry_index, True, True
            entry_index += 1
        return block_index, entry_index, True, False

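    # Illustrative sketch (assumed example values, not from the original
    # source): looking up ('lib', 'util.py') in tree 0 might return
    #   (block_index=3, entry_index=1, dir_present=True, path_present=True)
    # whereas a path whose directory is unknown to every tree returns
    #   (insertion_block_index, 0, False, False)
    # so callers must check the two boolean fields before using the indexes.
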
    def _get_entry(self, tree_index, fileid_utf8=None, path_utf8=None):
        """Get the dirstate entry for path in tree tree_index.

        If either file_id or path is supplied, it is used as the key to lookup.
        If both are supplied, the fastest lookup is used, and an error is
        raised if they do not both point at the same row.

        :param tree_index: The index of the tree we wish to locate this path
            in. If the path is present in that tree, the entry containing its
            details is returned, otherwise (None, None) is returned.
            0 is the working tree, higher indexes are successive parent
            trees.
        :param fileid_utf8: A utf8 file_id to look up.
        :param path_utf8: An utf8 path to be looked up.
        :return: The dirstate entry tuple for path, or (None, None)
        """
        self._read_dirblocks_if_needed()
        if path_utf8 is not None:
            assert path_utf8.__class__ == str, ('path_utf8 is not a str: %s %s'
                % (type(path_utf8), path_utf8))
            # path lookups are faster
            dirname, basename = osutils.split(path_utf8)
            block_index, entry_index, dir_present, file_present = \
                self._get_block_entry_index(dirname, basename, tree_index)
            if not file_present:
                return None, None
            entry = self._dirblocks[block_index][1][entry_index]
            assert entry[0][2] and entry[1][tree_index][0] not in ('a', 'r'), 'unversioned entry?!?!'
            if fileid_utf8:
                if entry[0][2] != fileid_utf8:
                    raise errors.BzrError('integrity error ? : mismatching'
                                          ' tree_index, file_id and path')
            return entry
        else:
            assert fileid_utf8 is not None
            possible_keys = self._get_id_index().get(fileid_utf8, None)
            if not possible_keys:
                return None, None
            for key in possible_keys:
                block_index, present = \
                    self._find_block_index_from_key(key)
                # strange, probably indicates an out of date
                # id index - for now, allow this.
                if not present:
                    continue
                # WARNING: DO not change this code to use _get_block_entry_index
                # as that function is not suitable: it does not use the key
                # to lookup, and thus the wrong coordinates are returned.
                block = self._dirblocks[block_index][1]
                entry_index, present = self._find_entry_index(key, block)
                if present:
                    entry = self._dirblocks[block_index][1][entry_index]
                    if entry[1][tree_index][0] in 'fdlt':
                        # this is the result we are looking for: the
                        # real home of this file_id in this tree.
                        return entry
                    if entry[1][tree_index][0] == 'a':
                        # there is no home for this entry in this tree
                        return None, None
                    assert entry[1][tree_index][0] == 'r', \
                        "entry %r has invalid minikind %r for tree %r" \
                        % (entry,
                           entry[1][tree_index][0],
                           tree_index)
                    real_path = entry[1][tree_index][1]
                    return self._get_entry(tree_index, fileid_utf8=fileid_utf8,
                        path_utf8=real_path)
            return None, None

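    # Illustrative sketch (assumed example, not from the original source):
    # the two lookup modes of _get_entry behave the same way from the
    # caller's point of view:
    #   state._get_entry(0, path_utf8='lib/util.py')
    #   state._get_entry(0, fileid_utf8='util-id')
    # Both return (key, tree_details) or (None, None); the path form walks
    # the dirblocks directly, while the file_id form goes through the id
    # index and follows any 'r' (relocated) records to the real row.
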
    @classmethod
    def initialize(cls, path):
        """Create a new dirstate on path.

        The new dirstate will be an empty tree - that is it has no parents,
        and only a root node - which has id ROOT_ID.

        :param path: The name of the file for the dirstate.
        :return: A write-locked DirState object.
        """
        # This constructs a new DirState object on a path, sets the _state_file
        # to a new empty file for that path. It then calls _set_data() with our
        # stock empty dirstate information - a root with ROOT_ID, no children,
        # and no parents. Finally it calls save() to ensure that this data will
        # persist.
        result = cls(path)
        # root dir and root dir contents with no children.
        empty_tree_dirblocks = [('', []), ('', [])]
        # a new root directory, with a NULLSTAT.
        empty_tree_dirblocks[0][1].append(
            (('', '', inventory.ROOT_ID), [
                ('d', '', 0, False, DirState.NULLSTAT),
            ]))
        result.lock_write()
        try:
            result._set_data([], empty_tree_dirblocks)
            result.save()
        except:
            result.unlock()
            raise
        return result

    def _inv_entry_to_details(self, inv_entry):
        """Convert an inventory entry (from a revision tree) to state details.

        :param inv_entry: An inventory entry whose sha1 and link targets can be
            relied upon, and which has a revision set.
        :return: A details tuple - the details for a single tree at a path +
            id.
        """
        kind = inv_entry.kind
        minikind = DirState._kind_to_minikind[kind]
        tree_data = inv_entry.revision
        assert tree_data, 'empty revision for the inv_entry %s.' % \
            inv_entry.file_id
        if kind == 'directory':
            fingerprint = ''
            size = 0
            executable = False
        elif kind == 'symlink':
            fingerprint = inv_entry.symlink_target or ''
            size = 0
            executable = False
        elif kind == 'file':
            fingerprint = inv_entry.text_sha1 or ''
            size = inv_entry.text_size or 0
            executable = inv_entry.executable
        elif kind == 'tree-reference':
            fingerprint = inv_entry.reference_revision or ''
            size = 0
            executable = False
        else:
            raise Exception("can't pack %s" % inv_entry)
        return (minikind, fingerprint, size, executable, tree_data)

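    # Illustrative sketch (assumed example values, not from the original
    # source): for a regular file this returns something like
    #   ('f', '1b2a...sha1...', 1024, False, 'revid-of-last-change')
    # and for a symlink
    #   ('l', 'target/of/link', 0, False, 'revid-of-last-change')
    # i.e. the same 5-tuple shape used for working-tree details, with the
    # parent revision id in the slot that holds the packed stat for tree 0.
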
    def _iter_child_entries(self, tree_index, path_utf8):
        """Iterate over all the entries that are children of path_utf8.

        This only returns entries that are present (not in 'a', 'r') in
        tree_index. tree_index data is not refreshed, so if tree 0 is used,
        results may differ from that obtained if paths were statted to
        determine which ones were directories.

        Asking for the children of a non-directory will return an empty
        iterator.
        """
        pending_dirs = []
        next_pending_dirs = [path_utf8]
        absent = ('a', 'r')
        while next_pending_dirs:
            pending_dirs = next_pending_dirs
            next_pending_dirs = []
            for path in pending_dirs:
                block_index, present = self._find_block_index_from_key(
                    (path, '', ''))
                if not present:
                    # children of a non-directory asked for.
                    continue
                block = self._dirblocks[block_index]
                for entry in block[1]:
                    kind = entry[1][tree_index][0]
                    if kind not in absent:
                        yield entry
                    if kind == 'd':
                        next_pending_dirs.append('/'.join(entry[0][0:2]))

    def _iter_entries(self):
        """Iterate over all the entries in the dirstate.

        Each yielded item is an entry in the standard format described in the
        docstring of bzrlib.dirstate.
        """
        self._read_dirblocks_if_needed()
        for directory in self._dirblocks:
            for entry in directory[1]:
                yield entry

    def _get_id_index(self):
        """Get an id index of self._dirblocks."""
        if self._id_index is None:
            id_index = {}
            for key, tree_details in self._iter_entries():
                id_index.setdefault(key[2], set()).add(key)
            self._id_index = id_index
        return self._id_index

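    # Illustrative sketch (assumed example, not from the original source):
    # the id index maps each file_id to the set of keys that mention it, e.g.
    #   {'util-id': set([('lib', 'util.py', 'util-id'),
    #                    ('old-lib', 'util.py', 'util-id')])}
    # A file_id has more than one key when it lives at different paths in
    # different trees; the extra rows are 'r' (relocation) records.
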
    def _get_output_lines(self, lines):
        """Format lines for final output.

        :param lines: A sequence of lines containing the parents list and the
            path lines.
        """
        output_lines = [DirState.HEADER_FORMAT_3]
        lines.append('') # a final newline
        inventory_text = '\0\n\0'.join(lines)
        output_lines.append('crc32: %s\n' % (zlib.crc32(inventory_text),))
        # -3, 1 for num parents, 1 for ghosts, 1 for final newline
        num_entries = len(lines)-3
        output_lines.append('num_entries: %s\n' % (num_entries,))
        output_lines.append(inventory_text)
        return output_lines

    def _make_deleted_row(self, fileid_utf8, parents):
        """Return a deleted row for fileid_utf8."""
        return ('/', 'RECYCLED.BIN', 'file', fileid_utf8, 0, DirState.NULLSTAT,
            ''), parents

    def _num_present_parents(self):
        """The number of parent entries in each record row."""
        return len(self._parents) - len(self._ghosts)

    @staticmethod
    def on_file(path):
        """Construct a DirState on the file at path path.

        :return: An unlocked DirState object, associated with the given path.
        """
        result = DirState(path)
        return result

    def _read_dirblocks_if_needed(self):
        """Read in all the dirblocks from the file if they are not in memory.

        This populates self._dirblocks, and sets self._dirblock_state to
        IN_MEMORY_UNMODIFIED. It is not currently ready for incremental block
        loading.
        """
        self._read_header_if_needed()
        if self._dirblock_state == DirState.NOT_IN_MEMORY:
            _read_dirblocks(self)

    def _read_header(self):
        """This reads in the metadata header, and the parent ids.

        After reading in, the file should be positioned at the null
        just before the start of the first record in the file.

        :return: (expected crc checksum, number of entries, parent list)
        """
        self._read_prelude()
        parent_line = self._state_file.readline()
        info = parent_line.split('\0')
        num_parents = int(info[0])
        assert num_parents == len(info)-2, 'incorrect parent info line'
        self._parents = info[1:-1]

        ghost_line = self._state_file.readline()
        info = ghost_line.split('\0')
        num_ghosts = int(info[1])
        assert num_ghosts == len(info)-3, 'incorrect ghost info line'
        self._ghosts = info[2:-1]
        self._header_state = DirState.IN_MEMORY_UNMODIFIED
        self._end_of_header = self._state_file.tell()

    def _read_header_if_needed(self):
        """Read the header of the dirstate file if needed."""
        # inline this as it will be called a lot
        if not self._lock_token:
            raise errors.ObjectNotLocked(self)
        if self._header_state == DirState.NOT_IN_MEMORY:
            self._read_header()

    def _read_prelude(self):
        """Read in the prelude header of the dirstate file.

        This only reads in the stuff that is not connected to the crc
        checksum. The position will be correct to read in the rest of
        the file and check the checksum after this point.
        The next entry in the file should be the number of parents,
        and their ids. Followed by a newline.
        """
        header = self._state_file.readline()
        assert header == DirState.HEADER_FORMAT_3, \
            'invalid header line: %r' % (header,)
        crc_line = self._state_file.readline()
        assert crc_line.startswith('crc32: '), 'missing crc32 checksum'
        self.crc_expected = int(crc_line[len('crc32: '):-1])
        num_entries_line = self._state_file.readline()
        assert num_entries_line.startswith('num_entries: '), 'missing num_entries line'
        self._num_entries = int(num_entries_line[len('num_entries: '):-1])

    def save(self):
        """Save any pending changes created during this session.

        We reuse the existing file, because that prevents race conditions with
        file creation, and use oslocks on it to prevent concurrent modification
        and reads - because dirstate's incremental data aggregation is not
        compatible with reading a modified file, and replacing a file in use by
        another process is impossible on Windows.

        A dirstate in read-only mode should be smart enough to validate
        that the file has not changed, and otherwise discard its cache and
        start over, to allow for fine grained read lock duration, so 'status'
        won't block 'commit' - for example.
        """
        if (self._header_state == DirState.IN_MEMORY_MODIFIED or
            self._dirblock_state == DirState.IN_MEMORY_MODIFIED):

            grabbed_write_lock = False
            if self._lock_state != 'w':
                grabbed_write_lock, new_lock = self._lock_token.temporary_write_lock()
                # Switch over to the new lock, as the old one may be closed.
                # TODO: jam 20070315 We should validate the disk file has
                #       not changed contents. Since temporary_write_lock may
                #       not be an atomic operation.
                self._lock_token = new_lock
                self._state_file = new_lock.f
                if not grabbed_write_lock:
                    # We couldn't grab a write lock, so we switch back to a read one
                    return
            try:
                self._state_file.seek(0)
                self._state_file.writelines(self.get_lines())
                self._state_file.truncate()
                self._state_file.flush()
                self._header_state = DirState.IN_MEMORY_UNMODIFIED
                self._dirblock_state = DirState.IN_MEMORY_UNMODIFIED
            finally:
                if grabbed_write_lock:
                    self._lock_token = self._lock_token.restore_read_lock()
                    self._state_file = self._lock_token.f
                    # TODO: jam 20070315 We should validate the disk file has
                    #       not changed contents. Since restore_read_lock may
                    #       not be an atomic operation.

    def _set_data(self, parent_ids, dirblocks):
        """Set the full dirstate data in memory.

        This is an internal function used to completely replace the objects
        in memory state. It puts the dirstate into state 'full-dirty'.

        :param parent_ids: A list of parent tree revision ids.
        :param dirblocks: A list containing one tuple for each directory in the
            tree. Each tuple contains the directory path and a list of entries
            found in that directory.
        """
        # our memory copy is now authoritative.
        self._dirblocks = dirblocks
        self._header_state = DirState.IN_MEMORY_MODIFIED
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        self._parents = list(parent_ids)
        self._id_index = None

    def set_path_id(self, path, new_id):
        """Change the id of path to new_id in the current working tree.

        :param path: The path inside the tree to set - '' is the root, 'foo'
            is the path foo in the root.
        :param new_id: The new id to assign to the path. This must be a utf8
            file id (not unicode, and not None).
        """
        assert new_id.__class__ == str, \
            "path_id %r is not a plain string" % (new_id,)
        self._read_dirblocks_if_needed()
        if len(path):
            # TODO: logic not written
            raise NotImplementedError(self.set_path_id)
        # TODO: check new id is unique
        entry = self._get_entry(0, path_utf8=path)
        if entry[0][2] == new_id:
            # Nothing to change.
            return
        # mark the old path absent, and insert a new root path
        self._make_absent(entry)
        self.update_minimal(('', '', new_id), 'd',
            path_utf8='', packed_stat=entry[1][0][4])
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        if self._id_index is not None:
            self._id_index.setdefault(new_id, set()).add(entry[0])

    def set_parent_trees(self, trees, ghosts):
        """Set the parent trees for the dirstate.

        :param trees: A list of revision_id, tree tuples. tree must be provided
            even if the revision_id refers to a ghost: supply an empty tree in
            this case.
        :param ghosts: A list of the revision_ids that are ghosts at the time
            of setting.
        """
        # TODO: generate a list of parent indexes to preserve to save
        # processing specific parent trees. In the common case one tree will
        # be preserved - the left most parent.
        # TODO: if the parent tree is a dirstate, we might want to walk them
        # all by path in parallel for 'optimal' common-case performance.
        # generate new root row.
        self._read_dirblocks_if_needed()
        # TODO future sketch: Examine the existing parents to generate a change
        # map and then walk the new parent trees only, mapping them into the
        # dirstate. Walk the dirstate at the same time to remove unreferenced
        # entries.
        # for now:
        # sketch: loop over all entries in the dirstate, cherry picking
        # entries from the parent trees, if they are not ghost trees.
        # after we finish walking the dirstate, all entries not in the dirstate
        # are deletes, so we want to append them to the end as per the design
        # discussions. So do a set difference on ids with the parents to
        # get deletes, and add them to the end.
        # During the update process we need to answer the following questions:
        # - find other keys containing a fileid in order to create cross-path
        #   links. We don't trivially use the inventory from other trees
        #   because this leads to either double touching, or to accessing
        #   missing keys,
        # - find other keys containing a path
        # We accumulate each entry via this dictionary, including the root
        by_path = {}
        id_index = {}
        # we could do parallel iterators, but because file id data may be
        # scattered throughout, we don't save on index overhead: we have to look
        # at everything anyway. We can probably save cycles by reusing parent
        # data and doing an incremental update when adding an additional
        # parent, but for now the common cases are adding a new parent (merge),
        # and replacing completely (commit), and commit is more common: so
        # optimise merge later.

        # ---- start generation of full tree mapping data
        # what trees should we use?
        parent_trees = [tree for rev_id, tree in trees if rev_id not in ghosts]
        # how many trees do we end up with
        parent_count = len(parent_trees)

        # one: the current tree
        for entry in self._iter_entries():
            # skip entries not in the current tree
            if entry[1][0][0] in ('a', 'r'): # absent, relocated
                continue
            by_path[entry[0]] = [entry[1][0]] + \
                [DirState.NULL_PARENT_DETAILS] * parent_count
            id_index[entry[0][2]] = set([entry[0]])

        # now the parent trees:
        for tree_index, tree in enumerate(parent_trees):
            # the index is off by one, adjust it.
            tree_index = tree_index + 1
            # when we add new locations for a fileid we need these ranges for
            # any fileid in this tree as we set the by_path[id] to:
            # already_processed_tree_details + new_details + new_location_suffix
            # the suffix is from tree_index+1:parent_count+1.
            new_location_suffix = [DirState.NULL_PARENT_DETAILS] * (parent_count - tree_index)
            # now stitch in all the entries from this tree
            for path, entry in tree.inventory.iter_entries_by_dir():
                # here we process each tree's details for each item in the tree.
                # we first update any existing entries for the id at other paths,
                # then we either create or update the entry for the id at the
                # right path, and finally we add (if needed) a mapping from
                # file_id to this path. We do it in this order to allow us to
                # avoid checking all known paths for the id when generating a
                # new entry at this path: by adding the id->path mapping last,
                # all the mappings are valid and have correct relocation
                # records where needed.
                file_id = entry.file_id
                path_utf8 = path.encode('utf8')
                dirname, basename = osutils.split(path_utf8)
                new_entry_key = (dirname, basename, file_id)
                # tree index consistency: All other paths for this id in this tree
                # index must point to the correct path.
                for entry_key in id_index.setdefault(file_id, set()):
                    # TODO:PROFILING: It might be faster to just update
                    # rather than checking if we need to, and then overwrite
                    # the one we are located at.
                    if entry_key != new_entry_key:
                        # this file id is at a different path in one of the
                        # other trees, so put absent pointers there
                        # This is the vertical axis in the matrix, all pointing
                        # to the real path.
                        by_path[entry_key][tree_index] = ('r', path_utf8, 0, False, '')
                # by path consistency: Insert into an existing path record (trivial), or
                # add a new one with relocation pointers for the other tree indexes.
                if new_entry_key in id_index[file_id]:
                    # there is already an entry where this data belongs, just insert it.
                    by_path[new_entry_key][tree_index] = \
                        self._inv_entry_to_details(entry)
                else:
                    # add relocated entries to the horizontal axis - this row
                    # mapping from path,id. We need to look up the correct path
                    # for the indexes from 0 to tree_index -1
                    new_details = []
                    for lookup_index in xrange(tree_index):
                        # boundary case: this is the first occurrence of file_id
                        # so there are no id_index entries yet; possibly take
                        # this out of the loop?
                        if not len(id_index[file_id]):
                            new_details.append(DirState.NULL_PARENT_DETAILS)
                        else:
                            # grab any one entry, use it to find the right path.
                            # TODO: optimise this to reduce memory use in highly
                            # fragmented situations by reusing the relocation
                            # records.
                            a_key = iter(id_index[file_id]).next()
                            if by_path[a_key][lookup_index][0] in ('r', 'a'):
                                # it's a pointer or missing statement, use it as is.
                                new_details.append(by_path[a_key][lookup_index])
                            else:
                                # we have the right key, make a pointer to it.
                                real_path = ('/'.join(a_key[0:2])).strip('/')
                                new_details.append(('r', real_path, 0, False, ''))
                    new_details.append(self._inv_entry_to_details(entry))
                    new_details.extend(new_location_suffix)
                    by_path[new_entry_key] = new_details
                    id_index[file_id].add(new_entry_key)
        # --- end generation of full tree mappings

        # sort and output all the entries
        new_entries = self._sort_entries(by_path.items())
        self._entries_to_current_state(new_entries)
        self._parents = [rev_id for rev_id, tree in trees]
        self._ghosts = list(ghosts)
        self._header_state = DirState.IN_MEMORY_MODIFIED
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        self._id_index = id_index

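    # Illustrative sketch (assumed example values, not from the original
    # source): after set_parent_trees() with one parent, a file renamed in
    # the working tree from 'a.txt' to 'b.txt' is represented by two rows:
    #   ('', 'b.txt', 'file-id'): [('f', SHA1, SIZE, False, PACKED_STAT),
    #                              ('r', 'a.txt', 0, False, '')]
    #   ('', 'a.txt', 'file-id'): [('r', 'b.txt', 0, False, ''),
    #                              ('f', SHA1, SIZE, False, REVISION_ID)]
    # Each tree column sees the file at exactly one path; every other row for
    # that file_id carries an 'r' record pointing at the real location.
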
    def _sort_entries(self, entry_list):
        """Given a list of entries, sort them into the right order.

        This is done when constructing a new dirstate from trees - normally we
        try to keep everything in sorted blocks all the time, but sometimes
        it's easier to sort after the fact.
        """
        def _key(entry):
            # sort by: directory parts, file name, file id
            return entry[0][0].split('/'), entry[0][1], entry[0][2]
        return sorted(entry_list, key=_key)

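    # Illustrative note (not from the original source): splitting the dirname
    # on '/' makes the sort group entries by path component rather than by
    # raw byte order. For example 'a-b' sorts before 'a/b' as plain strings
    # (ord('-') < ord('/')), but after it here, because ['a', 'b'] < ['a-b'];
    # this keeps every directory's children together in one block.
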
    def set_state_from_inventory(self, new_inv):
        """Set new_inv as the current state.

        This API is called by tree transform, and will usually occur with
        existing parent trees.

        :param new_inv: The inventory object to set current state from.
        """
        if 'evil' in debug.debug_flags:
            trace.mutter_callsite(1,
                "set_state_from_inventory called; please mutate the tree instead")
        self._read_dirblocks_if_needed()
        # sketch:
        # Two iterators: current data and new data, both in dirblock order.
        # We zip them together, which tells about entries that are new in the
        # inventory, or removed in the inventory, or present in both and
        # possibly changed.
        #
        # You might think we could just synthesize a new dirstate directly
        # since we're processing it in the right order. However, we need to
        # also consider there may be any number of parent trees and relocation
        # pointers, and we don't want to duplicate that here.
        new_iterator = new_inv.iter_entries_by_dir()
        # we will be modifying the dirstate, so we need a stable iterator. In
        # future we might write one, for now we just clone the state into a
        # list - which is a shallow copy.
        old_iterator = iter(list(self._iter_entries()))
        # both must have roots so this is safe:
        current_new = new_iterator.next()
        current_old = old_iterator.next()
        def advance(iterator):
            try:
                return iterator.next()
            except StopIteration:
                return None
        while current_new or current_old:
            # skip entries in old that are not really there
            if current_old and current_old[1][0][0] in ('r', 'a'):
                # relocated or absent
                current_old = advance(old_iterator)
                continue
            if current_new:
                # convert new into dirblock style
                new_path_utf8 = current_new[0].encode('utf8')
                new_dirname, new_basename = osutils.split(new_path_utf8)
                new_id = current_new[1].file_id
                new_entry_key = (new_dirname, new_basename, new_id)
                current_new_minikind = \
                    DirState._kind_to_minikind[current_new[1].kind]
                if current_new_minikind == 't':
                    fingerprint = current_new[1].reference_revision or ''
                else:
                    # We normally only insert or remove records, or update
                    # them when it has significantly changed. Then we want to
                    # erase its fingerprint. Unaffected records should
                    # normally not be updated at all.
                    fingerprint = ''
            else:
                # for safety disable variables
                new_path_utf8 = new_dirname = new_basename = new_id = \
                    new_entry_key = None
            # 5 cases; we don't have a value that is strictly greater than
            # everything, so we make both end conditions explicit
            if not current_old:
                # old is finished: insert current_new into the state.
                self.update_minimal(new_entry_key, current_new_minikind,
                    executable=current_new[1].executable,
                    path_utf8=new_path_utf8, fingerprint=fingerprint)
                current_new = advance(new_iterator)
            elif not current_new:
                # new is finished
                self._make_absent(current_old)
                current_old = advance(old_iterator)
            elif new_entry_key == current_old[0]:
                # same - common case
                # We're looking at the same path and id in both the dirstate
                # and inventory, so just need to update the fields in the
                # dirstate from the one in the inventory.
                # TODO: update the record if anything significant has changed.
                # the minimal required trigger is if the execute bit or cached
                # kind has changed.
                if (current_old[1][0][3] != current_new[1].executable or
                    current_old[1][0][0] != current_new_minikind):
                    self.update_minimal(current_old[0], current_new_minikind,
                        executable=current_new[1].executable,
                        path_utf8=new_path_utf8, fingerprint=fingerprint)
                # both sides are dealt with, move on
                current_old = advance(old_iterator)
                current_new = advance(new_iterator)
            elif (cmp_by_dirs(new_dirname, current_old[0][0]) < 0
                  or (new_dirname == current_old[0][0]
                      and new_entry_key[1:] < current_old[0][1:])):
                # new comes before the current old entry:
                # add an entry for this and advance new
                self.update_minimal(new_entry_key, current_new_minikind,
                    executable=current_new[1].executable,
                    path_utf8=new_path_utf8, fingerprint=fingerprint)
                current_new = advance(new_iterator)
            else:
                # we've advanced past the place where the old key would be,
                # without seeing it in the new list. so it must be gone.
                self._make_absent(current_old)
                current_old = advance(old_iterator)
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        self._id_index = None

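    # Descriptive note (added for clarity, not in the original source): the
    # loop above resolves one of five cases on each pass:
    #   1. old exhausted          -> insert the new entry
    #   2. new exhausted          -> mark the old entry absent
    #   3. same key on both sides -> update in place if kind/exec changed
    #   4. new key sorts first    -> insert the new entry, advance new
    #   5. old key sorts first    -> the old entry is gone, mark it absent
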
    def _make_absent(self, current_old):
        """Mark current_old - an entry - as absent for tree 0.

        :return: True if this was the last details entry for the entry key:
            that is, if the underlying block has had the entry removed, thus
            shrinking in length.
        """
        # build up paths that this id will be left at after the change is made,
        # so we can update their cross references in tree 0
        all_remaining_keys = set()
        # Don't check the working tree, because it's going.
        for details in current_old[1][1:]:
            if details[0] not in ('a', 'r'): # absent, relocated
                all_remaining_keys.add(current_old[0])
            elif details[0] == 'r': # relocated
                # record the key for the real path.
                all_remaining_keys.add(tuple(osutils.split(details[1])) + (current_old[0][2],))
            # absent rows are not present at any path.
        last_reference = current_old[0] not in all_remaining_keys
        if last_reference:
            # the current row consists entirely of the current item (being marked
            # absent), and relocated or absent entries for the other trees:
            # Remove it, it's meaningless.
            block = self._find_block(current_old[0])
            entry_index, present = self._find_entry_index(current_old[0], block[1])
            assert present, 'could not find entry for %s' % (current_old,)
            block[1].pop(entry_index)
            # if we have an id_index in use, remove this key from it for this id.
            if self._id_index is not None:
                self._id_index[current_old[0][2]].remove(current_old[0])
        # update all remaining keys for this id to record it as absent. The
        # existing details may either be the record we are making as deleted
        # (if there were other trees with the id present at this path), or may
        # be relocations.
        for update_key in all_remaining_keys:
            update_block_index, present = \
                self._find_block_index_from_key(update_key)
            assert present, 'could not find block for %s' % (update_key,)
            update_entry_index, present = \
                self._find_entry_index(update_key, self._dirblocks[update_block_index][1])
            assert present, 'could not find entry for %s' % (update_key,)
            update_tree_details = self._dirblocks[update_block_index][1][update_entry_index][1]
            # it must not be absent at the moment
            assert update_tree_details[0][0] != 'a' # absent
            update_tree_details[0] = DirState.NULL_PARENT_DETAILS
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        return last_reference

    def update_minimal(self, key, minikind, executable=False, fingerprint='',
                       packed_stat=None, size=0, path_utf8=None):
        """Update an entry to the state in tree 0.

        This will either create a new entry at 'key' or update an existing one.
        It also makes sure that any other records which might mention this are
        updated as well.

        :param key: (dir, name, file_id) for the new entry
        :param minikind: The type for the entry ('f' == 'file', 'd' ==
            'directory'), etc.
        :param executable: Should the executable bit be set?
        :param fingerprint: Simple fingerprint for new entry: sha1 for files,
            referenced revision id for subtrees, etc.
        :param packed_stat: Packed stat value for new entry.
        :param size: Size information for new entry
        :param path_utf8: key[0] + '/' + key[1], just passed in to avoid doing
            the path join again.

        If packed_stat and fingerprint are not given, they're invalidated in
        the entry.
        """
        block = self._find_block(key)[1]
        if packed_stat is None:
            packed_stat = DirState.NULLSTAT
        # XXX: Some callers pass '' as the packed_stat, and it seems to be
        # sometimes present in the dirstate - this seems oddly inconsistent.
        entry_index, present = self._find_entry_index(key, block)
        new_details = (minikind, fingerprint, size, executable, packed_stat)
        id_index = self._get_id_index()
        if not present:
            # new entry, synthesize cross references here,
            existing_keys = id_index.setdefault(key[2], set())
            if not existing_keys:
                # not currently in the state, simplest case
                new_entry = key, [new_details] + self._empty_parent_info()
            else:
                # present at one or more existing other paths.
                # grab one of them and use it to generate parent
                # relocation/absent entries.
                new_entry = key, [new_details]
                for other_key in existing_keys:
                    # change the record at other to be a pointer to this new
                    # record. The loop looks similar to the change to
                    # relocations when updating an existing record but it's not:
                    # the test for existing kinds is different: this can be
                    # factored out to a helper though.
                    other_block_index, present = self._find_block_index_from_key(other_key)
                    assert present, 'could not find block for %s' % (other_key,)
                    other_entry_index, present = self._find_entry_index(other_key,
                        self._dirblocks[other_block_index][1])
                    assert present, 'could not find entry for %s' % (other_key,)
                    assert path_utf8 is not None
                    self._dirblocks[other_block_index][1][other_entry_index][1][0] = \
                        ('r', path_utf8, 0, False, '')

                num_present_parents = self._num_present_parents()
                for lookup_index in xrange(1, num_present_parents + 1):
                    # grab any one entry, use it to find the right path.
                    # TODO: optimise this to reduce memory use in highly
                    # fragmented situations by reusing the relocation
                    # records.
                    update_block_index, present = \
                        self._find_block_index_from_key(other_key)
                    assert present, 'could not find block for %s' % (other_key,)
                    update_entry_index, present = \
                        self._find_entry_index(other_key, self._dirblocks[update_block_index][1])
                    assert present, 'could not find entry for %s' % (other_key,)
                    update_details = self._dirblocks[update_block_index][1][update_entry_index][1][lookup_index]
                    if update_details[0] in ('r', 'a'): # relocated, absent
                        # it's a pointer or absent in lookup_index's tree, use
                        # it as is.
                        new_entry[1].append(update_details)
                    else:
                        # we have the right key, make a pointer to it.
                        pointer_path = osutils.pathjoin(*other_key[0:2])
                        new_entry[1].append(('r', pointer_path, 0, False, ''))
            block.insert(entry_index, new_entry)
            existing_keys.add(key)
        else:
            # Does the new state matter?
            block[entry_index][1][0] = new_details
            # parents cannot be affected by what we do.
            # other occurrences of this id can be found
            # from the id index.
            # tree index consistency: All other paths for this id in this tree
            # index must point to the correct path. We have to loop here because
            # we may have passed entries in the state with this file id already
            # that were absent - where parent entries are - and they need to be
            # converted to relocated.
            assert path_utf8 is not None
            for entry_key in id_index.setdefault(key[2], set()):
                # TODO:PROFILING: It might be faster to just update
                # rather than checking if we need to, and then overwrite
                # the one we are located at.
                if entry_key != key:
                    # this file id is at a different path in one of the
                    # other trees, so put absent pointers there
                    # This is the vertical axis in the matrix, all pointing
                    # to the real path.
                    block_index, present = self._find_block_index_from_key(entry_key)
                    assert present, 'could not find block for %s' % (entry_key,)
                    entry_index, present = self._find_entry_index(entry_key, self._dirblocks[block_index][1])
                    assert present, 'could not find entry for %s' % (entry_key,)
                    self._dirblocks[block_index][1][entry_index][1][0] = \
                        ('r', path_utf8, 0, False, '')
        # add a containing dirblock if needed.
        if new_details[0] == 'd':
            subdir_key = (osutils.pathjoin(*key[0:2]), '', '')
            block_index, present = self._find_block_index_from_key(subdir_key)
            if not present:
                self._dirblocks.insert(block_index, (subdir_key[0], []))

        self._dirblock_state = DirState.IN_MEMORY_MODIFIED

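    # Illustrative usage sketch (assumed example values, not from the
    # original source): recording a newly added file in tree 0 might look like
    #   state.update_minimal(('', 'README', 'readme-id'), 'f',
    #                        path_utf8='README',
    #                        fingerprint=sha1_of_content, size=content_length,
    #                        executable=False)
    # where sha1_of_content and content_length stand for values the caller
    # has already computed; omitting packed_stat leaves the stat invalidated
    # so the file is re-read on the next status pass.
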
    def _validate(self):
        """Check that invariants on the dirblock are correct.

        This can be useful in debugging; it shouldn't be necessary in
        normal code.

        This must be called with a lock held.
        """
        # NOTE: This must always raise AssertionError not just assert,
        # otherwise it may not behave properly under python -O
        #
        # TODO: All entries must have some content that's not 'a' or 'r',
        # otherwise it could just be removed.
        #
        # TODO: All relocations must point directly to a real entry.
        #
        # TODO: No repeated keys.
        from pprint import pformat
        self._read_dirblocks_if_needed()
        if len(self._dirblocks) > 0:
            if not self._dirblocks[0][0] == '':
                raise AssertionError(
                    "dirblocks don't start with root block:\n" + \
                    pformat(self._dirblocks))
        if len(self._dirblocks) > 1:
            if not self._dirblocks[1][0] == '':
                raise AssertionError(
                    "dirblocks missing root directory:\n" + \
                    pformat(self._dirblocks))
        # the dirblocks are sorted by their path components, name, and dir id
        dir_names = [d[0].split('/')
                for d in self._dirblocks[1:]]
        if dir_names != sorted(dir_names):
            raise AssertionError(
                "dir names are not in sorted order:\n" + \
                pformat(self._dirblocks) + \
                "\nkeys:\n" +
                pformat(dir_names))
        for dirblock in self._dirblocks:
            # within each dirblock, the entries are sorted by filename and
            # then by id.
            for entry in dirblock[1]:
                if dirblock[0] != entry[0][0]:
                    raise AssertionError(
                        "entry key for %r "
                        "doesn't match directory name in\n%r" %
                        (entry, pformat(dirblock)))
            if dirblock[1] != sorted(dirblock[1]):
                raise AssertionError(
                    "dirblock for %r is not sorted:\n%s" % \
                    (dirblock[0], pformat(dirblock)))

        def check_valid_parent():
            """Check that the current entry has a valid parent.

            This makes sure that the parent has a record,
            and that the parent isn't marked as "absent" in the
            current tree. (It is invalid to have a non-absent file in an absent
            directory.)
            """
            if entry[0][0:2] == ('', ''):
                # There should be no parent for the root row
                return
            parent_entry = self._get_entry(tree_index, path_utf8=entry[0][0])
            if parent_entry == (None, None):
                raise AssertionError(
                    "no parent entry for: %s in tree %s"
                    % (this_path, tree_index))
            if parent_entry[1][tree_index][0] != 'd':
                raise AssertionError(
                    "Parent entry for %s is not marked as a valid"
                    " directory. %s" % (this_path, parent_entry,))

        # For each file id, for each tree: either
        # the file id is not present at all; all rows with that id in the
        # key have it marked as 'absent'
        # OR the file id is present under exactly one name; any other entries
        # that mention that id point to the correct name.
        #
        # We check this with a dict per tree pointing either to the present
        # name, or None if absent.
        tree_count = self._num_present_parents() + 1
        id_path_maps = [dict() for i in range(tree_count)]
        # Make sure that all renamed entries point to the correct location.
        for entry in self._iter_entries():
            file_id = entry[0][2]
            this_path = osutils.pathjoin(entry[0][0], entry[0][1])
            if len(entry[1]) != tree_count:
                raise AssertionError(
                    "wrong number of entry details for row\n%s" \
                    ",\nexpected %d" % \
                    (pformat(entry), tree_count))
            for tree_index, tree_state in enumerate(entry[1]):
                this_tree_map = id_path_maps[tree_index]
                minikind = tree_state[0]
                # have we seen this id before in this column?
                if file_id in this_tree_map:
                    previous_path = this_tree_map[file_id]
                    # any later mention of this file must be consistent with
                    # what was said before
                    if minikind == 'a':
                        if previous_path is not None:
                            raise AssertionError(
                                "file %s is absent in row %r but also present " \
                                "at %r" % \
                                (file_id, entry, previous_path))
                    elif minikind == 'r':
                        target_location = tree_state[1]
                        if previous_path != target_location:
                            raise AssertionError(
                                "file %s relocation in row %r but also at %r" \
                                % (file_id, entry, previous_path))
                    else:
                        # a file, directory, etc - may have been previously
                        # pointed to by a relocation, which must point here
                        if previous_path != this_path:
                            raise AssertionError(
                                "entry %r inconsistent with previous path %r" % \
                                (entry, previous_path))
                        check_valid_parent()
                else:
                    if minikind == 'a':
                        # absent; should not occur anywhere else
                        this_tree_map[file_id] = None
                    elif minikind == 'r':
                        # relocation, must occur at expected location
                        this_tree_map[file_id] = tree_state[1]
                    else:
                        this_tree_map[file_id] = this_path
                        check_valid_parent()

    def _wipe_state(self):
        """Forget all state information about the dirstate."""
        self._header_state = DirState.NOT_IN_MEMORY
        self._dirblock_state = DirState.NOT_IN_MEMORY
        self._parents = []
        self._ghosts = []
        self._dirblocks = []
        self._id_index = None
        self._end_of_header = None
        self._cutoff_time = None
        self._split_path_cache = {}

    def lock_read(self):
        """Acquire a read lock on the dirstate."""
        if self._lock_token is not None:
            raise errors.LockContention(self._lock_token)
        # TODO: jam 20070301 Rather than wiping completely, if the blocks are
        #       already in memory, we could read just the header and check for
        #       any modification. If not modified, we can just leave things
        #       alone.
        self._lock_token = lock.ReadLock(self._filename)
        self._lock_state = 'r'
        self._state_file = self._lock_token.f
        self._wipe_state()

    def lock_write(self):
        """Acquire a write lock on the dirstate."""
        if self._lock_token is not None:
            raise errors.LockContention(self._lock_token)
        # TODO: jam 20070301 Rather than wiping completely, if the blocks are
        #       already in memory, we could read just the header and check for
        #       any modification. If not modified, we can just leave things
        #       alone.
        self._lock_token = lock.WriteLock(self._filename)
        self._lock_state = 'w'
        self._state_file = self._lock_token.f
        self._wipe_state()

    def unlock(self):
        """Drop any locks held on the dirstate."""
        if self._lock_token is None:
            raise errors.LockNotHeld(self)
        # TODO: jam 20070301 Rather than wiping completely, if the blocks are
        #       already in memory, we could read just the header and check for
        #       any modification. If not modified, we can just leave things
        #       alone.
        self._state_file = None
        self._lock_state = None
        self._lock_token.unlock()
        self._lock_token = None
        self._split_path_cache = {}

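    # Illustrative usage sketch (assumed example, not from the original
    # source): callers are expected to pair the lock calls explicitly, e.g.
    #   state = DirState.on_file('dirstate')
    #   state.lock_write()
    #   try:
    #       state.update_minimal(...)
    #       state.save()
    #   finally:
    #       state.unlock()
    # lock_read() follows the same pattern for read-only access.
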
    def _requires_lock(self):
        """Check that a lock is currently held by someone on the dirstate."""
        if not self._lock_token:
            raise errors.ObjectNotLocked(self)


# Try to load the compiled form if possible
try:
    from bzrlib._dirstate_helpers_c import (
        _read_dirblocks_c as _read_dirblocks,
        bisect_dirblock_c as bisect_dirblock,
        _bisect_path_left_c as _bisect_path_left,
        _bisect_path_right_c as _bisect_path_right,
        cmp_by_dirs_c as cmp_by_dirs,
        )
except ImportError:
    from bzrlib._dirstate_helpers_py import (
        _read_dirblocks_py as _read_dirblocks,
        bisect_dirblock_py as bisect_dirblock,
        _bisect_path_left_py as _bisect_path_left,
        _bisect_path_right_py as _bisect_path_right,
        cmp_by_dirs_py as cmp_by_dirs,
        )