# Copyright (C) 2006, 2007, 2008 Canonical Ltd
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
"""DirState objects record the state of a directory and its bzr metadata.
19
Pseudo EBNF grammar for the state file. Fields are separated by NULLs, and
20
lines by NL. The field delimiters are ommitted in the grammar, line delimiters
21
are not - this is done for clarity of reading. All string data is in utf8.
23
MINIKIND = "f" | "d" | "l" | "a" | "r" | "t";
26
WHOLE_NUMBER = {digit}, digit;
28
REVISION_ID = a non-empty utf8 string;
30
dirstate format = header line, full checksum, row count, parent details,
31
ghost_details, entries;
32
header line = "#bazaar dirstate flat format 3", NL;
33
full checksum = "crc32: ", ["-"], WHOLE_NUMBER, NL;
34
row count = "num_entries: ", WHOLE_NUMBER, NL;
35
parent_details = WHOLE NUMBER, {REVISION_ID}* NL;
36
ghost_details = WHOLE NUMBER, {REVISION_ID}*, NL;
38
entry = entry_key, current_entry_details, {parent_entry_details};
39
entry_key = dirname, basename, fileid;
40
current_entry_details = common_entry_details, working_entry_details;
41
parent_entry_details = common_entry_details, history_entry_details;
42
common_entry_details = MINIKIND, fingerprint, size, executable
43
working_entry_details = packed_stat
44
history_entry_details = REVISION_ID;
47
fingerprint = a nonempty utf8 sequence with meaning defined by minikind.
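
For example, the preamble of a state file for a tree with no parents and no
ghosts might render as follows (the checksum value is invented for the
example):

    #bazaar dirstate flat format 3
    crc32: 505929835
    num_entries: 1
    0
    0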

Given this definition, the following is useful to know:
entry (aka row) - all the data for a given key.
entry[0]: The key (dirname, basename, fileid)
entry[0][0]: dirname
entry[0][1]: basename
entry[0][2]: fileid
entry[1]: The tree(s) data for this path and id combination.
entry[1][0]: The current tree
entry[1][1]: The second tree

For an entry for a tree, we have (using tree 0 - current tree) to demonstrate:
entry[1][0][0]: minikind
entry[1][0][1]: fingerprint
entry[1][0][2]: size
entry[1][0][3]: executable
entry[1][0][4]: packed_stat
OR (for non tree-0):
entry[1][1][4]: revision_id
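
An illustrative entry for a file at 'dir/name' with one parent tree (all
concrete values invented):

    ('dir', 'name', 'file-id'), [
      ('f', '<sha1>', 120, False, '<packed-stat>'),   # current tree details
      ('f', '<sha1>', 120, False, '<revision-id>'),   # parent 1 details
      ]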

There may be multiple rows at the root, one per id present in the root, so the
in memory root row is now:
self._dirblocks[0] -> ('', [entry ...]),
and the entries in there are
entries[0][0]: ''
entries[0][1]: ''
entries[0][2]: file_id
entries[1][0]: The tree data for the current tree for this fileid at /

'r' is a relocated entry: This path is not present in this tree with this id,
 but the id can be found at another location. The fingerprint is used to
 point to the target location. For example, an entry keyed ('', 'oldname', id)
 with minikind 'r' and fingerprint 'dir/newname' records that the id is now
 versioned at dir/newname.
'a' is an absent entry: In that tree the id is not present at this path.
'd' is a directory entry: This path in this tree is a directory with the
 current file id. There is no fingerprint for directories.
'f' is a file entry: As for directory, but it's a file. The fingerprint is
 its sha1 value.
'l' is a symlink entry: As for directory, but a symlink. The fingerprint is
 the link target.
't' is a reference to a nested subtree; the fingerprint is the referenced
 revision.

The entries on disk and in memory are ordered according to the following keys:

    directory, as a list of components
    filename
    file-id

--- Format 1 had the following different definition: ---
rows = dirname, NULL, basename, NULL, MINIKIND, NULL, fileid_utf8, NULL,
    WHOLE_NUMBER (* size *), NULL, packed stat, NULL, sha1|symlink target,
    {PARENT ROW}
PARENT ROW = NULL, revision_utf8, NULL, MINIKIND, NULL, dirname, NULL,
    basename, NULL, WHOLE_NUMBER (* size *), NULL, "y" | "n", NULL,
    SHA1

PARENT ROWs are emitted for every parent that is not in the ghosts details
line. That is, if the parents are foo, bar, baz, and the ghosts are bar, then
each row will have a PARENT ROW for foo and baz, but not for bar.

In any tree, a kind of 'moved' indicates that the fingerprint field
(which we treat as opaque data specific to the 'kind' anyway) has the
details for the id of this row in that tree.

I'm strongly tempted to add an id->path index as well, but I think that
where we need an id->path mapping we also usually read the whole file, so
I'm going to skip that for the moment, as we have the ability to locate
via bisect any path in any tree, and if we look things up by path, we can
accumulate an id->path mapping as we go, which will tend to match what we
needed.

I plan to implement this asap, so please speak up now to alter/tweak the
design - and once we stabilise on this, I'll update the wiki page for
it.

The rationale for all this is that we want fast operations for the
common case (diff/status/commit/merge on all files) and extremely fast
operations for the less common but still frequent case (status/diff/commit
on specific files). Operations on specific files involve a scan for all
the children of a path, *in every involved tree*, which the current
format did not accommodate.

Design priorities:
 1) Fast end to end use for bzr's top 5 use cases. (commit/diff/status/merge/???)
 2) fall back to the current object model as needed.
 3) scale usably to the largest trees known today - say 50K entries. (mozilla
    is an example of this)

Locking:

Eventually reuse dirstate objects across locks IFF the dirstate file has not
been modified, but this will require that we flush/ignore cached stat-hit data
because we won't want to restat all files on disk just because a lock was
acquired, yet we cannot trust the data after the previous lock was released.

Memory representation:
 vector of all directories, and vector of the children in each directory
 root_entry = (direntry for root, [parent_direntries_for_root]),
 dirblocks = [
     ('', ['data for achild', 'data for bchild', 'data for cchild'])
     ('dir', ['achild', 'cchild', 'echild'])
     ]
 - single bisect to find N subtrees from a path spec
 - in-order for serialisation - this is 'dirblock' grouping.
 - insertion of a file '/a' affects only the '/' child-vector, that is, to
   insert 10K elements from scratch does not generate O(N^2) memmoves of a
   single vector; rather each individual directory's vector is touched, and
   those tend to stay a manageable size. Will scale badly on trees with 10K
   entries in a single directory. Compare with Inventory.InventoryDirectory
   which has a dictionary for the children. No bisect capability, can only
   probe for exact matches, or grab all elements and sort.
 - What's the risk of error here? Once we have the base format being processed
   we should have a net win regardless of optimality. So we are going to
   go with what seems reasonable.

Maybe we should do a test profile of the core structure - 10K simulated
searches/lookups/etc?

Objects for each row?
The lifetime of Dirstate objects is currently per lock, but see above for
possible extensions. The lifetime of a row from a dirstate is expected to be
very short in the optimistic case: which we are optimising for. For instance,
subtree status will determine from analysis of the disk data what rows need to
be examined at all, and will be able to determine from a single row whether
that file has altered or not, so we are aiming to process tens of thousands of
entries each second within the dirstate context, before exposing anything to
the larger codebase. This suggests we want the time for a single file
comparison to be < 0.1 milliseconds. That would give us 10000 paths per second
processed, and to scale to 100 thousand we'll need another order of magnitude
to do that. Now, as the lifetime for all unchanged entries is the time to
parse, stat the file on disk, and then immediately discard, the overhead of
object creation becomes a significant cost.

Figures: Creating a tuple from 3 elements was profiled at 0.0625
microseconds, whereas creating an object which is subclassed from tuple was
0.500 microseconds, and creating an object with 3 elements and slots was 3
microseconds. 0.1 milliseconds is 100 microseconds, and ideally we'll get
down to 10 microseconds for the total processing - having 33% of that be object
creation is a huge overhead. There is a potential cost in using tuples within
each row which is that the conditional code to do comparisons may be slower
than method invocation, but method invocation is known to be slow due to stack
frame creation, so avoiding methods in these tight inner loops is unfortunately
desirable. We can consider a pyrex version of this with objects in future if
needed.
"""

import bisect
import binascii
import os
import stat
from stat import S_IEXEC
import struct
import sys
import time

from bzrlib import (
    cache_utf8,
    debug,
    errors,
    osutils,
    trace,
    )


def pack_stat(st, _encode=binascii.b2a_base64, _pack=struct.pack):
    """Convert stat values into a packed representation."""
    # jam 20060614 it isn't really worth removing more entries if we
    # are going to leave it in packed form.
    # With only st_mtime and st_mode filesize is 5.5M and read time is 275ms
    # With all entries, filesize is 5.9M and read time is maybe 280ms
    # well within the noise margin

    # base64 encoding always adds a final newline, so strip it off
    # The current version
    return _encode(_pack('>LLLLLL'
        , st.st_size, int(st.st_mtime), int(st.st_ctime)
        , st.st_dev, st.st_ino & 0xFFFFFFFF, st.st_mode))[:-1]
    # This is 0.060s / 1.520s faster by not encoding as much information
    # return _encode(_pack('>LL', int(st.st_mtime), st.st_mode))[:-1]
    # This is not strictly faster than _encode(_pack())[:-1]
    # return '%X.%X.%X.%X.%X.%X' % (
    #      st.st_size, int(st.st_mtime), int(st.st_ctime),
    #      st.st_dev, st.st_ino, st.st_mode)
    # Similar to the _encode(_pack('>LL'))
    # return '%X.%X' % (int(st.st_mtime), st.st_mode)
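
# A usage sketch (illustrative only): the six '>LLLLLL' fields pack to
# 24 bytes, so the base64 form is a fixed 32-character string once the
# trailing newline is stripped.
#
#   st = os.lstat('some-file')   # 'some-file' is a hypothetical path
#   packed = pack_stat(st)       # 32-char fingerprint of the stat data
#
# Two stat results pack to equal strings exactly when all six sampled
# fields match, which is what lets the dirstate use packed_stat equality
# as its "has this file changed?" test.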


class DirState(object):
    """Record directory and metadata state for fast access.

    A dirstate is a specialised data structure for managing local working
    tree state information. It's not yet well defined whether it is platform
    specific, and if it is, how we detect/parameterize that.

    Dirstates use the usual lock_write, lock_read and unlock mechanisms.
    Unlike most bzr disk formats, DirStates must be locked for reading, using
    lock_read. (This is an os file lock internally.) This is necessary
    because the file can be rewritten in place.

    DirStates must be explicitly written with save() to commit changes; just
    unlocking them does not write the changes to disk.
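
    A minimal usage sketch ('foo', 'foo-id' and foo_sha1 are invented for
    the example; initialize() creates a new, write-locked state file):

        state = DirState.initialize('dirstate')
        try:
            state.add('foo', 'foo-id', 'file', os.lstat('foo'), foo_sha1)
            state.save()
        finally:
            state.unlock()
    """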

    _kind_to_minikind = {
            'absent': 'a',
            'file': 'f',
            'directory': 'd',
            'relocated': 'r',
            'symlink': 'l',
            'tree-reference': 't',
        }
    _minikind_to_kind = {
            'a': 'absent',
            'f': 'file',
            'd': 'directory',
            'r': 'relocated',
            'l': 'symlink',
            't': 'tree-reference',
        }
    _stat_to_minikind = {
        stat.S_IFDIR: 'd',
        stat.S_IFREG: 'f',
        stat.S_IFLNK: 'l',
    }
    _to_yesno = {True: 'y', False: 'n'}  # TODO profile the performance gain
    # of using int conversion rather than a dict here. AND BLAME ANDREW IF

    # TODO: jam 20070221 Figure out what to do if we have a record that exceeds
    #       the BISECT_PAGE_SIZE. For now, we just have to make it large enough
    #       that we are sure a single record will always fit.
    BISECT_PAGE_SIZE = 4096

    NOT_IN_MEMORY = 0
    IN_MEMORY_UNMODIFIED = 1
    IN_MEMORY_MODIFIED = 2

    # A pack_stat (the x's) that is just noise and will never match the output
    # of base64 encode.
    NULLSTAT = 'x' * 32
    NULL_PARENT_DETAILS = ('a', '', 0, False, '')

    HEADER_FORMAT_2 = '#bazaar dirstate flat format 2\n'
    HEADER_FORMAT_3 = '#bazaar dirstate flat format 3\n'

    def __init__(self, path):
        """Create a DirState object.

        :param path: The path at which the dirstate file on disk should live.
        """
        # _header_state and _dirblock_state represent the current state
        # of the dirstate metadata and the per-row data respectively.
        # NOT_IN_MEMORY indicates that no data is in memory
        # IN_MEMORY_UNMODIFIED indicates that what we have in memory
        #   is the same as is on disk
        # IN_MEMORY_MODIFIED indicates that we have a modified version
        #   of what is on disk.
        # In future we will add more granularity, for instance _dirblock_state
        # will probably support partially-in-memory as a separate variable,
        # allowing for partially-in-memory unmodified and partially-in-memory
        # modified states.
        self._header_state = DirState.NOT_IN_MEMORY
        self._dirblock_state = DirState.NOT_IN_MEMORY
        # If true, an error has been detected while updating the dirstate, and
        # for safety we're not going to commit to disk.
        self._changes_aborted = False
        self._dirblocks = []
        self._ghosts = []
        self._parents = []
        self._state_file = None
        self._filename = path
        self._lock_token = None
        self._lock_state = None
        self._id_index = None
        # a map from packed_stat to sha's.
        self._packed_stat_index = None
        self._end_of_header = None
        self._cutoff_time = None
        self._split_path_cache = {}
        self._bisect_page_size = DirState.BISECT_PAGE_SIZE
        if 'hashcache' in debug.debug_flags:
            self._sha1_file = self._sha1_file_and_mutter
        else:
            self._sha1_file = osutils.sha_file_by_name
        # These two attributes provide a simple cache for lookups into the
        # dirstate in-memory vectors. By probing respectively for the last
        # block, and for the next entry, we save nearly 2 bisections per path
        # during commit.
        self._last_block_index = None
        self._last_entry_index = None

    def __repr__(self):
        return "%s(%r)" % \
            (self.__class__.__name__, self._filename)

    def add(self, path, file_id, kind, stat, fingerprint):
        """Add a path to be tracked.

        :param path: The path within the dirstate - '' is the root, 'foo' is the
            path foo within the root, 'foo/bar' is the path bar within foo
            within the root.
        :param file_id: The file id of the path being added.
        :param kind: The kind of the path, as a string like 'file',
            'directory', etc.
        :param stat: The output of os.lstat for the path.
        :param fingerprint: The sha value of the file,
            or the target of a symlink,
            or the referenced revision id for tree-references,
            or '' for directories.
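
        Illustrative call (all values invented), assuming 'foo' is already
        versioned:

            state.add('foo/bar.py', 'bar-id', 'file',
                      os.lstat('foo/bar.py'), bar_sha1)
        """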
        # find the block it's in.
        # find the location in the block.
        # check it's not there.
        #------- copied from inventory.ensure_normalized_name - keep synced.
        # --- normalized_filename wants a unicode basename only, so get one.
        dirname, basename = osutils.split(path)
        # we dont import normalized_filename directly because we want to be
        # able to change the implementation at runtime for tests.
        norm_name, can_access = osutils.normalized_filename(basename)
        if norm_name != basename:
            if can_access:
                basename = norm_name
            else:
                raise errors.InvalidNormalization(path)
        # you should never have files called . or ..; just add the directory
        # in the parent, or according to the special treatment for the root
        if basename == '.' or basename == '..':
            raise errors.InvalidEntryName(path)
        # now that we've normalised, we need the correct utf8 path and
        # dirname and basename elements. This single encode and split should be
        # faster than three separate encodes.
        utf8path = (dirname + '/' + basename).strip('/').encode('utf8')
        dirname, basename = osutils.split(utf8path)
        if file_id.__class__ != str:
            raise AssertionError(
                "must be a utf8 file_id not %s" % (type(file_id), ))
        # Make sure the file_id does not exist in this tree
        file_id_entry = self._get_entry(0, fileid_utf8=file_id)
        if file_id_entry != (None, None):
            path = osutils.pathjoin(file_id_entry[0][0], file_id_entry[0][1])
            kind = DirState._minikind_to_kind[file_id_entry[1][0][0]]
            info = '%s:%s' % (kind, path)
            raise errors.DuplicateFileId(file_id, info)
        first_key = (dirname, basename, '')
        block_index, present = self._find_block_index_from_key(first_key)
        if present:
            # check the path is not in the tree
            block = self._dirblocks[block_index][1]
            entry_index, _ = self._find_entry_index(first_key, block)
            while (entry_index < len(block) and
                block[entry_index][0][0:2] == first_key[0:2]):
                if block[entry_index][1][0][0] not in 'ar':
                    # this path is in the dirstate in the current tree.
                    raise Exception("adding already added path!")
                entry_index += 1
        else:
            # The block where we want to put the file is not present. But it
            # might be because the directory was empty, or not loaded yet. Look
            # for a parent entry, if not found, raise NotVersionedError
            parent_dir, parent_base = osutils.split(dirname)
            parent_block_idx, parent_entry_idx, _, parent_present = \
                self._get_block_entry_index(parent_dir, parent_base, 0)
            if not parent_present:
                raise errors.NotVersionedError(path, str(self))
            self._ensure_block(parent_block_idx, parent_entry_idx, dirname)
        block = self._dirblocks[block_index][1]
        entry_key = (dirname, basename, file_id)
        if stat is None:
            size = 0
            packed_stat = DirState.NULLSTAT
        else:
            size = stat.st_size
            packed_stat = pack_stat(stat)
        parent_info = self._empty_parent_info()
        minikind = DirState._kind_to_minikind[kind]
        if kind == 'file':
            entry_data = entry_key, [
                (minikind, fingerprint, size, False, packed_stat),
                ] + parent_info
        elif kind == 'directory':
            entry_data = entry_key, [
                (minikind, '', 0, False, packed_stat),
                ] + parent_info
        elif kind == 'symlink':
            entry_data = entry_key, [
                (minikind, fingerprint, size, False, packed_stat),
                ] + parent_info
        elif kind == 'tree-reference':
            entry_data = entry_key, [
                (minikind, fingerprint, 0, False, packed_stat),
                ] + parent_info
        else:
            raise errors.BzrError('unknown kind %r' % kind)
        entry_index, present = self._find_entry_index(entry_key, block)
        if not present:
            block.insert(entry_index, entry_data)
        else:
            if block[entry_index][1][0][0] != 'a':
                raise AssertionError(" %r(%r) already added" % (basename, file_id))
            block[entry_index][1][0] = entry_data[1][0]

        if kind == 'directory':
            # insert a new dirblock
            self._ensure_block(block_index, entry_index, utf8path)
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        if self._id_index:
            self._id_index.setdefault(entry_key[2], set()).add(entry_key)

    def _bisect(self, paths):
        """Bisect through the disk structure for specific rows.

        :param paths: A list of paths to find
        :return: A dict mapping path => entries for found entries. Missing
            entries will not be in the map.
            The list is not sorted, and entries will be populated
            based on when they were read.
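
        Illustrative result shape (values invented): _bisect(['a/b', 'f'])
        might return {'a/b': [entry]}, where entry is an entry tuple as
        described in the module docstring, with 'f' absent from the map
        because no such row exists on disk.
        """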
        self._requires_lock()
        # We need the file pointer to be right after the initial header block
        self._read_header_if_needed()
        # If _dirblock_state was in memory, we should just return info from
        # there, this function is only meant to handle when we want to read
        # from disk.
        if self._dirblock_state != DirState.NOT_IN_MEMORY:
            raise AssertionError("bad dirblock state %r" % self._dirblock_state)

        # The disk representation is generally info + '\0\n\0' at the end. But
        # for bisecting, it is easier to treat this as '\0' + info + '\0\n'
        # Because it means we can sync on the '\n'
        state_file = self._state_file
        file_size = os.fstat(state_file.fileno()).st_size
        # We end up with 2 extra fields, we should have a trailing '\n' to
        # ensure that we read the whole record, and we should have a precursor
        # '' which ensures that we start after the previous '\n'
        entry_field_count = self._fields_per_entry() + 1

        low = self._end_of_header
        high = file_size - 1  # Ignore the final '\0'
        # Map from (dir, name) => entry
        found = {}

        # Avoid infinite seeking
        max_count = 30*len(paths)
        count = 0
        # pending is a list of places to look.
        # each entry is a tuple of low, high, dir_names
        #   low -> the first byte offset to read (inclusive)
        #   high -> the last byte offset (inclusive)
        #   dir_names -> The list of (dir, name) pairs that should be found in
        #                the [low, high] range
        pending = [(low, high, paths)]

        page_size = self._bisect_page_size

        fields_to_entry = self._get_fields_to_entry()

        while pending:
            low, high, cur_files = pending.pop()

            if not cur_files or low >= high:
                # Nothing to find
                continue

            count += 1
            if count > max_count:
                raise errors.BzrError('Too many seeks, most likely a bug.')

            mid = max(low, (low+high-page_size)/2)

            state_file.seek(mid)
            # limit the read size, so we don't end up reading data that we have
            # already read.
            read_size = min(page_size, (high-mid)+1)
            block = state_file.read(read_size)

            start = mid
            entries = block.split('\n')

            if len(entries) < 2:
                # We didn't find a '\n', so we cannot have found any records.
                # So put this range back and try again. But we know we have to
                # increase the page size, because a single read did not contain
                # a record break (so records must be larger than page_size)
                page_size *= 2
                pending.append((low, high, cur_files))
                continue

            # Check the first and last entries, in case they are partial, or if
            # we don't care about the rest of this page
            first_entry_num = 0
            first_fields = entries[0].split('\0')
            if len(first_fields) < entry_field_count:
                # We didn't get the complete first entry
                # so move start, and grab the next, which
                # should be a full entry
                start += len(entries[0])+1
                first_fields = entries[1].split('\0')
                first_entry_num = 1

            if len(first_fields) <= 2:
                # We didn't even get a filename here... what do we do?
                # Try a large page size and repeat this query
                page_size *= 2
                pending.append((low, high, cur_files))
                continue
            else:
                # Find what entries we are looking for, which occur before and
                # after this first record.
                after = start
                if first_fields[1]:
                    first_path = first_fields[1] + '/' + first_fields[2]
                else:
                    first_path = first_fields[2]
                first_loc = _bisect_path_left(cur_files, first_path)

                # These exist before the current location
                pre = cur_files[:first_loc]
                # These occur after the current location, which may be in the
                # data we read, or might be after the last entry
                post = cur_files[first_loc:]

            if post and len(first_fields) >= entry_field_count:
                # We have files after the first entry

                # Parse the last entry
                last_entry_num = len(entries)-1
                last_fields = entries[last_entry_num].split('\0')
                if len(last_fields) < entry_field_count:
                    # The very last hunk was not complete,
                    # read the previous hunk
                    after = mid + len(block) - len(entries[-1])
                    last_entry_num -= 1
                    last_fields = entries[last_entry_num].split('\0')
                else:
                    after = mid + len(block)

                if last_fields[1]:
                    last_path = last_fields[1] + '/' + last_fields[2]
                else:
                    last_path = last_fields[2]
                last_loc = _bisect_path_right(post, last_path)

                middle_files = post[:last_loc]
                post = post[last_loc:]

                if middle_files:
                    # We have files that should occur in this block
                    # (>= first, <= last)
                    # Either we will find them here, or we can mark them as
                    # missing.

                    if middle_files[0] == first_path:
                        # We might need to go before this location
                        pre.append(first_path)
                    if middle_files[-1] == last_path:
                        post.insert(0, last_path)

                    # Find out what paths we have
                    paths = {first_path: [first_fields]}
                    # last_path might == first_path so we need to be
                    # careful if we should append rather than overwrite
                    if last_entry_num != first_entry_num:
                        paths.setdefault(last_path, []).append(last_fields)
                    for num in xrange(first_entry_num+1, last_entry_num):
                        # TODO: jam 20070223 We are already splitting here, so
                        #       shouldn't we just split the whole thing rather
                        #       than doing the split again in add_one_record?
                        fields = entries[num].split('\0')
                        if fields[1]:
                            path = fields[1] + '/' + fields[2]
                        else:
                            path = fields[2]
                        paths.setdefault(path, []).append(fields)

                    for path in middle_files:
                        for fields in paths.get(path, []):
                            # offset by 1 because of the opening '\0'
                            # consider changing fields_to_entry to avoid the
                            # extra blank entry
                            entry = fields_to_entry(fields[1:])
                            found.setdefault(path, []).append(entry)

            # Now we have split up everything into pre, middle, and post, and
            # we have handled everything that fell in 'middle'.
            # We add 'post' first, so that we prefer to seek towards the
            # beginning, so that we will tend to go as early as we need, and
            # then only seek forward after that.
            if post:
                pending.append((after, high, post))
            if pre:
                pending.append((low, start-1, pre))

        # Consider that we may want to return the directory entries in sorted
        # order. For now, we just return them in whatever order we found them,
        # and leave it up to the caller if they care if it is ordered or not.
        return found

    def _bisect_dirblocks(self, dir_list):
        """Bisect through the disk structure to find entries in given dirs.

        _bisect_dirblocks is meant to find the contents of directories, which
        differs from _bisect, which only finds individual entries.

        :param dir_list: A sorted list of directory names ['', 'dir', 'foo'].
        :return: A map from dir => entries_for_dir
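
        Illustrative result shape (values invented):
        _bisect_dirblocks(['dir']) might return {'dir': [entry, ...]},
        one entry per row whose dirname is exactly 'dir'.
        """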
        # TODO: jam 20070223 A lot of the bisecting logic could be shared
        #       between this and _bisect. It would require parameterizing the
        #       inner loop with a function, though. We should evaluate the
        #       performance difference.
        self._requires_lock()
        # We need the file pointer to be right after the initial header block
        self._read_header_if_needed()
        # If _dirblock_state was in memory, we should just return info from
        # there, this function is only meant to handle when we want to read
        # from disk.
        if self._dirblock_state != DirState.NOT_IN_MEMORY:
            raise AssertionError("bad dirblock state %r" % self._dirblock_state)
        # The disk representation is generally info + '\0\n\0' at the end. But
        # for bisecting, it is easier to treat this as '\0' + info + '\0\n'
        # Because it means we can sync on the '\n'
        state_file = self._state_file
        file_size = os.fstat(state_file.fileno()).st_size
        # We end up with 2 extra fields, we should have a trailing '\n' to
        # ensure that we read the whole record, and we should have a precursor
        # '' which ensures that we start after the previous '\n'
        entry_field_count = self._fields_per_entry() + 1

        low = self._end_of_header
        high = file_size - 1  # Ignore the final '\0'
        # Map from dir => entry
        found = {}

        # Avoid infinite seeking
        max_count = 30*len(dir_list)
        count = 0
        # pending is a list of places to look.
        # each entry is a tuple of low, high, dir_names
        #   low -> the first byte offset to read (inclusive)
        #   high -> the last byte offset (inclusive)
        #   dirs -> The list of directories that should be found in
        #           the [low, high] range
        pending = [(low, high, dir_list)]

        page_size = self._bisect_page_size

        fields_to_entry = self._get_fields_to_entry()

        while pending:
            low, high, cur_dirs = pending.pop()

            if not cur_dirs or low >= high:
                # Nothing to find
                continue

            count += 1
            if count > max_count:
                raise errors.BzrError('Too many seeks, most likely a bug.')

            mid = max(low, (low+high-page_size)/2)

            state_file.seek(mid)
            # limit the read size, so we don't end up reading data that we have
            # already read.
            read_size = min(page_size, (high-mid)+1)
            block = state_file.read(read_size)

            start = mid
            entries = block.split('\n')

            if len(entries) < 2:
                # We didn't find a '\n', so we cannot have found any records.
                # So put this range back and try again. But we know we have to
                # increase the page size, because a single read did not contain
                # a record break (so records must be larger than page_size)
                page_size *= 2
                pending.append((low, high, cur_dirs))
                continue

            # Check the first and last entries, in case they are partial, or if
            # we don't care about the rest of this page
            first_entry_num = 0
            first_fields = entries[0].split('\0')
            if len(first_fields) < entry_field_count:
                # We didn't get the complete first entry
                # so move start, and grab the next, which
                # should be a full entry
                start += len(entries[0])+1
                first_fields = entries[1].split('\0')
                first_entry_num = 1

            if len(first_fields) <= 1:
                # We didn't even get a dirname here... what do we do?
                # Try a large page size and repeat this query
                page_size *= 2
                pending.append((low, high, cur_dirs))
                continue
            else:
                # Find what entries we are looking for, which occur before and
                # after this first record.
                after = start
                first_dir = first_fields[1]
                first_loc = bisect.bisect_left(cur_dirs, first_dir)

                # These exist before the current location
                pre = cur_dirs[:first_loc]
                # These occur after the current location, which may be in the
                # data we read, or might be after the last entry
                post = cur_dirs[first_loc:]

            if post and len(first_fields) >= entry_field_count:
                # We have records to look at after the first entry

                # Parse the last entry
                last_entry_num = len(entries)-1
                last_fields = entries[last_entry_num].split('\0')
                if len(last_fields) < entry_field_count:
                    # The very last hunk was not complete,
                    # read the previous hunk
                    after = mid + len(block) - len(entries[-1])
                    last_entry_num -= 1
                    last_fields = entries[last_entry_num].split('\0')
                else:
                    after = mid + len(block)

                last_dir = last_fields[1]
                last_loc = bisect.bisect_right(post, last_dir)

                middle_files = post[:last_loc]
                post = post[last_loc:]

                if middle_files:
                    # We have files that should occur in this block
                    # (>= first, <= last)
                    # Either we will find them here, or we can mark them as
                    # missing.

                    if middle_files[0] == first_dir:
                        # We might need to go before this location
                        pre.append(first_dir)
                    if middle_files[-1] == last_dir:
                        post.insert(0, last_dir)

                    # Find out what paths we have
                    paths = {first_dir: [first_fields]}
                    # last_dir might == first_dir so we need to be
                    # careful if we should append rather than overwrite
                    if last_entry_num != first_entry_num:
                        paths.setdefault(last_dir, []).append(last_fields)
                    for num in xrange(first_entry_num+1, last_entry_num):
                        # TODO: jam 20070223 We are already splitting here, so
                        #       shouldn't we just split the whole thing rather
                        #       than doing the split again in add_one_record?
                        fields = entries[num].split('\0')
                        paths.setdefault(fields[1], []).append(fields)

                    for cur_dir in middle_files:
                        for fields in paths.get(cur_dir, []):
                            # offset by 1 because of the opening '\0'
                            # consider changing fields_to_entry to avoid the
                            # extra blank entry
                            entry = fields_to_entry(fields[1:])
                            found.setdefault(cur_dir, []).append(entry)

            # Now we have split up everything into pre, middle, and post, and
            # we have handled everything that fell in 'middle'.
            # We add 'post' first, so that we prefer to seek towards the
            # beginning, so that we will tend to go as early as we need, and
            # then only seek forward after that.
            if post:
                pending.append((after, high, post))
            if pre:
                pending.append((low, start-1, pre))

        return found

    def _bisect_recursive(self, paths):
        """Bisect for entries for all paths and their children.

        This will use bisect to find all records for the supplied paths. It
        will then continue to bisect for any records which are marked as
        directories. (and renames?)

        :param paths: A sorted list of (dir, name) pairs
            eg: [('', 'a'), ('', 'f'), ('a/b', 'c')]
        :return: A dictionary mapping (dir, name, file_id) => [tree_info]
        """
        # Map from (dir, name, file_id) => [tree_info]
        found = {}
        found_dir_names = set()

        # Directories that have been read
        processed_dirs = set()
        # Get the ball rolling with the first bisect for all entries.
        newly_found = self._bisect(paths)

        while newly_found:
            # Directories that need to be read
            pending_dirs = set()
            paths_to_search = set()
            for entry_list in newly_found.itervalues():
                for dir_name_id, trees_info in entry_list:
                    found[dir_name_id] = trees_info
                    found_dir_names.add(dir_name_id[:2])
                    for tree_info in trees_info:
                        minikind = tree_info[0]
                        if minikind == 'd':
                            # We already processed this one as a directory,
                            # we don't need to do the extra work again.
                            subdir, name, file_id = dir_name_id
                            path = osutils.pathjoin(subdir, name)
                            if path not in processed_dirs:
                                pending_dirs.add(path)
                        elif minikind == 'r':
                            # Rename, we need to directly search the target
                            # which is contained in the fingerprint column
                            dir_name = osutils.split(tree_info[1])
                            if dir_name[0] in pending_dirs:
                                # This entry will be found in the dir search
                                continue
                            if dir_name not in found_dir_names:
                                paths_to_search.add(tree_info[1])
            # Now we have a list of paths to look for directly, and
            # directory blocks that need to be read.
            # newly_found is mixing the keys between (dir, name) and path
            # entries, but that is okay, because we only really care about the
            # file_id.
            newly_found = self._bisect(sorted(paths_to_search))
            newly_found.update(self._bisect_dirblocks(sorted(pending_dirs)))
            processed_dirs.update(pending_dirs)
        return found

    def _discard_merge_parents(self):
        """Discard any parents trees beyond the first.

        Note that if this fails the dirstate is corrupted.

        After this function returns the dirstate contains 2 trees, neither of
        which are ghosted.
        """
        self._read_header_if_needed()
        parents = self.get_parent_ids()
        if len(parents) < 1:
            return
        # only require all dirblocks if we are doing a full-pass removal.
        self._read_dirblocks_if_needed()
        dead_patterns = set([('a', 'r'), ('a', 'a'), ('r', 'r'), ('r', 'a')])
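        # Each pattern pairs the minikinds of (tree 0, tree 1). An entry that
        # is absent ('a') or relocated ('r') in both remaining trees carries
        # no useful information once the merge parents are dropped, so it can
        # be deleted outright.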
        def iter_entries_removable():
            for block in self._dirblocks:
                deleted_positions = []
                for pos, entry in enumerate(block[1]):
                    yield entry
                    if (entry[1][0][0], entry[1][1][0]) in dead_patterns:
                        deleted_positions.append(pos)
                if deleted_positions:
                    if len(deleted_positions) == len(block[1]):
                        del block[1][:]
                    else:
                        for pos in reversed(deleted_positions):
                            del block[1][pos]
        # if the first parent is a ghost:
        if parents[0] in self.get_ghosts():
            empty_parent = [DirState.NULL_PARENT_DETAILS]
            for entry in iter_entries_removable():
                entry[1][1:] = empty_parent
        else:
            for entry in iter_entries_removable():
                del entry[1][2:]

        self._parents = [parents[0]]
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        self._header_state = DirState.IN_MEMORY_MODIFIED

    def _empty_parent_info(self):
        return [DirState.NULL_PARENT_DETAILS] * (len(self._parents) -
                                                 len(self._ghosts))

    def _ensure_block(self, parent_block_index, parent_row_index, dirname):
        """Ensure a block for dirname exists.

        This function exists to let callers which know that there is a
        directory dirname ensure that the block for it exists. This block can
        fail to exist because of demand loading, or because a directory had no
        children. In either case it is not an error. It is however an error to
        call this if there is no parent entry for the directory, and thus the
        function requires the coordinates of such an entry to be provided.

        The root row is special cased and can be indicated with a parent block
        and row of -1.

        :param parent_block_index: The index of the block in which dirname's row
            exists.
        :param parent_row_index: The index in the parent block where the row
            exists.
        :param dirname: The utf8 dirname to ensure there is a block for.
        :return: The index for the block.
        """
        if dirname == '' and parent_row_index == 0 and parent_block_index == 0:
            # This is the signature of the root row, and the
            # contents-of-root row is always index 1
            return 1
        # the basename of the directory must be the end of its full name.
        if not (parent_block_index == -1 and
                parent_row_index == -1 and dirname == ''):
            if not dirname.endswith(
                    self._dirblocks[parent_block_index][1][parent_row_index][0][1]):
                raise AssertionError("bad dirname %r" % dirname)
        block_index, present = self._find_block_index_from_key((dirname, '', ''))
        if not present:
            ## In future, when doing partial parsing, this should load and
            # populate the entire block.
            self._dirblocks.insert(block_index, (dirname, []))
        return block_index

    def _entries_to_current_state(self, new_entries):
        """Load new_entries into self.dirblocks.

        Process new_entries into the current state object, making them the active
        state. The entries are grouped together by directory to form dirblocks.

        :param new_entries: A sorted list of entries. This function does not sort
            to prevent unneeded overhead when callers have a sorted list already.
        """
        if new_entries[0][0][0:2] != ('', ''):
            raise AssertionError(
                "Missing root row %r" % (new_entries[0][0],))
        # The two blocks here are deliberate: the root block and the
        # contents-of-root block.
        self._dirblocks = [('', []), ('', [])]
        current_block = self._dirblocks[0][1]
        current_dirname = ''
        append_entry = current_block.append
        for entry in new_entries:
            if entry[0][0] != current_dirname:
                # new block - different dirname
                current_block = []
                current_dirname = entry[0][0]
                self._dirblocks.append((current_dirname, current_block))
                append_entry = current_block.append
            # append the entry to the current block
            append_entry(entry)
        self._split_root_dirblock_into_contents()

    def _split_root_dirblock_into_contents(self):
        """Split the root dirblocks into root and contents-of-root.

        After parsing by path, we end up with root entries and contents-of-root
        entries in the same block. This loop splits them out again.
        """
        # The above loop leaves the "root block" entries mixed with the
        # "contents-of-root block". But we don't want an if check on
        # all entries, so instead we just fix it up here.
        if self._dirblocks[1] != ('', []):
            raise ValueError("bad dirblock start %r" % (self._dirblocks[1],))
        root_block = []
        contents_of_root_block = []
        for entry in self._dirblocks[0][1]:
            if not entry[0][1]:  # This is a root entry
                root_block.append(entry)
            else:
                contents_of_root_block.append(entry)
        self._dirblocks[0] = ('', root_block)
        self._dirblocks[1] = ('', contents_of_root_block)

    def _entry_to_line(self, entry):
        """Serialize entry to a NULL delimited line ready for _get_output_lines.

        :param entry: An entry_tuple as defined in the module docstring.
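
        An illustrative serialised line for a single-tree entry, with NULs
        shown as '\0' and values invented:
            'dir\0name\0file-id\0f\0<sha1>\0120\0n\0<packed-stat>'
        """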
        entire_entry = list(entry[0])
        for tree_number, tree_data in enumerate(entry[1]):
            # (minikind, fingerprint, size, executable, tree_specific_string)
            entire_entry.extend(tree_data)
            # 3 for the key, 5 for the fields per tree.
            tree_offset = 3 + tree_number * 5
            # minikind
            entire_entry[tree_offset + 0] = tree_data[0]
            # size
            entire_entry[tree_offset + 2] = str(tree_data[2])
            # executable
            entire_entry[tree_offset + 3] = DirState._to_yesno[tree_data[3]]
        return '\0'.join(entire_entry)

    def _fields_per_entry(self):
        """How many null separated fields should be in each entry row.

        Each line now has an extra '\n' field which is not used
        so we just skip over it.

        entry size:
            3 fields for the key
            + number of fields per tree_data (5) * tree count
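            + 1 char for the '\n' after the last tree

        For example, with one present parent, tree_count is 2, so each row
        carries 3 + 5*2 + 1 = 14 fields.
        """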
        tree_count = 1 + self._num_present_parents()
        return 3 + 5 * tree_count + 1

    def _find_block(self, key, add_if_missing=False):
        """Return the block that key should be present in.

        :param key: A dirstate entry key.
        :return: The block tuple.
        """
        block_index, present = self._find_block_index_from_key(key)
        if not present:
            if not add_if_missing:
                # check to see if key is versioned itself - we might want to
                # add it anyway, because dirs with no entries don't get a
                # dirblock at parse time.
                # This is an uncommon branch to take: most dirs have children,
                # and most code works with versioned paths.
                parent_base, parent_name = osutils.split(key[0])
                if not self._get_block_entry_index(parent_base, parent_name, 0)[3]:
                    # some parent path has not been added - it's an error to add
                    # this child.
                    raise errors.NotVersionedError(key[0:2], str(self))
            self._dirblocks.insert(block_index, (key[0], []))
        return self._dirblocks[block_index]

    def _find_block_index_from_key(self, key):
        """Find the dirblock index for a key.

        :return: The block index, True if the block for the key is present.
        """
        if key[0:2] == ('', ''):
            return 0, True
        try:
            if (self._last_block_index is not None and
                self._dirblocks[self._last_block_index][0] == key[0]):
                return self._last_block_index, True
        except IndexError:
            pass
        block_index = bisect_dirblock(self._dirblocks, key[0], 1,
                                      cache=self._split_path_cache)
        # _right returns one-past-where-key is so we have to subtract
        # one to use it. we use _right here because there are two
        # '' blocks - the root, and the contents of root
        # we always have a minimum of 2 in self._dirblocks: root and
        # root-contents, and for '', we get 2 back, so this is
        # simple and correct:
        present = (block_index < len(self._dirblocks) and
            self._dirblocks[block_index][0] == key[0])
        self._last_block_index = block_index
        # Reset the entry index cache to the beginning of the block.
        self._last_entry_index = -1
        return block_index, present

    def _find_entry_index(self, key, block):
        """Find the entry index for a key in a block.

        :return: The entry index, True if the entry for the key is present.
        """
        len_block = len(block)
        try:
            if self._last_entry_index is not None:
                entry_index = self._last_entry_index + 1
                # A hit is when the key is after the last slot, and before or
                # equal to the next slot.
                if ((entry_index > 0 and block[entry_index - 1][0] < key) and
                    key <= block[entry_index][0]):
                    self._last_entry_index = entry_index
                    present = (block[entry_index][0] == key)
                    return entry_index, present
        except IndexError:
            pass

        entry_index = bisect.bisect_left(block, (key, []))
        present = (entry_index < len_block and
            block[entry_index][0] == key)
        self._last_entry_index = entry_index
        return entry_index, present

    @staticmethod
    def from_tree(tree, dir_state_filename):
        """Create a dirstate from a bzr Tree.

        :param tree: The tree which should provide parent information and
            inventory data.
        :return: a DirState object which is currently locked for writing.
            (it was locked by DirState.initialize)
        """
        result = DirState.initialize(dir_state_filename)
        try:
            try:
                parent_ids = tree.get_parent_ids()
                num_parents = len(parent_ids)
                parent_trees = []
                for parent_id in parent_ids:
                    parent_tree = tree.branch.repository.revision_tree(parent_id)
                    parent_trees.append((parent_id, parent_tree))
                    parent_tree.lock_read()
                result.set_parent_trees(parent_trees, [])
                result.set_state_from_inventory(tree.inventory)
            finally:
                for revid, parent_tree in parent_trees:
                    parent_tree.unlock()
        except:
            # The caller won't have a chance to unlock this, so make sure we
            # clean up ourselves.
            result.unlock()
            raise
        return result

    def update_by_delta(self, delta):
        """Apply an inventory delta to the dirstate for tree 0

        :param delta: An inventory delta. See Inventory.apply_delta for
            details.
        """
        self._read_dirblocks_if_needed()
        insertions = {}
        removals = {}
        for old_path, new_path, file_id, inv_entry in sorted(delta, reverse=True):
            if (file_id in insertions) or (file_id in removals):
                raise AssertionError("repeated file id in delta %r" % (file_id,))
            if old_path is not None:
                old_path = old_path.encode('utf-8')
                removals[file_id] = old_path
            if new_path is not None:
                new_path = new_path.encode('utf-8')
                dirname, basename = osutils.split(new_path)
                key = (dirname, basename, file_id)
                minikind = DirState._kind_to_minikind[inv_entry.kind]
                if minikind == 't':
                    fingerprint = inv_entry.reference_revision
                else:
                    fingerprint = ''
                insertions[file_id] = (key, minikind, inv_entry.executable,
                                       fingerprint, new_path)
            if None not in (old_path, new_path):
                for child in self._iter_child_entries(0, old_path):
                    if child[0][2] in insertions or child[0][2] in removals:
                        continue
                    child_dirname = child[0][0]
                    child_basename = child[0][1]
                    minikind = child[1][0][0]
                    fingerprint = child[1][0][4]
                    executable = child[1][0][3]
                    old_child_path = osutils.pathjoin(child[0][0],
                                                      child[0][1])
                    removals[child[0][2]] = old_child_path
                    child_suffix = child_dirname[len(old_path):]
                    new_child_dirname = (new_path + child_suffix)
                    key = (new_child_dirname, child_basename, child[0][2])
                    new_child_path = os.path.join(new_child_dirname,
                                                  child_basename)
                    insertions[child[0][2]] = (key, minikind, executable,
                                               fingerprint, new_child_path)
        self._apply_removals(removals.values())
        self._apply_insertions(insertions.values())

    def _apply_removals(self, removals):
        for path in sorted(removals, reverse=True):
            dirname, basename = osutils.split(path)
            block_i, entry_i, d_present, f_present = \
                self._get_block_entry_index(dirname, basename, 0)
            entry = self._dirblocks[block_i][1][entry_i]
            self._make_absent(entry)

    def _apply_insertions(self, adds):
        for key, minikind, executable, fingerprint, path_utf8 in sorted(adds):
            self.update_minimal(key, minikind, executable, fingerprint,
                                path_utf8=path_utf8)

    def update_basis_by_delta(self, delta, new_revid):
        """Update the parents of this tree after a commit.

        This gives the tree one parent, with revision id new_revid. The
        inventory delta is applied to the current basis tree to generate the
        inventory for the parent new_revid, and all other parent trees are
        discarded.

        Note that an exception during the operation of this method will leave
        the dirstate in a corrupt state where it should not be saved.

        Finally, we expect all changes to be synchronising the basis tree with
        the working tree.

        :param new_revid: The new revision id for the trees parent.
        :param delta: An inventory delta (see apply_inventory_delta) describing
            the changes from the current left most parent revision to new_revid.
        """
        self._read_dirblocks_if_needed()
        self._discard_merge_parents()
        if self._ghosts != []:
            raise NotImplementedError(self.update_basis_by_delta)
        if len(self._parents) == 0:
            # setup a blank tree, the most simple way.
            empty_parent = DirState.NULL_PARENT_DETAILS
            for entry in self._iter_entries():
                entry[1].append(empty_parent)
            self._parents.append(new_revid)
        else:
            self._parents[0] = new_revid

        delta = sorted(delta, reverse=True)
        adds = []
        changes = []
        deletes = []
        # The paths this function accepts are unicode and must be encoded as we
        # go.
        encode = cache_utf8.encode
        inv_to_entry = self._inv_entry_to_details
        # delta is now (deletes, changes), (adds) in reverse lexicographical
        # order.
        # deletes in reverse lexicographic order are safe to process in situ.
        # renames are not, as a rename from any path could go to a path
        # lexicographically lower, so we transform renames into delete, add
        # pairs, expanding them recursively as needed.
        # At the same time, to reduce interface friction we convert the input
        # inventory entries to dirstate.
        root_only = ('', '')
        for old_path, new_path, file_id, inv_entry in delta:
            if old_path is None:
                adds.append((None, encode(new_path), file_id,
                    inv_to_entry(inv_entry), True))
            elif new_path is None:
                deletes.append((encode(old_path), None, file_id, None, True))
            elif (old_path, new_path) != root_only:
                # Because renames must preserve their children we must have
                # processed all relocations and removes before hand. The sort
                # order ensures we've examined the child paths, but we also
                # have to execute the removals, or the split to an add/delete
                # pair will result in the deleted item being reinserted, or
                # renamed items being reinserted twice - and possibly at the
                # wrong place. Splitting into a delete/add pair also simplifies
                # the handling of entries with ('f', ...), ('r' ...) because
                # the target of the 'r' is old_path here, and we add that to
                # deletes, meaning that the add handler does not need to check
                # for 'r' items on every pass.
                self._update_basis_apply_deletes(deletes)
                deletes = []
                new_path_utf8 = encode(new_path)
                # Split into an add/delete pair recursively.
                adds.append((None, new_path_utf8, file_id,
                    inv_to_entry(inv_entry), False))
                # Expunge deletes that we've seen so that deleted/renamed
                # children of a rename directory are handled correctly.
                new_deletes = reversed(list(self._iter_child_entries(1,
                    encode(old_path))))
                # Remove the current contents of the tree at orig_path, and
                # reinsert at the correct new path.
                for entry in new_deletes:
                    if entry[0][0]:
                        source_path = entry[0][0] + '/' + entry[0][1]
                    else:
                        source_path = entry[0][1]
                    if new_path_utf8:
                        target_path = new_path_utf8 + source_path[len(old_path):]
                    else:
                        if old_path == '':
                            raise AssertionError("cannot rename directory to"
                                " itself")
                        target_path = source_path[len(old_path) + 1:]
                    adds.append((None, target_path, entry[0][2], entry[1][1], False))
                    deletes.append(
                        (source_path, target_path, entry[0][2], None, False))
                deletes.append(
                    (encode(old_path), new_path, file_id, None, False))
            else:
                # changes to just the root should not require remove/insertion
                # of everything.
                changes.append((encode(old_path), encode(new_path), file_id,
                    inv_to_entry(inv_entry)))

        # Finish expunging deletes/first half of renames.
        self._update_basis_apply_deletes(deletes)
        # Reinstate second half of renames and new paths.
        self._update_basis_apply_adds(adds)
        # Apply in-situ changes.
        self._update_basis_apply_changes(changes)

        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        self._header_state = DirState.IN_MEMORY_MODIFIED
        self._id_index = None

    def _update_basis_apply_adds(self, adds):
        """Apply a sequence of adds to tree 1 during update_basis_by_delta.

        They may be adds, or renames that have been split into add/delete
        pairs.

        :param adds: A sequence of adds. Each add is a tuple:
            (None, new_path_utf8, file_id, (entry_details), real_add). real_add
            is False when the add is the second half of a remove-and-reinsert
            pair created to handle renames and deletes.
        """
        # Adds are accumulated partly from renames, so can be in any input
        # order - sort them.
        adds.sort()
        # adds is now in lexicographic order, which places all parents before
        # their children, so we can process it linearly.
        absent = 'ar'
        for old_path, new_path, file_id, new_details, real_add in adds:
            # the entry for this file_id must be in tree 0.
            entry = self._get_entry(0, file_id, new_path)
            if entry[0] is None or entry[0][2] != file_id:
                self._changes_aborted = True
                raise errors.InconsistentDelta(new_path, file_id,
                    'working tree does not contain new entry')
            if real_add and entry[1][1][0] not in absent:
                self._changes_aborted = True
                raise errors.InconsistentDelta(new_path, file_id,
                    'The entry was considered to be a genuinely new record,'
                    ' but there was already an old record for it.')
            # We don't need to update the target of an 'r' because the handling
            # of renames turns all 'r' situations into a delete at the original
            # location.
            entry[1][1] = new_details

    def _update_basis_apply_changes(self, changes):
        """Apply a sequence of changes to tree 1 during update_basis_by_delta.

        :param changes: A sequence of changes. Each change is a tuple:
            (path_utf8, path_utf8, file_id, (entry_details))
        """
        absent = 'ar'
        for old_path, new_path, file_id, new_details in changes:
            # the entry for this file_id must be in tree 0.
            entry = self._get_entry(0, file_id, new_path)
            if entry[0] is None or entry[0][2] != file_id:
                self._changes_aborted = True
                raise errors.InconsistentDelta(new_path, file_id,
                    'working tree does not contain new entry')
            if (entry[1][0][0] in absent or
                entry[1][1][0] in absent):
                self._changes_aborted = True
                raise errors.InconsistentDelta(new_path, file_id,
                    'changed considered absent')
            entry[1][1] = new_details

    def _update_basis_apply_deletes(self, deletes):
        """Apply a sequence of deletes to tree 1 during update_basis_by_delta.

        They may be deletes, or renames that have been split into add/delete
        pairs.

        :param deletes: A sequence of deletes. Each delete is a tuple:
            (old_path_utf8, new_path_utf8, file_id, None, real_delete).
            real_delete is True when the desired outcome is an actual deletion
            rather than the rename handling logic temporarily deleting a path
            during the replacement of a parent.
        """
        null = DirState.NULL_PARENT_DETAILS
        for old_path, new_path, file_id, _, real_delete in deletes:
            if real_delete != (new_path is None):
                raise AssertionError("bad delete delta")
            # the entry for this file_id must be in tree 1.
            dirname, basename = osutils.split(old_path)
            block_index, entry_index, dir_present, file_present = \
                self._get_block_entry_index(dirname, basename, 1)
            if not file_present:
                self._changes_aborted = True
                raise errors.InconsistentDelta(old_path, file_id,
                    'basis tree does not contain removed entry')
            entry = self._dirblocks[block_index][1][entry_index]
            if entry[0][2] != file_id:
                self._changes_aborted = True
                raise errors.InconsistentDelta(old_path, file_id,
                    'mismatched file_id in tree 1')
            if real_delete:
                if entry[1][0][0] != 'a':
                    self._changes_aborted = True
                    raise errors.InconsistentDelta(old_path, file_id,
                        'This was marked as a real delete, but the WT state'
                        ' claims that it still exists and is versioned.')
                del self._dirblocks[block_index][1][entry_index]
            else:
                if entry[1][0][0] == 'a':
                    self._changes_aborted = True
                    raise errors.InconsistentDelta(old_path, file_id,
                        'The entry was considered a rename, but the source path'
                        ' is marked as absent.')
                    # For whatever reason, we were asked to rename an entry
                    # that was originally marked as deleted. This could be
                    # because we are renaming the parent directory, and the WT
                    # current state has the file marked as deleted.
                elif entry[1][0][0] == 'r':
                    # implement the rename
                    del self._dirblocks[block_index][1][entry_index]
                else:
                    # it is being resurrected here, so blank it out temporarily.
                    self._dirblocks[block_index][1][entry_index][1][1] = null

    def update_entry(self, entry, abspath, stat_value,
                     _stat_to_minikind=_stat_to_minikind,
                     _pack_stat=pack_stat):
        """Update the entry based on what is actually on disk.

        :param entry: This is the dirblock entry for the file in question.
        :param abspath: The path on disk for this file.
        :param stat_value: (optional) if we already have done a stat on the
            file, re-use it.
        :return: The sha1 hexdigest of the file (40 bytes) or link target of a
            symlink.
        """
        try:
            minikind = _stat_to_minikind[stat_value.st_mode & 0170000]
        except KeyError:
            # Unhandled kind
            return None
        packed_stat = _pack_stat(stat_value)
        (saved_minikind, saved_link_or_sha1, saved_file_size,
         saved_executable, saved_packed_stat) = entry[1][0]

        if (minikind == saved_minikind
            and packed_stat == saved_packed_stat):
            # The stat hasn't changed since we saved, so we can re-use the
            # saved sha hash.
            if minikind == 'd':
                return None

            # size should also be in packed_stat
            if saved_file_size == stat_value.st_size:
                return saved_link_or_sha1

        # If we have gotten this far, that means that we need to actually
        # process this entry.
        link_or_sha1 = None
        if minikind == 'f':
            link_or_sha1 = self._sha1_file(abspath)
            executable = self._is_executable(stat_value.st_mode,
                                             saved_executable)
            if self._cutoff_time is None:
                self._sha_cutoff_time()
            if (stat_value.st_mtime < self._cutoff_time
                and stat_value.st_ctime < self._cutoff_time):
                entry[1][0] = ('f', link_or_sha1, stat_value.st_size,
                               executable, packed_stat)
            else:
                entry[1][0] = ('f', '', stat_value.st_size,
                               executable, DirState.NULLSTAT)
        elif minikind == 'd':
            entry[1][0] = ('d', '', 0, False, packed_stat)
            if saved_minikind != 'd':
                # This changed from something into a directory. Make sure we
                # have a directory block for it. This doesn't happen very
                # often, so this doesn't have to be super fast.
                block_index, entry_index, dir_present, file_present = \
                    self._get_block_entry_index(entry[0][0], entry[0][1], 0)
                self._ensure_block(block_index, entry_index,
                                   osutils.pathjoin(entry[0][0], entry[0][1]))
        elif minikind == 'l':
            link_or_sha1 = self._read_link(abspath, saved_link_or_sha1)
            if self._cutoff_time is None:
                self._sha_cutoff_time()
            if (stat_value.st_mtime < self._cutoff_time
                and stat_value.st_ctime < self._cutoff_time):
                entry[1][0] = ('l', link_or_sha1, stat_value.st_size,
                               False, packed_stat)
            else:
                entry[1][0] = ('l', '', stat_value.st_size,
                               False, DirState.NULLSTAT)
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        return link_or_sha1

    def _sha_cutoff_time(self):
        """Return cutoff time.

        Files modified more recently than this time are at risk of being
        undetectably modified and so can't be cached.
        """
        # Cache the cutoff time as long as we hold a lock.
        # time.time() isn't super expensive (approx 3.38us), but
        # when you call it 50,000 times it adds up.
        # For comparison, os.lstat() costs 7.2us if it is hot.
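        # The 3 second fuzz keeps files whose mtime/ctime falls within a few
        # seconds of 'now' out of the cache: such a file could be modified
        # again without the stat value changing detectably.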
        self._cutoff_time = int(time.time()) - 3
        return self._cutoff_time

    def _lstat(self, abspath, entry):
        """Return the os.lstat value for this path."""
        return os.lstat(abspath)

    def _sha1_file_and_mutter(self, abspath):
        # when -Dhashcache is turned on, this is monkey-patched in to log
        # file hashes
        trace.mutter("dirstate sha1 " + abspath)
        return osutils.sha_file_by_name(abspath)

    def _is_executable(self, mode, old_executable):
        """Is this file executable?"""
        return bool(S_IEXEC & mode)

    def _is_executable_win32(self, mode, old_executable):
        """On win32 the executable bit is stored in the dirstate."""
        return old_executable

    if sys.platform == 'win32':
        _is_executable = _is_executable_win32

    def _read_link(self, abspath, old_link):
        """Read the target of a symlink"""
        # TODO: jam 200700301 On Win32, this could just return the value
        #       already in memory. However, this really needs to be done at a
        #       higher level, because there either won't be anything on disk,
        #       or the thing on disk will be a file.
        return os.readlink(abspath)

    def get_ghosts(self):
        """Return a list of the parent tree revision ids that are ghosts."""
        self._read_header_if_needed()
        return self._ghosts
1578
    def get_lines(self):
        """Serialise the entire dirstate to a sequence of lines."""
        if (self._header_state == DirState.IN_MEMORY_UNMODIFIED and
            self._dirblock_state == DirState.IN_MEMORY_UNMODIFIED):
            # read what's on disk.
            self._state_file.seek(0)
            return self._state_file.readlines()
        lines = []
        lines.append(self._get_parents_line(self.get_parent_ids()))
        lines.append(self._get_ghosts_line(self._ghosts))
        # append the root line which is special cased
        lines.extend(map(self._entry_to_line, self._iter_entries()))
        return self._get_output_lines(lines)
    def _get_ghosts_line(self, ghost_ids):
        """Create a line for the state file for ghost information."""
        return '\0'.join([str(len(ghost_ids))] + ghost_ids)

    def _get_parents_line(self, parent_ids):
        """Create a line for the state file for parents information."""
        return '\0'.join([str(len(parent_ids))] + parent_ids)
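    # Illustrative sketch (not part of the original source): both helpers
    # serialise a count followed by the ids themselves, NUL-separated, so
    # for two hypothetical parent ids:
    #
    #   ids = ['rev-1', 'rev-2']
    #   '\0'.join([str(len(ids))] + ids) == '2\x00rev-1\x00rev-2'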
    def _get_fields_to_entry(self):
        """Get a function which converts entry fields into an entry record.

        This handles size and executable, as well as parent records.

        :return: A function which takes a list of fields, and returns an
            appropriate record for storing in memory.
        """
        # This is intentionally unrolled for performance
        num_present_parents = self._num_present_parents()
        if num_present_parents == 0:
            def fields_to_entry_0_parents(fields, _int=int):
                path_name_file_id_key = (fields[0], fields[1], fields[2])
                return (path_name_file_id_key, [
                    ( # Current tree
                    fields[3],                # minikind
                    fields[4],                # fingerprint
                    _int(fields[5]),          # size
                    fields[6] == 'y',         # executable
                    fields[7],                # packed_stat or revision_id
                    )])
            return fields_to_entry_0_parents
        elif num_present_parents == 1:
            def fields_to_entry_1_parent(fields, _int=int):
                path_name_file_id_key = (fields[0], fields[1], fields[2])
                return (path_name_file_id_key, [
                    ( # Current tree
                    fields[3],                # minikind
                    fields[4],                # fingerprint
                    _int(fields[5]),          # size
                    fields[6] == 'y',         # executable
                    fields[7],                # packed_stat or revision_id
                    ),
                    ( # Parent 1
                    fields[8],                # minikind
                    fields[9],                # fingerprint
                    _int(fields[10]),         # size
                    fields[11] == 'y',        # executable
                    fields[12],               # packed_stat or revision_id
                    ),
                    ])
            return fields_to_entry_1_parent
        elif num_present_parents == 2:
            def fields_to_entry_2_parents(fields, _int=int):
                path_name_file_id_key = (fields[0], fields[1], fields[2])
                return (path_name_file_id_key, [
                    ( # Current tree
                    fields[3],                # minikind
                    fields[4],                # fingerprint
                    _int(fields[5]),          # size
                    fields[6] == 'y',         # executable
                    fields[7],                # packed_stat or revision_id
                    ),
                    ( # Parent 1
                    fields[8],                # minikind
                    fields[9],                # fingerprint
                    _int(fields[10]),         # size
                    fields[11] == 'y',        # executable
                    fields[12],               # packed_stat or revision_id
                    ),
                    ( # Parent 2
                    fields[13],               # minikind
                    fields[14],               # fingerprint
                    _int(fields[15]),         # size
                    fields[16] == 'y',        # executable
                    fields[17],               # packed_stat or revision_id
                    ),
                    ])
            return fields_to_entry_2_parents
        else:
            def fields_to_entry_n_parents(fields, _int=int):
                path_name_file_id_key = (fields[0], fields[1], fields[2])
                trees = [(fields[cur],                # minikind
                          fields[cur+1],              # fingerprint
                          _int(fields[cur+2]),        # size
                          fields[cur+3] == 'y',       # executable
                          fields[cur+4],              # stat or revision_id
                         ) for cur in xrange(3, len(fields)-1, 5)]
                return path_name_file_id_key, trees
            return fields_to_entry_n_parents
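    # Illustrative sketch (hypothetical values, not from the source): with
    # no parents, a field list such as
    #
    #   fields = ['dir', 'file.txt', 'file-id', 'f', 'sha1...', '12', 'n',
    #             'packed-stat...']
    #
    # is converted by the returned function into
    #
    #   (('dir', 'file.txt', 'file-id'),
    #    [('f', 'sha1...', 12, False, 'packed-stat...')])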
    def get_parent_ids(self):
        """Return a list of the parent tree ids for the directory state."""
        self._read_header_if_needed()
        return list(self._parents)
    def _get_block_entry_index(self, dirname, basename, tree_index):
        """Get the coordinates for a path in the state structure.

        :param dirname: The utf8 dirname to lookup.
        :param basename: The utf8 basename to lookup.
        :param tree_index: The index of the tree for which this lookup should
            be attempted.
        :return: A tuple describing where the path is located, or should be
            inserted. The tuple contains four fields: the block index, the row
            index, the directory is present (boolean), the entire path is
            present (boolean). There is no guarantee that either
            coordinate is currently reachable unless the found field for it is
            True. For instance, a directory not present in the searched tree
            may be returned with a value one greater than the current highest
            block offset. The directory present field will always be True when
            the path present field is True. The directory present field does
            NOT indicate that the directory is present in the searched tree,
            rather it indicates that there are at least some files in some
            tree present there.
        """
        self._read_dirblocks_if_needed()
        key = dirname, basename, ''
        block_index, present = self._find_block_index_from_key(key)
        if not present:
            # no such directory - return the dir index and 0 for the row.
            return block_index, 0, False, False
        block = self._dirblocks[block_index][1] # access the entries only
        entry_index, present = self._find_entry_index(key, block)
        # linear search through entries at this path to find the one
        # requested.
        while entry_index < len(block) and block[entry_index][0][1] == basename:
            if block[entry_index][1][tree_index][0] not in 'ar':
                # neither absent nor relocated
                return block_index, entry_index, True, True
            entry_index += 1
        return block_index, entry_index, True, False
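    # Illustrative example (hypothetical values): looking up an existing
    # 'subdir/file' in tree 0 might return
    #
    #   block_index, entry_index, dir_present, file_present = (1, 0, True, True)
    #
    # while a missing path returns False flags together with the coordinates
    # at which it would be inserted.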
    def _get_entry(self, tree_index, fileid_utf8=None, path_utf8=None):
        """Get the dirstate entry for path in tree tree_index.

        If either file_id or path is supplied, it is used as the key to lookup.
        If both are supplied, the fastest lookup is used, and an error is
        raised if they do not both point at the same row.

        :param tree_index: The index of the tree we wish to locate this path
            in. If the path is present in that tree, the entry containing its
            details is returned, otherwise (None, None) is returned.
            0 is the working tree, higher indexes are successive parent
            trees.
        :param fileid_utf8: A utf8 file_id to look up.
        :param path_utf8: A utf8 path to be looked up.
        :return: The dirstate entry tuple for path, or (None, None)
        """
        self._read_dirblocks_if_needed()
        if path_utf8 is not None:
            if not isinstance(path_utf8, str):
                raise AssertionError('path_utf8 is not a str: %s %s'
                    % (type(path_utf8), path_utf8))
            # path lookups are faster
            dirname, basename = osutils.split(path_utf8)
            block_index, entry_index, dir_present, file_present = \
                self._get_block_entry_index(dirname, basename, tree_index)
            if not file_present:
                return None, None
            entry = self._dirblocks[block_index][1][entry_index]
            if not (entry[0][2] and entry[1][tree_index][0] not in ('a', 'r')):
                raise AssertionError('unversioned entry?')
            if fileid_utf8:
                if entry[0][2] != fileid_utf8:
                    self._changes_aborted = True
                    raise errors.BzrError('integrity error ? : mismatching'
                        ' tree_index, file_id and path')
            return entry
        else:
            possible_keys = self._get_id_index().get(fileid_utf8, None)
            if not possible_keys:
                return None, None
            for key in possible_keys:
                block_index, present = \
                    self._find_block_index_from_key(key)
                # strange, probably indicates an out of date
                # id index - for now, allow this.
                if not present:
                    continue
                # WARNING: Do not change this code to use _get_block_entry_index
                # as that function is not suitable: it does not use the key
                # to lookup, and thus the wrong coordinates are returned.
                block = self._dirblocks[block_index][1]
                entry_index, present = self._find_entry_index(key, block)
                if present:
                    entry = self._dirblocks[block_index][1][entry_index]
                    if entry[1][tree_index][0] in 'fdlt':
                        # this is the result we are looking for: the
                        # real home of this file_id in this tree.
                        return entry
                    if entry[1][tree_index][0] == 'a':
                        # there is no home for this entry in this tree
                        return None, None
                    if entry[1][tree_index][0] != 'r':
                        raise AssertionError(
                            "entry %r has invalid minikind %r for tree %r"
                            % (entry,
                               entry[1][tree_index][0],
                               tree_index))
                    real_path = entry[1][tree_index][1]
                    return self._get_entry(tree_index, fileid_utf8=fileid_utf8,
                        path_utf8=real_path)
            return None, None
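    # Illustrative usage (hypothetical values): both lookup styles land on
    # the same row, so
    #
    #   state._get_entry(0, path_utf8='subdir/file')
    #   state._get_entry(0, fileid_utf8='file-id')
    #
    # return the same (key, tree_details) entry tuple, or (None, None) when
    # the path/id is not present in tree 0.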
    @classmethod
    def initialize(cls, path):
        """Create a new dirstate on path.

        The new dirstate will be an empty tree - that is it has no parents,
        and only a root node - which has id ROOT_ID.

        :param path: The name of the file for the dirstate.
        :return: A write-locked DirState object.
        """
        # This constructs a new DirState object on a path, sets the _state_file
        # to a new empty file for that path. It then calls _set_data() with our
        # stock empty dirstate information - a root with ROOT_ID, no children,
        # and no parents. Finally it calls save() to ensure that this data will
        # be stored on disk.
        result = cls(path)
        # root dir and root dir contents with no children.
        empty_tree_dirblocks = [('', []), ('', [])]
        # a new root directory, with a NULLSTAT.
        empty_tree_dirblocks[0][1].append(
            (('', '', inventory.ROOT_ID), [
                ('d', '', 0, False, DirState.NULLSTAT),
            ]))
        result.lock_write()
        try:
            result._set_data([], empty_tree_dirblocks)
            result.save()
        except:
            result.unlock()
            raise
        return result
    def _inv_entry_to_details(self, inv_entry):
        """Convert an inventory entry (from a revision tree) to state details.

        :param inv_entry: An inventory entry whose sha1 and link targets can be
            relied upon, and which has a revision set.
        :return: A details tuple - the details for a single tree at a path +
            id.
        """
        kind = inv_entry.kind
        minikind = DirState._kind_to_minikind[kind]
        tree_data = inv_entry.revision
        if kind == 'directory':
            fingerprint = ''
            size = 0
            executable = False
        elif kind == 'symlink':
            fingerprint = inv_entry.symlink_target or ''
            size = 0
            executable = False
        elif kind == 'file':
            fingerprint = inv_entry.text_sha1 or ''
            size = inv_entry.text_size or 0
            executable = inv_entry.executable
        elif kind == 'tree-reference':
            fingerprint = inv_entry.reference_revision or ''
            size = 0
            executable = False
        else:
            raise Exception("can't pack %s" % inv_entry)
        return (minikind, fingerprint, size, executable, tree_data)
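    # Illustrative example (hypothetical values): a 12-byte, non-executable
    # file committed in revision 'rev-1' packs to
    #
    #   ('f', 'sha1-of-text', 12, False, 'rev-1')
    #
    # i.e. minikind, fingerprint, size, executable, and the revision id in
    # the tree_data slot.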
    def _iter_child_entries(self, tree_index, path_utf8):
        """Iterate over all the entries that are children of path_utf8.

        This only returns entries that are present (not in 'a', 'r') in
        tree_index. tree_index data is not refreshed, so if tree 0 is used,
        results may differ from that obtained if paths were statted to
        determine which ones were directories.

        Asking for the children of a non-directory will return an empty
        iterator.
        """
        pending_dirs = []
        next_pending_dirs = [path_utf8]
        absent = 'ar'
        while next_pending_dirs:
            pending_dirs = next_pending_dirs
            next_pending_dirs = []
            for path in pending_dirs:
                block_index, present = self._find_block_index_from_key(
                    (path, '', ''))
                if block_index == 0:
                    block_index = 1
                    if len(self._dirblocks) == 1:
                        # asked for the children of the root with no other
                        # contents.
                        return
                if not present:
                    # children of a non-directory asked for.
                    continue
                block = self._dirblocks[block_index]
                for entry in block[1]:
                    kind = entry[1][tree_index][0]
                    if kind not in absent:
                        yield entry
                    if kind == 'd':
                        if entry[0][0]:
                            path = entry[0][0] + '/' + entry[0][1]
                        else:
                            path = entry[0][1]
                        next_pending_dirs.append(path)
    def _iter_entries(self):
        """Iterate over all the entries in the dirstate.

        Each yielded item is an entry in the standard format described in the
        docstring of bzrlib.dirstate.
        """
        self._read_dirblocks_if_needed()
        for directory in self._dirblocks:
            for entry in directory[1]:
                yield entry
    def _get_id_index(self):
        """Get an id index of self._dirblocks."""
        if self._id_index is None:
            id_index = {}
            for key, tree_details in self._iter_entries():
                id_index.setdefault(key[2], set()).add(key)
            self._id_index = id_index
        return self._id_index
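    # Illustrative shape (hypothetical ids): the id index maps each file id
    # to the set of entry keys that mention it, e.g.
    #
    #   {'file-id': set([('dir', 'name', 'file-id')])}
    #
    # so relocation handling can find every row for an id without scanning
    # all the dirblocks.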
    def _get_output_lines(self, lines):
        """Format lines for final output.

        :param lines: A sequence of lines containing the parents list and the
            path lines.
        """
        output_lines = [DirState.HEADER_FORMAT_3]
        lines.append('') # a final newline
        inventory_text = '\0\n\0'.join(lines)
        output_lines.append('crc32: %s\n' % (zlib.crc32(inventory_text),))
        # -3, 1 for num parents, 1 for ghosts, 1 for final newline
        num_entries = len(lines)-3
        output_lines.append('num_entries: %s\n' % (num_entries,))
        output_lines.append(inventory_text)
        return output_lines
    def _make_deleted_row(self, fileid_utf8, parents):
        """Return a deleted row for fileid_utf8."""
        return ('/', 'RECYCLED.BIN', 'file', fileid_utf8, 0, DirState.NULLSTAT,
            ''), parents

    def _num_present_parents(self):
        """The number of parent entries in each record row."""
        return len(self._parents) - len(self._ghosts)
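    # Illustrative arithmetic (from the code above): ghosts are counted in
    # self._parents but carry no tree data, so with three recorded parents
    # of which one is a ghost, each row carries 3 - 1 == 2 parent detail
    # tuples in addition to the working tree's.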
"""Construct a DirState on the file at path path.
1948
:return: An unlocked DirState object, associated with the given path.
1950
result = DirState(path)
1953
    def _read_dirblocks_if_needed(self):
        """Read in all the dirblocks from the file if they are not in memory.

        This populates self._dirblocks, and sets self._dirblock_state to
        IN_MEMORY_UNMODIFIED. It is not currently ready for incremental block
        loading.
        """
        self._read_header_if_needed()
        if self._dirblock_state == DirState.NOT_IN_MEMORY:
            _read_dirblocks(self)
    def _read_header(self):
        """This reads in the metadata header, and the parent ids.

        After reading in, the file should be positioned at the null
        just before the start of the first record in the file.

        :return: (expected crc checksum, number of entries, parent list)
        """
        self._read_prelude()
        parent_line = self._state_file.readline()
        info = parent_line.split('\0')
        num_parents = int(info[0])
        self._parents = info[1:-1]
        ghost_line = self._state_file.readline()
        info = ghost_line.split('\0')
        num_ghosts = int(info[1])
        self._ghosts = info[2:-1]
        self._header_state = DirState.IN_MEMORY_UNMODIFIED
        self._end_of_header = self._state_file.tell()
    def _read_header_if_needed(self):
        """Read the header of the dirstate file if needed."""
        # inline this as it will be called a lot
        if not self._lock_token:
            raise errors.ObjectNotLocked(self)
        if self._header_state == DirState.NOT_IN_MEMORY:
            self._read_header()
    def _read_prelude(self):
        """Read in the prelude header of the dirstate file.

        This only reads in the stuff that is not connected to the crc
        checksum. The position will be correct to read in the rest of
        the file and check the checksum after this point.
        The next entry in the file should be the number of parents,
        and their ids, followed by a newline.
        """
        header = self._state_file.readline()
        if header != DirState.HEADER_FORMAT_3:
            raise errors.BzrError(
                'invalid header line: %r' % (header,))
        crc_line = self._state_file.readline()
        if not crc_line.startswith('crc32: '):
            raise errors.BzrError('missing crc32 checksum: %r' % crc_line)
        self.crc_expected = int(crc_line[len('crc32: '):-1])
        num_entries_line = self._state_file.readline()
        if not num_entries_line.startswith('num_entries: '):
            raise errors.BzrError('missing num_entries line')
        self._num_entries = int(num_entries_line[len('num_entries: '):-1])
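    # Illustrative prelude (hypothetical values), matching the checks above:
    #
    #   #bazaar dirstate flat format 3\n
    #   crc32: 1234567\n
    #   num_entries: 42\n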
    def sha1_from_stat(self, path, stat_result, _pack_stat=pack_stat):
        """Find a sha1 given a stat lookup."""
        return self._get_packed_stat_index().get(_pack_stat(stat_result), None)

    def _get_packed_stat_index(self):
        """Get a packed_stat index of self._dirblocks."""
        if self._packed_stat_index is None:
            index = {}
            for key, tree_details in self._iter_entries():
                if tree_details[0][0] == 'f':
                    index[tree_details[0][4]] = tree_details[0][1]
            self._packed_stat_index = index
        return self._packed_stat_index
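    # Illustrative sketch (hypothetical call, not from the source): the
    # packed stat index maps packed_stat -> sha1 for file rows, so a caller
    # holding a fresh os.lstat() result can ask
    #
    #   sha1 = state.sha1_from_stat('foo', os.lstat('foo'))
    #
    # and gets None when the stat no longer matches any cached row.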
"""Save any pending changes created during this session.
2031
We reuse the existing file, because that prevents race conditions with
2032
file creation, and use oslocks on it to prevent concurrent modification
2033
and reads - because dirstate's incremental data aggregation is not
2034
compatible with reading a modified file, and replacing a file in use by
2035
another process is impossible on Windows.
2037
A dirstate in read only mode should be smart enough though to validate
2038
that the file has not changed, and otherwise discard its cache and
2039
start over, to allow for fine grained read lock duration, so 'status'
2040
wont block 'commit' - for example.
2042
if self._changes_aborted:
2043
# Should this be a warning? For now, I'm expecting that places that
2044
# mark it inconsistent will warn, making a warning here redundant.
2045
trace.mutter('Not saving DirState because '
2046
'_changes_aborted is set.')
2048
if (self._header_state == DirState.IN_MEMORY_MODIFIED or
2049
self._dirblock_state == DirState.IN_MEMORY_MODIFIED):
2051
grabbed_write_lock = False
2052
if self._lock_state != 'w':
2053
grabbed_write_lock, new_lock = self._lock_token.temporary_write_lock()
2054
# Switch over to the new lock, as the old one may be closed.
2055
# TODO: jam 20070315 We should validate the disk file has
2056
# not changed contents. Since temporary_write_lock may
2057
# not be an atomic operation.
2058
self._lock_token = new_lock
2059
self._state_file = new_lock.f
2060
if not grabbed_write_lock:
2061
# We couldn't grab a write lock, so we switch back to a read one
2064
self._state_file.seek(0)
2065
self._state_file.writelines(self.get_lines())
2066
self._state_file.truncate()
2067
self._state_file.flush()
2068
self._header_state = DirState.IN_MEMORY_UNMODIFIED
2069
self._dirblock_state = DirState.IN_MEMORY_UNMODIFIED
2071
if grabbed_write_lock:
2072
self._lock_token = self._lock_token.restore_read_lock()
2073
self._state_file = self._lock_token.f
2074
# TODO: jam 20070315 We should validate the disk file has
2075
# not changed contents. Since restore_read_lock may
2076
# not be an atomic operation.
2078
    def _set_data(self, parent_ids, dirblocks):
        """Set the full dirstate data in memory.

        This is an internal function used to completely replace the objects
        in memory state. It puts the dirstate into state 'full-dirty'.

        :param parent_ids: A list of parent tree revision ids.
        :param dirblocks: A list containing one tuple for each directory in the
            tree. Each tuple contains the directory path and a list of entries
            found in that directory.
        """
        # our memory copy is now authoritative.
        self._dirblocks = dirblocks
        self._header_state = DirState.IN_MEMORY_MODIFIED
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        self._parents = list(parent_ids)
        self._id_index = None
        self._packed_stat_index = None
    def set_path_id(self, path, new_id):
        """Change the id of path to new_id in the current working tree.

        :param path: The path inside the tree to set - '' is the root, 'foo'
            is the path foo in the root.
        :param new_id: The new id to assign to the path. This must be a utf8
            file id (not unicode, and not None).
        """
        self._read_dirblocks_if_needed()
        if len(path):
            # TODO: logic not written
            raise NotImplementedError(self.set_path_id)
        # TODO: check new id is unique
        entry = self._get_entry(0, path_utf8=path)
        if entry[0][2] == new_id:
            # Nothing to change.
            return
        # mark the old path absent, and insert a new root path
        self._make_absent(entry)
        self.update_minimal(('', '', new_id), 'd',
            path_utf8='', packed_stat=entry[1][0][4])
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        if self._id_index is not None:
            self._id_index.setdefault(new_id, set()).add(entry[0])
    def set_parent_trees(self, trees, ghosts):
        """Set the parent trees for the dirstate.

        :param trees: A list of revision_id, tree tuples. tree must be provided
            even if the revision_id refers to a ghost: supply an empty tree in
            that case.
        :param ghosts: A list of the revision_ids that are ghosts at the time
            of setting.
        """
        # TODO: generate a list of parent indexes to preserve to save
        # processing specific parent trees. In the common case one tree will
        # be preserved - the left most parent.
        # TODO: if the parent tree is a dirstate, we might want to walk them
        # all by path in parallel for 'optimal' common-case performance.
        # generate new root row.
        self._read_dirblocks_if_needed()
        # TODO future sketch: Examine the existing parents to generate a change
        # map and then walk the new parent trees only, mapping them into the
        # dirstate. Walk the dirstate at the same time to remove unreferenced
        # entries.
        #
        # sketch: loop over all entries in the dirstate, cherry picking
        # entries from the parent trees, if they are not ghost trees.
        # after we finish walking the dirstate, all entries not in the dirstate
        # are deletes, so we want to append them to the end as per the design
        # discussions. So do a set difference on ids with the parents to
        # get deletes, and add them to the end.
        # During the update process we need to answer the following questions:
        # - find other keys containing a fileid in order to create cross-path
        #   links. We don't trivially use the inventory from other trees
        #   because this leads to either double touching, or to accessing
        #   missing keys,
        # - find other keys containing a path
        # We accumulate each entry via this dictionary, including the root
        by_path = {}
        id_index = {}
        # we could do parallel iterators, but because file id data may be
        # scattered throughout, we don't save on index overhead: we have to look
        # at everything anyway. We can probably save cycles by reusing parent
        # data and doing an incremental update when adding an additional
        # parent, but for now the common cases are adding a new parent (merge),
        # and replacing completely (commit), and commit is more common: so
        # optimise merge later.
        #
        # ---- start generation of full tree mapping data
        # what trees should we use?
        parent_trees = [tree for rev_id, tree in trees if rev_id not in ghosts]
        # how many trees do we end up with
        parent_count = len(parent_trees)

        # one: the current tree
        for entry in self._iter_entries():
            # skip entries not in the current tree
            if entry[1][0][0] in 'ar': # absent, relocated
                continue
            by_path[entry[0]] = [entry[1][0]] + \
                [DirState.NULL_PARENT_DETAILS] * parent_count
            id_index[entry[0][2]] = set([entry[0]])

        # now the parent trees:
        for tree_index, tree in enumerate(parent_trees):
            # the index is off by one, adjust it.
            tree_index = tree_index + 1
            # when we add new locations for a fileid we need these ranges for
            # any fileid in this tree as we set the by_path[id] to:
            # already_processed_tree_details + new_details + new_location_suffix
            # the suffix is from tree_index+1:parent_count+1.
            new_location_suffix = [DirState.NULL_PARENT_DETAILS] * (parent_count - tree_index)
            # now stitch in all the entries from this tree
            for path, entry in tree.inventory.iter_entries_by_dir():
                # here we process each tree's details for each item in the tree.
                # we first update any existing entries for the id at other paths,
                # then we either create or update the entry for the id at the
                # right path, and finally we add (if needed) a mapping from
                # file_id to this path. We do it in this order to allow us to
                # avoid checking all known paths for the id when generating a
                # new entry at this path: by adding the id->path mapping last,
                # all the mappings are valid and have correct relocation
                # records where needed.
                file_id = entry.file_id
                path_utf8 = path.encode('utf8')
                dirname, basename = osutils.split(path_utf8)
                new_entry_key = (dirname, basename, file_id)
                # tree index consistency: All other paths for this id in this tree
                # index must point to the correct path.
                for entry_key in id_index.setdefault(file_id, set()):
                    # TODO:PROFILING: It might be faster to just update
                    # rather than checking if we need to, and then overwrite
                    # the one we are located at.
                    if entry_key != new_entry_key:
                        # this file id is at a different path in one of the
                        # other trees, so put absent pointers there
                        # This is the vertical axis in the matrix, all pointing
                        # to the real path.
                        by_path[entry_key][tree_index] = ('r', path_utf8, 0, False, '')
                # by path consistency: Insert into an existing path record (trivial), or
                # add a new one with relocation pointers for the other tree indexes.
                if new_entry_key in id_index[file_id]:
                    # there is already an entry where this data belongs, just insert it.
                    by_path[new_entry_key][tree_index] = \
                        self._inv_entry_to_details(entry)
                else:
                    # add relocated entries to the horizontal axis - this row
                    # mapping from path,id. We need to look up the correct path
                    # for the indexes from 0 to tree_index -1
                    new_details = []
                    for lookup_index in xrange(tree_index):
                        # boundary case: this is the first occurrence of file_id
                        # so there are no id_indexes, possibly take this out of
                        # the loop?
                        if not len(id_index[file_id]):
                            new_details.append(DirState.NULL_PARENT_DETAILS)
                        else:
                            # grab any one entry, use it to find the right path.
                            # TODO: optimise this to reduce memory use in highly
                            # fragmented situations by reusing the relocation
                            # records.
                            a_key = iter(id_index[file_id]).next()
                            if by_path[a_key][lookup_index][0] in ('r', 'a'):
                                # it's a pointer or missing statement, use it as is.
                                new_details.append(by_path[a_key][lookup_index])
                            else:
                                # we have the right key, make a pointer to it.
                                real_path = ('/'.join(a_key[0:2])).strip('/')
                                new_details.append(('r', real_path, 0, False, ''))
                    new_details.append(self._inv_entry_to_details(entry))
                    new_details.extend(new_location_suffix)
                    by_path[new_entry_key] = new_details
                    id_index[file_id].add(new_entry_key)
        # --- end generation of full tree mappings

        # sort and output all the entries
        new_entries = self._sort_entries(by_path.items())
        self._entries_to_current_state(new_entries)
        self._parents = [rev_id for rev_id, tree in trees]
        self._ghosts = list(ghosts)
        self._header_state = DirState.IN_MEMORY_MODIFIED
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        self._id_index = id_index
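    # Illustrative relocation record (hypothetical values): if 'file-id'
    # lives at 'a/b' in the tree being stitched in, every other key for
    # that id in this tree's column gets
    #
    #   ('r', 'a/b', 0, False, '')
    #
    # pointing readers at the real row rather than duplicating its details.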
    def _sort_entries(self, entry_list):
        """Given a list of entries, sort them into the right order.

        This is done when constructing a new dirstate from trees - normally we
        try to keep everything in sorted blocks all the time, but sometimes
        it's easier to sort after the fact.
        """
        def _key(entry):
            # sort by: directory parts, file name, file id
            return entry[0][0].split('/'), entry[0][1], entry[0][2]
        return sorted(entry_list, key=_key)
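    # Illustrative note (not from the source): splitting the dirname means
    # 'a/b' sorts before 'a-b', because ['a', 'b'] < ['a-b'], even though
    # the plain strings compare the other way ('-' < '/' in ASCII). This
    # keeps a directory's children grouped immediately after their parent.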
    def set_state_from_inventory(self, new_inv):
        """Set new_inv as the current state.

        This API is called by tree transform, and will usually occur with
        existing parent trees.

        :param new_inv: The inventory object to set current state from.
        """
        if 'evil' in debug.debug_flags:
            trace.mutter_callsite(1,
                "set_state_from_inventory called; please mutate the tree instead")
        self._read_dirblocks_if_needed()
        # Two iterators: current data and new data, both in dirblock order.
        # We zip them together, which tells about entries that are new in the
        # inventory, or removed in the inventory, or present in both and
        # possibly changed.
        #
        # You might think we could just synthesize a new dirstate directly
        # since we're processing it in the right order. However, we need to
        # also consider there may be any number of parent trees and relocation
        # pointers, and we don't want to duplicate that here.
        new_iterator = new_inv.iter_entries_by_dir()
        # we will be modifying the dirstate, so we need a stable iterator. In
        # future we might write one, for now we just clone the state into a
        # list - which is a shallow copy.
        old_iterator = iter(list(self._iter_entries()))
        # both must have roots so this is safe:
        current_new = new_iterator.next()
        current_old = old_iterator.next()
        def advance(iterator):
            try:
                return iterator.next()
            except StopIteration:
                return None
        while current_new or current_old:
            # skip entries in old that are not really there
            if current_old and current_old[1][0][0] in 'ar':
                # relocated or absent
                current_old = advance(old_iterator)
                continue
            if current_new:
                # convert new into dirblock style
                new_path_utf8 = current_new[0].encode('utf8')
                new_dirname, new_basename = osutils.split(new_path_utf8)
                new_id = current_new[1].file_id
                new_entry_key = (new_dirname, new_basename, new_id)
                current_new_minikind = \
                    DirState._kind_to_minikind[current_new[1].kind]
                if current_new_minikind == 't':
                    fingerprint = current_new[1].reference_revision or ''
                else:
                    # We normally only insert or remove records, or update
                    # them when it has significantly changed. Then we want to
                    # erase its fingerprint. Unaffected records should
                    # normally not be updated at all.
                    fingerprint = ''
            else:
                # for safety disable variables
                new_path_utf8 = new_dirname = new_basename = new_id = \
                    new_entry_key = None
            # 5 cases, we don't have a value that is strictly greater than everything, so
            # we make both end conditions explicit
            if not current_old:
                # old is finished: insert current_new into the state.
                self.update_minimal(new_entry_key, current_new_minikind,
                    executable=current_new[1].executable,
                    path_utf8=new_path_utf8, fingerprint=fingerprint)
                current_new = advance(new_iterator)
            elif not current_new:
                # new is finished
                self._make_absent(current_old)
                current_old = advance(old_iterator)
            elif new_entry_key == current_old[0]:
                # same - common case
                # We're looking at the same path and id in both the dirstate
                # and inventory, so just need to update the fields in the
                # dirstate from the one in the inventory.
                # TODO: update the record if anything significant has changed.
                # the minimal required trigger is if the execute bit or cached
                # kind has changed.
                if (current_old[1][0][3] != current_new[1].executable or
                    current_old[1][0][0] != current_new_minikind):
                    self.update_minimal(current_old[0], current_new_minikind,
                        executable=current_new[1].executable,
                        path_utf8=new_path_utf8, fingerprint=fingerprint)
                # both sides are dealt with, move on
                current_old = advance(old_iterator)
                current_new = advance(new_iterator)
            elif (cmp_by_dirs(new_dirname, current_old[0][0]) < 0
                  or (new_dirname == current_old[0][0]
                      and new_entry_key[1:] < current_old[0][1:])):
                # new comes before:
                # add an entry for this and advance new
                self.update_minimal(new_entry_key, current_new_minikind,
                    executable=current_new[1].executable,
                    path_utf8=new_path_utf8, fingerprint=fingerprint)
                current_new = advance(new_iterator)
            else:
                # we've advanced past the place where the old key would be,
                # without seeing it in the new list. so it must be gone.
                self._make_absent(current_old)
                current_old = advance(old_iterator)
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        self._id_index = None
        self._packed_stat_index = None
    def _make_absent(self, current_old):
        """Mark current_old - an entry - as absent for tree 0.

        :return: True if this was the last details entry for the entry key:
            that is, if the underlying block has had the entry removed, thus
            shrinking in length.
        """
        # build up paths that this id will be left at after the change is made,
        # so we can update their cross references in tree 0
        all_remaining_keys = set()
        # Don't check the working tree, because it's going.
        for details in current_old[1][1:]:
            if details[0] not in 'ar': # absent, relocated
                all_remaining_keys.add(current_old[0])
            elif details[0] == 'r': # relocated
                # record the key for the real path.
                all_remaining_keys.add(tuple(osutils.split(details[1])) + (current_old[0][2],))
            # absent rows are not present at any path.
        last_reference = current_old[0] not in all_remaining_keys
        if last_reference:
            # the current row consists entirely of the current item (being marked
            # absent), and relocated or absent entries for the other trees:
            # Remove it, it's meaningless.
            block = self._find_block(current_old[0])
            entry_index, present = self._find_entry_index(current_old[0], block[1])
            if not present:
                raise AssertionError('could not find entry for %s' % (current_old,))
            block[1].pop(entry_index)
            # if we have an id_index in use, remove this key from it for this id.
            if self._id_index is not None:
                self._id_index[current_old[0][2]].remove(current_old[0])
        # update all remaining keys for this id to record it as absent. The
        # existing details may either be the record we are marking as deleted
        # (if there were other trees with the id present at this path), or may
        # be relocations.
        for update_key in all_remaining_keys:
            update_block_index, present = \
                self._find_block_index_from_key(update_key)
            if not present:
                raise AssertionError('could not find block for %s' % (update_key,))
            update_entry_index, present = \
                self._find_entry_index(update_key, self._dirblocks[update_block_index][1])
            if not present:
                raise AssertionError('could not find entry for %s' % (update_key,))
            update_tree_details = self._dirblocks[update_block_index][1][update_entry_index][1]
            # it must not be absent at the moment
            if update_tree_details[0][0] == 'a': # absent
                raise AssertionError('bad row %r' % (update_tree_details,))
            update_tree_details[0] = DirState.NULL_PARENT_DETAILS
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        return last_reference
    def update_minimal(self, key, minikind, executable=False, fingerprint='',
                       packed_stat=None, size=0, path_utf8=None):
        """Update an entry to the state in tree 0.

        This will either create a new entry at 'key' or update an existing one.
        It also makes sure that any other records which might mention this are
        updated as well.

        :param key: (dir, name, file_id) for the new entry
        :param minikind: The type for the entry ('f' == 'file', 'd' ==
            'directory'), etc.
        :param executable: Should the executable bit be set?
        :param fingerprint: Simple fingerprint for new entry: sha1 for files,
            referenced revision id for subtrees, etc.
        :param packed_stat: Packed stat value for new entry.
        :param size: Size information for new entry
        :param path_utf8: key[0] + '/' + key[1], just passed in to avoid doing
            extra computation.

        If packed_stat and fingerprint are not given, they're invalidated in
        the entry.
        """
        block = self._find_block(key)[1]
        if packed_stat is None:
            packed_stat = DirState.NULLSTAT
        # XXX: Some callers pass '' as the packed_stat, and it seems to be
        # sometimes present in the dirstate - this seems oddly inconsistent.
        entry_index, present = self._find_entry_index(key, block)
        new_details = (minikind, fingerprint, size, executable, packed_stat)
        id_index = self._get_id_index()
        if not present:
            # new entry, synthesis cross reference here,
            existing_keys = id_index.setdefault(key[2], set())
            if not existing_keys:
                # not currently in the state, simplest case
                new_entry = key, [new_details] + self._empty_parent_info()
            else:
                # present at one or more existing other paths.
                # grab one of them and use it to generate parent
                # relocation/absent entries.
                new_entry = key, [new_details]
                for other_key in existing_keys:
                    # change the record at other to be a pointer to this new
                    # record. The loop looks similar to the change to
                    # relocations when updating an existing record but it's not:
                    # the test for existing kinds is different: this can be
                    # factored out to a helper though.
                    other_block_index, present = self._find_block_index_from_key(other_key)
                    if not present:
                        raise AssertionError('could not find block for %s' % (other_key,))
                    other_entry_index, present = self._find_entry_index(other_key,
                        self._dirblocks[other_block_index][1])
                    if not present:
                        raise AssertionError('could not find entry for %s' % (other_key,))
                    if path_utf8 is None:
                        raise AssertionError('no path')
                    self._dirblocks[other_block_index][1][other_entry_index][1][0] = \
                        ('r', path_utf8, 0, False, '')

                num_present_parents = self._num_present_parents()
                for lookup_index in xrange(1, num_present_parents + 1):
                    # grab any one entry, use it to find the right path.
                    # TODO: optimise this to reduce memory use in highly
                    # fragmented situations by reusing the relocation
                    # records.
                    update_block_index, present = \
                        self._find_block_index_from_key(other_key)
                    if not present:
                        raise AssertionError('could not find block for %s' % (other_key,))
                    update_entry_index, present = \
                        self._find_entry_index(other_key, self._dirblocks[update_block_index][1])
                    if not present:
                        raise AssertionError('could not find entry for %s' % (other_key,))
                    update_details = self._dirblocks[update_block_index][1][update_entry_index][1][lookup_index]
                    if update_details[0] in 'ar': # relocated, absent
                        # it's a pointer or absent in lookup_index's tree, use
                        # it as is.
                        new_entry[1].append(update_details)
                    else:
                        # we have the right key, make a pointer to it.
                        pointer_path = osutils.pathjoin(*other_key[0:2])
                        new_entry[1].append(('r', pointer_path, 0, False, ''))
            block.insert(entry_index, new_entry)
            existing_keys.add(key)
        else:
            # Does the new state matter?
            block[entry_index][1][0] = new_details
            # parents cannot be affected by what we do.
            # other occurrences of this id can be found
            # from the id index.
            #
            # tree index consistency: All other paths for this id in this tree
            # index must point to the correct path. We have to loop here because
            # we may have passed entries in the state with this file id already
            # that were absent - where parent entries are - and they need to be
            # converted to relocated.
            if path_utf8 is None:
                raise AssertionError('no path')
            for entry_key in id_index.setdefault(key[2], set()):
                # TODO:PROFILING: It might be faster to just update
                # rather than checking if we need to, and then overwrite
                # the one we are located at.
                if entry_key != key:
                    # this file id is at a different path in one of the
                    # other trees, so put absent pointers there
                    # This is the vertical axis in the matrix, all pointing
                    # to the real path.
                    block_index, present = self._find_block_index_from_key(entry_key)
                    if not present:
                        raise AssertionError('not present: %r', entry_key)
                    entry_index, present = self._find_entry_index(entry_key, self._dirblocks[block_index][1])
                    if not present:
                        raise AssertionError('not present: %r', entry_key)
                    self._dirblocks[block_index][1][entry_index][1][0] = \
                        ('r', path_utf8, 0, False, '')
        # add a containing dirblock if needed.
        if new_details[0] == 'd':
            subdir_key = (osutils.pathjoin(*key[0:2]), '', '')
            block_index, present = self._find_block_index_from_key(subdir_key)
            if not present:
                self._dirblocks.insert(block_index, (subdir_key[0], []))

        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
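    # Illustrative call (hypothetical values, not from the source): adding
    # a new file row at the root of tree 0 might look like
    #
    #   state.update_minimal(('', 'foo', 'foo-id'), 'f', executable=False,
    #                        fingerprint=sha1, path_utf8='foo')
    #
    # leaving packed_stat=None so the entry carries DirState.NULLSTAT until
    # a later stat-and-hash pass validates it.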
    def _validate(self):
        """Check that invariants on the dirblock are correct.

        This can be useful in debugging; it shouldn't be necessary in
        normal code.

        This must be called with a lock held.
        """
        # NOTE: This must always raise AssertionError not just assert,
        # otherwise it may not behave properly under python -O

        # TODO: All entries must have some content that's not 'a' or 'r',
        # otherwise it could just be removed.

        # TODO: All relocations must point directly to a real entry.

        # TODO: No repeated keys.

        from pprint import pformat
        self._read_dirblocks_if_needed()
        if len(self._dirblocks) > 0:
            if not self._dirblocks[0][0] == '':
                raise AssertionError(
                    "dirblocks don't start with root block:\n" + \
                    pformat(self._dirblocks))
        if len(self._dirblocks) > 1:
            if not self._dirblocks[1][0] == '':
                raise AssertionError(
                    "dirblocks missing root directory:\n" + \
                    pformat(self._dirblocks))
        # the dirblocks are sorted by their path components, name, and dir id
        dir_names = [d[0].split('/')
                for d in self._dirblocks[1:]]
        if dir_names != sorted(dir_names):
            raise AssertionError(
                "dir names are not in sorted order:\n" + \
                pformat(self._dirblocks) + \
                "\nkeys:\n" +
                pformat(dir_names))
        for dirblock in self._dirblocks:
            # within each dirblock, the entries are sorted by filename and
            # then by id.
            for entry in dirblock[1]:
                if dirblock[0] != entry[0][0]:
                    raise AssertionError(
                        "entry key for %r "
                        "doesn't match directory name in\n%r" %
                        (entry, pformat(dirblock)))
            if dirblock[1] != sorted(dirblock[1]):
                raise AssertionError(
                    "dirblock for %r is not sorted:\n%s" % \
                    (dirblock[0], pformat(dirblock)))

        def check_valid_parent():
            """Check that the current entry has a valid parent.

            This makes sure that the parent has a record,
            and that the parent isn't marked as "absent" in the
            current tree. (It is invalid to have a non-absent file in an absent
            directory.)
            """
            if entry[0][0:2] == ('', ''):
                # There should be no parent for the root row
                return
            parent_entry = self._get_entry(tree_index, path_utf8=entry[0][0])
            if parent_entry == (None, None):
                raise AssertionError(
                    "no parent entry for: %s in tree %s"
                    % (this_path, tree_index))
            if parent_entry[1][tree_index][0] != 'd':
                raise AssertionError(
                    "Parent entry for %s is not marked as a valid"
                    " directory. %s" % (this_path, parent_entry,))

        # For each file id, for each tree: either
        # the file id is not present at all; all rows with that id in the
        # key have it marked as 'absent'
        # OR the file id is present under exactly one name; any other entries
        # that mention that id point to the correct name.
        #
        # We check this with a dict per tree pointing either to the present
        # name, or None if absent.
        tree_count = self._num_present_parents() + 1
        id_path_maps = [dict() for i in range(tree_count)]
        # Make sure that all renamed entries point to the correct location.
        for entry in self._iter_entries():
            file_id = entry[0][2]
            this_path = osutils.pathjoin(entry[0][0], entry[0][1])
            if len(entry[1]) != tree_count:
                raise AssertionError(
                    "wrong number of entry details for row\n%s" \
                    ",\nexpected %d" % \
                    (pformat(entry), tree_count))
            absent_positions = 0
            for tree_index, tree_state in enumerate(entry[1]):
                this_tree_map = id_path_maps[tree_index]
                minikind = tree_state[0]
                if minikind in 'ar':
                    absent_positions += 1
                # have we seen this id before in this column?
                if file_id in this_tree_map:
                    previous_path, previous_loc = this_tree_map[file_id]
                    # any later mention of this file must be consistent with
                    # what was said before
                    if minikind == 'a':
                        if previous_path is not None:
                            raise AssertionError(
                                "file %s is absent in row %r but also present "
                                "at %r" %
                                (file_id, entry, previous_path))
                    elif minikind == 'r':
                        target_location = tree_state[1]
                        if previous_path != target_location:
                            raise AssertionError(
                                "file %s relocation in row %r but also at %r"
                                % (file_id, entry, previous_path))
                    else:
                        # a file, directory, etc - may have been previously
                        # pointed to by a relocation, which must point here
                        if previous_path != this_path:
                            raise AssertionError(
                                "entry %r inconsistent with previous path %r "
                                "seen at %r" %
                                (entry, previous_path, previous_loc))
                        check_valid_parent()
                else:
                    if minikind == 'a':
                        # absent; should not occur anywhere else
                        this_tree_map[file_id] = None, this_path
                    elif minikind == 'r':
                        # relocation, must occur at expected location
                        this_tree_map[file_id] = tree_state[1], this_path
                    else:
                        this_tree_map[file_id] = this_path, this_path
                        check_valid_parent()
            if absent_positions == tree_count:
                raise AssertionError(
                    "entry %r has no data for any tree." % (entry,))
    def _wipe_state(self):
        """Forget all state information about the dirstate."""
        self._header_state = DirState.NOT_IN_MEMORY
        self._dirblock_state = DirState.NOT_IN_MEMORY
        self._changes_aborted = False
        self._parents = []
        self._ghosts = []
        self._dirblocks = []
        self._id_index = None
        self._packed_stat_index = None
        self._end_of_header = None
        self._cutoff_time = None
        self._split_path_cache = {}
    def lock_read(self):
        """Acquire a read lock on the dirstate."""
        if self._lock_token is not None:
            raise errors.LockContention(self._lock_token)
        # TODO: jam 20070301 Rather than wiping completely, if the blocks are
        #       already in memory, we could read just the header and check for
        #       any modification. If not modified, we can just leave things
        #       alone
        self._lock_token = lock.ReadLock(self._filename)
        self._lock_state = 'r'
        self._state_file = self._lock_token.f
        self._wipe_state()

    def lock_write(self):
        """Acquire a write lock on the dirstate."""
        if self._lock_token is not None:
            raise errors.LockContention(self._lock_token)
        # TODO: jam 20070301 Rather than wiping completely, if the blocks are
        #       already in memory, we could read just the header and check for
        #       any modification. If not modified, we can just leave things
        #       alone
        self._lock_token = lock.WriteLock(self._filename)
        self._lock_state = 'w'
        self._state_file = self._lock_token.f
        self._wipe_state()
"""Drop any locks held on the dirstate."""
2740
if self._lock_token is None:
2741
raise errors.LockNotHeld(self)
2742
# TODO: jam 20070301 Rather than wiping completely, if the blocks are
2743
# already in memory, we could read just the header and check for
2744
# any modification. If not modified, we can just leave things
2746
self._state_file = None
2747
self._lock_state = None
2748
self._lock_token.unlock()
2749
self._lock_token = None
2750
self._split_path_cache = {}
2752
    def _requires_lock(self):
        """Check that a lock is currently held by someone on the dirstate."""
        if not self._lock_token:
            raise errors.ObjectNotLocked(self)

# Try to load the compiled form if possible
try:
    from bzrlib._dirstate_helpers_c import (
        _read_dirblocks_c as _read_dirblocks,
        bisect_dirblock_c as bisect_dirblock,
        _bisect_path_left_c as _bisect_path_left,
        _bisect_path_right_c as _bisect_path_right,
        cmp_by_dirs_c as cmp_by_dirs,
        )
except ImportError:
    from bzrlib._dirstate_helpers_py import (
        _read_dirblocks_py as _read_dirblocks,
        bisect_dirblock_py as bisect_dirblock,
        _bisect_path_left_py as _bisect_path_left,
        _bisect_path_right_py as _bisect_path_right,
        cmp_by_dirs_py as cmp_by_dirs,
        )