# Copyright (C) 2006, 2007 Canonical Ltd
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

"""DirState objects record the state of a directory and its bzr metadata.

Pseudo EBNF grammar for the state file. Fields are separated by NULLs, and
lines by NL. The field delimiters are omitted in the grammar, line delimiters
are not - this is done for clarity of reading. All string data is in utf8.
MINIKIND = "f" | "d" | "l" | "a" | "r" | "t";
WHOLE_NUMBER = {digit}, digit;
REVISION_ID = a non-empty utf8 string;

dirstate format = header line, full checksum, row count, parent details,
 ghost_details, entries;
header line = "#bazaar dirstate flat format 3", NL;
full checksum = "adler32: ", ["-"], WHOLE_NUMBER, NL;
row count = "num_entries: ", digit, NL;
parent_details = WHOLE_NUMBER, {REVISION_ID}*, NL;
ghost_details = WHOLE_NUMBER, {REVISION_ID}*, NL;
entry = entry_key, current_entry_details, {parent_entry_details};
entry_key = dirname, basename, fileid;
current_entry_details = common_entry_details, working_entry_details;
parent_entry_details = common_entry_details, history_entry_details;
common_entry_details = MINIKIND, fingerprint, size, executable
working_entry_details = packed_stat
history_entry_details = REVISION_ID;
fingerprint = a nonempty utf8 sequence with meaning defined by minikind.

Given this definition, the following is useful to know:
entry (aka row) - all the data for a given key.
entry[0]: The key (dirname, basename, fileid)
entry[0][0]: dirname
entry[0][1]: basename
entry[0][2]: fileid
entry[1]: The tree(s) data for this path and id combination.
entry[1][0]: The current tree
entry[1][1]: The second tree

For an entry for a tree, we have (using tree 0 - current tree) to demonstrate:
entry[1][0][0]: minikind
entry[1][0][1]: fingerprint
entry[1][0][2]: size
entry[1][0][3]: executable
entry[1][0][4]: packed_stat
OR (for non tree-0):
entry[1][1][4]: revision_id

There may be multiple rows at the root, one per id present in the root, so the
in memory root row is now:
self._dirblocks[0] -> ('', [entry ...]),
and the entries in there are
entries[0][0]: ''
entries[0][1]: ''
entries[0][2]: file_id
entries[1][0]: The tree data for the current tree for this fileid at /
etc.
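
For example, a versioned file 'src/hello.c' present only in the current tree
might be held as an entry like this (all values illustrative only):

  (('src', 'hello.c', 'hello.c-file-id'), [
      ('f', '60cacd2a7b2e1f79399a30a4b0ad2f113621abcf', 12, False,
       'AAAAREUHaIpFB2iKAAADAQAtkqUAAIGk'),
      ])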

'r' is a relocated entry: This path is not present in this tree with this id,
but the id can be found at another location. The fingerprint is used to
point to the target location.
'a' is an absent entry: In that tree the id is not present at this path.
'd' is a directory entry: This path in this tree is a directory with the
current file id. There is no fingerprint for directories.
'f' is a file entry: As for directory, but it's a file. The fingerprint is a
sha1 value.
'l' is a symlink entry: As for directory, but a symlink. The fingerprint is the
link target.
't' is a reference to a nested subtree; the fingerprint is the referenced
revision.

--- Format 1 had the following different definition: ---
rows = dirname, NULL, basename, NULL, MINIKIND, NULL, fileid_utf8, NULL,
    WHOLE_NUMBER (* size *), NULL, packed stat, NULL, sha1|symlink target,
PARENT ROW = NULL, revision_utf8, NULL, MINIKIND, NULL, dirname, NULL,
    basename, NULL, WHOLE_NUMBER (* size *), NULL, "y" | "n", NULL,

PARENT ROWs are emitted for every parent that is not in the ghosts details
line. That is, if the parents are foo, bar, baz, and the ghosts are bar, then
each row will have a PARENT ROW for foo and baz, but not for bar.

In any tree, a kind of 'moved' indicates that the fingerprint field
(which we treat as opaque data specific to the 'kind' anyway) has the
details for the id of this row in that tree.

I'm strongly tempted to add an id->path index as well, but I think that
where we need id->path mapping, we also usually read the whole file, so
I'm going to skip that for the moment, as we have the ability to locate
via bisect any path in any tree, and if we lookup things by path, we can
accumulate an id->path mapping as we go, which will tend to match what we
looked for.

I plan to implement this asap, so please speak up now to alter/tweak the
design - and once we stabilise on this, I'll update the wiki page for it.

The rationale for all this is that we want fast operations for the
common case (diff/status/commit/merge on all files) and extremely fast
operations for the less common, but still frequent, case (status/diff/commit
on specific files). Operations on specific files involve a scan for all
the children of a path, *in every involved tree*, which the current
format did not accommodate.

Design priorities:
 1) Fast end to end use for bzr's top 5 use cases (commit/diff/status/merge/???)
 2) fall back to the current object model as needed.
 3) scale usably to the largest trees known today - say 50K entries (mozilla
    is an example of this)

Locking:
 Eventually reuse dirstate objects across locks IFF the dirstate file has not
 been modified, but this will require that we flush/ignore cached stat-hit data
 because we won't want to restat all files on disk just because a lock was
 acquired, yet we cannot trust the data after the previous lock was released.

Memory representation:
 vector of all directories, and vector of the children?
   root_entrie = (direntry for root, [parent_direntries_for_root]),
   dirblocks = [
   ('', ['data for achild', 'data for bchild', 'data for cchild'])
   ('dir', ['achild', 'cchild', 'echild'])
   ]
   - single bisect to find N subtrees from a path spec
   - in-order for serialisation - this is 'dirblock' grouping.
   - insertion of a file '/a' affects only the '/' child-vector, that is, to
     insert 10K elements from scratch does not generate O(N^2) memmoves of a
     single vector; rather each individual child-vector move tends to be
     limited to a manageable number. Will scale badly on trees with 10K
     entries in a single directory. Compare with Inventory.InventoryDirectory
     which has a dictionary for the children. No bisect capability, can only
     probe for exact matches, or grab all elements and sort.
   - What's the risk of error here? Once we have the base format being
     processed we should have a net win regardless of optimality. So we are
     going to go with what seems reasonable.

Maybe we should do a test profile of these core structures - 10K simulated
searches/lookups/etc?
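
As a rough sketch of the lookup this layout enables (plain stdlib bisect
over the block names; the real code uses a bisect_dirblock helper with a
split-path cache):

    import bisect
    names = [block[0] for block in dirblocks]
    idx = bisect.bisect_left(names, 'dir')
    # dirblocks[idx] is now ('dir', ['achild', 'cchild', 'echild'])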
166
Objects for each row?
167
The lifetime of Dirstate objects is current per lock, but see above for
168
possible extensions. The lifetime of a row from a dirstate is expected to be
169
very short in the optimistic case: which we are optimising for. For instance,
170
subtree status will determine from analysis of the disk data what rows need to
171
be examined at all, and will be able to determine from a single row whether
172
that file has altered or not, so we are aiming to process tens of thousands of
173
entries each second within the dirstate context, before exposing anything to
174
the larger codebase. This suggests we want the time for a single file
175
comparison to be < 0.1 milliseconds. That would give us 10000 paths per second
176
processed, and to scale to 100 thousand we'll another order of magnitude to do
177
that. Now, as the lifetime for all unchanged entries is the time to parse, stat
178
the file on disk, and then immediately discard, the overhead of object creation
179
becomes a significant cost.
181
Figures: Creating a tuple from from 3 elements was profiled at 0.0625
182
microseconds, whereas creating a object which is subclassed from tuple was
183
0.500 microseconds, and creating an object with 3 elements and slots was 3
184
microseconds long. 0.1 milliseconds is 100 microseconds, and ideally we'll get
185
down to 10 microseconds for the total processing - having 33% of that be object
186
creation is a huge overhead. There is a potential cost in using tuples within
187
each row which is that the conditional code to do comparisons may be slower
188
than method invocation, but method invocation is known to be slow due to stack
189
frame creation, so avoiding methods in these tight inner loops in unfortunately
190
desirable. We can consider a pyrex version of this with objects in future if
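
The kind of micro-benchmark behind such figures can be reproduced along
these lines (illustrative; absolute numbers vary by machine and Python
version; the names a, b, c exist only to stop constant folding):

    import timeit
    # time plain 3-element tuple creation
    print timeit.Timer("(a, b, c)", "a, b, c = 1, 'abc', 34").timeit()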
"""

import bisect
import errno
import os
from stat import S_IEXEC
import sys
import time
import zlib

from bzrlib import (
    errors,
    inventory,
    lock,
    osutils,
    trace,
    )


class _Bisector(object):
    """This just keeps track of information as we are bisecting."""


class DirState(object):
    """Record directory and metadata state for fast access.

    A dirstate is a specialised data structure for managing local working
    tree state information. It's not yet well defined whether it is platform
    specific, and if it is how we detect/parameterise that.
    """

    _kind_to_minikind = {
            'absent': 'a',
            'file': 'f',
            'directory': 'd',
            'relocated': 'r',
            'symlink': 'l',
            'tree-reference': 't',
        }
    _minikind_to_kind = {
            'a': 'absent',
            'f': 'file',
            'd': 'directory',
            'l': 'symlink',
            'r': 'relocated',
            't': 'tree-reference',
        }
    _to_yesno = {True: 'y', False: 'n'} # TODO profile the performance gain
        # of using int conversion rather than a dict here. AND BLAME ANDREW IF
        # THIS IS SLOWER!

    # TODO: jam 20070221 Make sure we handle when there are duplicated records
    #       (like when we remove + add the same path, or we have a rename)
    # TODO: jam 20070221 Figure out what to do if we have a record that exceeds
    #       the BISECT_PAGE_SIZE. For now, we just have to make it large enough
    #       that we are sure a single record will always fit.
    BISECT_PAGE_SIZE = 4096

    NOT_IN_MEMORY = 0
    IN_MEMORY_UNMODIFIED = 1
    IN_MEMORY_MODIFIED = 2

    # A pack_stat (the x's) that is just noise and will never match the output
    # of base64 encode.
    NULLSTAT = 'x' * 32
    NULL_PARENT_DETAILS = ('a', '', 0, False, '')

    HEADER_FORMAT_2 = '#bazaar dirstate flat format 2\n'
    HEADER_FORMAT_3 = '#bazaar dirstate flat format 3\n'

    def __init__(self, path):
        """Create a DirState object.

        :attr _root_entrie: The root row of the directory/file information,
            - contains the path to / - '', ''
            - kind of 'directory',
            - the file id of the root in utf8
            - size of 0
            - a packed state
            - and no sha information.
        :param path: The path at which the dirstate file on disk should live.
        """
        # _header_state and _dirblock_state represent the current state
        # of the dirstate metadata and the per-row data respectively.
        # NOT_IN_MEMORY indicates that no data is in memory
        # IN_MEMORY_UNMODIFIED indicates that what we have in memory
        #   is the same as is on disk
        # IN_MEMORY_MODIFIED indicates that we have a modified version
        #   of what is on disk.
        # In future we will add more granularity, for instance _dirblock_state
        # will probably support partially-in-memory as a separate variable,
        # allowing for partially-in-memory unmodified and partially-in-memory
        # modified states.
        self._header_state = DirState.NOT_IN_MEMORY
        self._dirblock_state = DirState.NOT_IN_MEMORY
        self._dirblocks = []
        self._ghosts = []
        self._parents = []
        self._state_file = None
        self._filename = path
        self._lock_token = None
        self._lock_state = None
        self._id_index = None
        self._end_of_header = None
        self._cutoff_time = None
        self._split_path_cache = {}
        self._bisect_page_size = DirState.BISECT_PAGE_SIZE

    def add(self, path, file_id, kind, stat, fingerprint):
        """Add a path to be tracked.

        :param path: The path within the dirstate - '' is the root, 'foo' is the
            path foo within the root, 'foo/bar' is the path bar within foo
            within the root.
        :param file_id: The file id of the path being added.
        :param kind: The kind of the path, as a string like 'file',
            'directory', etc.
        :param stat: The output of os.lstat for the path.
        :param fingerprint: The sha value of the file,
            or the target of a symlink,
            or the referenced revision id for tree-references,
            or '' for directories.
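
        For example (illustrative only; stat_result is assumed to come from
        os.lstat on the file):

            state.add('foo/bar.txt', 'bar-id', 'file', stat_result,
                      '60cacd2a7b2e1f79399a30a4b0ad2f113621abcf')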
        """
        # adding a file:
        # find the block it's in.
        # find the location in the block.
        # check it's not there.
        # add it.
        #------- copied from inventory.make_entry
        # --- normalized_filename wants a unicode basename only, so get one.
        dirname, basename = osutils.split(path)
        # we don't import normalized_filename directly because we want to be
        # able to change the implementation at runtime for tests.
        norm_name, can_access = osutils.normalized_filename(basename)
        if norm_name != basename:
            if can_access:
                basename = norm_name
            else:
                raise errors.InvalidNormalization(path)
        # now that we've normalised, we need the correct utf8 path and
        # dirname and basename elements. This single encode and split should be
        # faster than three separate encodes.
        utf8path = (dirname + '/' + basename).strip('/').encode('utf8')
        dirname, basename = osutils.split(utf8path)
        assert file_id.__class__ == str, \
            "must be a utf8 file_id not %s" % (type(file_id),)
        # Make sure the file_id does not exist in this tree
        file_id_entry = self._get_entry(0, fileid_utf8=file_id)
        if file_id_entry != (None, None):
            path = osutils.pathjoin(file_id_entry[0][0], file_id_entry[0][1])
            kind = DirState._minikind_to_kind[file_id_entry[1][0][0]]
            info = '%s:%s' % (kind, path)
            raise errors.DuplicateFileId(file_id, info)
        first_key = (dirname, basename, '')
        block_index, present = self._find_block_index_from_key(first_key)
        if present:
            # check the path is not in the tree
            block = self._dirblocks[block_index][1]
            entry_index, _ = self._find_entry_index(first_key, block)
            while (entry_index < len(block) and
                block[entry_index][0][0:2] == first_key[0:2]):
                if block[entry_index][1][0][0] not in 'ar':
                    # this path is in the dirstate in the current tree.
                    raise Exception("adding already added path!")
                entry_index += 1
        else:
            # The block where we want to put the file is not present. But it
            # might be because the directory was empty, or not loaded yet. Look
            # for a parent entry, if not found, raise NotVersionedError
            parent_dir, parent_base = osutils.split(dirname)
            parent_block_idx, parent_entry_idx, _, parent_present = \
                self._get_block_entry_index(parent_dir, parent_base, 0)
            if not parent_present:
                raise errors.NotVersionedError(path, str(self))
            self._ensure_block(parent_block_idx, parent_entry_idx, dirname)
        block = self._dirblocks[block_index][1]
        entry_key = (dirname, basename, file_id)
        if stat is None:
            size = 0
            packed_stat = DirState.NULLSTAT
        else:
            size = stat.st_size
            packed_stat = pack_stat(stat)
        parent_info = self._empty_parent_info()
        minikind = DirState._kind_to_minikind[kind]
        if kind == 'file':
            entry_data = entry_key, [
                (minikind, fingerprint, size, False, packed_stat),
                ] + parent_info
        elif kind == 'directory':
            entry_data = entry_key, [
                (minikind, '', 0, False, packed_stat),
                ] + parent_info
        elif kind == 'symlink':
            entry_data = entry_key, [
                (minikind, fingerprint, size, False, packed_stat),
                ] + parent_info
        elif kind == 'tree-reference':
            entry_data = entry_key, [
                (minikind, fingerprint, 0, False, packed_stat),
                ] + parent_info
        else:
            raise errors.BzrError('unknown kind %r' % kind)
        entry_index, present = self._find_entry_index(entry_key, block)
        assert not present, "basename %r already added" % basename
        block.insert(entry_index, entry_data)

        if kind == 'directory':
            # insert a new dirblock
            self._ensure_block(block_index, entry_index, utf8path)
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        if self._id_index:
            self._id_index.setdefault(entry_key[2], set()).add(entry_key)

    def _bisect(self, dir_name_list):
        """Bisect through the disk structure for specific rows.

        :param dir_name_list: A list of (dir, name) pairs.
        :return: A dict mapping (dir, name) => list of entries for found
            entries. Missing entries will not be in the map.
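
        For example (illustrative; each value is a list because several file
        ids can exist at one path across trees):

            state._bisect([('', 'a'), ('dir', 'b')])
            => {('', 'a'): [entry for 'a'], ('dir', 'b'): [entry for 'b']}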
        """
        self._requires_lock()
        # We need the file pointer to be right after the initial header block
        self._read_header_if_needed()
        # If _dirblock_state was in memory, we should just return info from
        # there, this function is only meant to handle when we want to read
        # part of the disk.
        assert self._dirblock_state == DirState.NOT_IN_MEMORY

        # The disk representation is generally info + '\0\n\0' at the end. But
        # for bisecting, it is easier to treat this as '\0' + info + '\0\n'
        # Because it means we can sync on the '\n'
        state_file = self._state_file
        file_size = os.fstat(state_file.fileno()).st_size
        # We end up with 2 extra fields, we should have a trailing '\n' to
        # ensure that we read the whole record, and we should have a precursor
        # '' which ensures that we start after the previous '\n'
        entry_field_count = self._fields_per_entry() + 1

        low = self._end_of_header
        high = file_size - 1 # Ignore the final '\0'
        # Map from (dir, name) => entry
        found = {}

        # Avoid infinite seeking
        max_count = 30*len(dir_name_list)
        count = 0
        # pending is a list of places to look.
        # each entry is a tuple of low, high, dir_names
        #   low -> the first byte offset to read (inclusive)
        #   high -> the last byte offset (inclusive)
        #   dir_names -> The list of (dir, name) pairs that should be found in
        #                the [low, high] range
        pending = [(low, high, dir_name_list)]

        page_size = self._bisect_page_size

        fields_to_entry = self._get_fields_to_entry()

        while pending:
            low, high, cur_files = pending.pop()

            if not cur_files or low >= high:
                # Nothing to find
                continue

            count += 1
            if count > max_count:
                raise errors.BzrError('Too many seeks, most likely a bug.')

            mid = max(low, (low+high-page_size)/2)

            state_file.seek(mid)
            # limit the read size, so we don't end up reading data that we have
            # already read.
            read_size = min(page_size, (high-mid)+1)
            block = state_file.read(read_size)

            start = mid
            entries = block.split('\n')

            if len(entries) < 2:
                # We didn't find a '\n', so we cannot have found any records.
                # So put this range back and try again. But we know we have to
                # increase the page size, because a single read did not contain
                # a record break (so records must be larger than page_size)
                page_size *= 2
                pending.append((low, high, cur_files))
                continue

            # Check the first and last entries, in case they are partial, or if
            # we don't care about the rest of this page
            first_entry_num = 0
            first_fields = entries[0].split('\0')
            if len(first_fields) < entry_field_count:
                # We didn't get the complete first entry
                # so move start, and grab the next, which
                # should be a full entry
                start += len(entries[0])+1
                first_fields = entries[1].split('\0')
                first_entry_num = 1

            if len(first_fields) <= 2:
                # We didn't even get a filename here... what do we do?
                # Try a large page size and repeat this query
                page_size *= 2
                pending.append((low, high, cur_files))
                continue

            # Find what entries we are looking for, which occur before and
            # after this first record.
            after = start
            first_dir_name = (first_fields[1], first_fields[2])
            first_loc = bisect.bisect_left(cur_files, first_dir_name)

            # These exist before the current location
            pre = cur_files[:first_loc]
            # These occur after the current location, which may be in the
            # data we read, or might be after the last entry
            post = cur_files[first_loc:]

            if post and len(first_fields) >= entry_field_count:
                # We have files after the first entry

                # Parse the last entry
                last_entry_num = len(entries)-1
                last_fields = entries[last_entry_num].split('\0')
                if len(last_fields) < entry_field_count:
                    # The very last hunk was not complete,
                    # read the previous hunk
                    after = mid + len(block) - len(entries[-1])
                    last_entry_num -= 1
                    last_fields = entries[last_entry_num].split('\0')
                else:
                    after = mid + len(block)

                last_dir_name = (last_fields[1], last_fields[2])
                last_loc = bisect.bisect_right(post, last_dir_name)

                middle_files = post[:last_loc]
                post = post[last_loc:]

                if middle_files:
                    # We have files that should occur in this block
                    # (>= first, <= last)
                    # Either we will find them here, or we can mark them as
                    # missing.

                    if middle_files[0] == first_dir_name:
                        # We might need to go before this location
                        pre.append(first_dir_name)
                    if middle_files[-1] == last_dir_name:
                        post.insert(0, last_dir_name)

                    # Find out what paths we have
                    paths = {first_dir_name:[first_fields]}
                    # last_dir_name might == first_dir_name so we need to be
                    # careful if we should append rather than overwrite
                    if last_entry_num != first_entry_num:
                        paths.setdefault(last_dir_name, []).append(last_fields)
                    for num in xrange(first_entry_num+1, last_entry_num):
                        # TODO: jam 20070223 We are already splitting here, so
                        #       shouldn't we just split the whole thing rather
                        #       than doing the split again in add_one_record?
                        fields = entries[num].split('\0')
                        dir_name = (fields[1], fields[2])
                        paths.setdefault(dir_name, []).append(fields)

                    for dir_name in middle_files:
                        for fields in paths.get(dir_name, []):
                            # offset by 1 because of the opening '\0'
                            # consider changing fields_to_entry to avoid the
                            # extra list slice
                            entry = fields_to_entry(fields[1:])
                            found.setdefault(dir_name, []).append(entry)

            # Now we have split up everything into pre, middle, and post, and
            # we have handled everything that fell in 'middle'.
            # We add 'post' first, so that we prefer to seek towards the
            # beginning, so that we will tend to go as early as we need, and
            # then only seek forward after that.
            if post:
                pending.append((after, high, post))
            if pre:
                pending.append((low, start-1, pre))

        # Consider that we may want to return the directory entries in sorted
        # order. For now, we just return them in whatever order we found them,
        # and leave it up to the caller if they care if it is ordered or not.
        return found

    def _bisect_dirblocks(self, dir_list):
        """Bisect through the disk structure to find entries in given dirs.

        _bisect_dirblocks is meant to find the contents of directories, which
        differs from _bisect, which only finds individual entries.

        :param dir_list: A sorted list of directory names ['', 'dir', 'foo'].
        :return: A map from dir => entries_for_dir
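
        For example (illustrative):

            state._bisect_dirblocks(['', 'dir'])
            => {'': [entries directly in the root],
                'dir': [entries directly in 'dir']}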
        """
        # TODO: jam 20070223 A lot of the bisecting logic could be shared
        #       between this and _bisect. It would require parameterizing the
        #       inner loop with a function, though. We should evaluate the
        #       performance difference.
        self._requires_lock()
        # We need the file pointer to be right after the initial header block
        self._read_header_if_needed()
        # If _dirblock_state was in memory, we should just return info from
        # there, this function is only meant to handle when we want to read
        # part of the disk.
        assert self._dirblock_state == DirState.NOT_IN_MEMORY

        # The disk representation is generally info + '\0\n\0' at the end. But
        # for bisecting, it is easier to treat this as '\0' + info + '\0\n'
        # Because it means we can sync on the '\n'
        state_file = self._state_file
        file_size = os.fstat(state_file.fileno()).st_size
        # We end up with 2 extra fields, we should have a trailing '\n' to
        # ensure that we read the whole record, and we should have a precursor
        # '' which ensures that we start after the previous '\n'
        entry_field_count = self._fields_per_entry() + 1

        low = self._end_of_header
        high = file_size - 1 # Ignore the final '\0'
        # Map from dir => entry
        found = {}

        # Avoid infinite seeking
        max_count = 30*len(dir_list)
        count = 0
        # pending is a list of places to look.
        # each entry is a tuple of low, high, dirs
        #   low -> the first byte offset to read (inclusive)
        #   high -> the last byte offset (inclusive)
        #   dirs -> The list of directories that should be found in
        #           the [low, high] range
        pending = [(low, high, dir_list)]

        page_size = self._bisect_page_size

        fields_to_entry = self._get_fields_to_entry()

        while pending:
            low, high, cur_dirs = pending.pop()

            if not cur_dirs or low >= high:
                # Nothing to find
                continue

            count += 1
            if count > max_count:
                raise errors.BzrError('Too many seeks, most likely a bug.')

            mid = max(low, (low+high-page_size)/2)

            state_file.seek(mid)
            # limit the read size, so we don't end up reading data that we have
            # already read.
            read_size = min(page_size, (high-mid)+1)
            block = state_file.read(read_size)

            start = mid
            entries = block.split('\n')

            if len(entries) < 2:
                # We didn't find a '\n', so we cannot have found any records.
                # So put this range back and try again. But we know we have to
                # increase the page size, because a single read did not contain
                # a record break (so records must be larger than page_size)
                page_size *= 2
                pending.append((low, high, cur_dirs))
                continue

            # Check the first and last entries, in case they are partial, or if
            # we don't care about the rest of this page
            first_entry_num = 0
            first_fields = entries[0].split('\0')
            if len(first_fields) < entry_field_count:
                # We didn't get the complete first entry
                # so move start, and grab the next, which
                # should be a full entry
                start += len(entries[0])+1
                first_fields = entries[1].split('\0')
                first_entry_num = 1

            if len(first_fields) <= 1:
                # We didn't even get a dirname here... what do we do?
                # Try a large page size and repeat this query
                page_size *= 2
                pending.append((low, high, cur_dirs))
                continue

            # Find what entries we are looking for, which occur before and
            # after this first record.
            after = start
            first_dir = first_fields[1]
            first_loc = bisect.bisect_left(cur_dirs, first_dir)

            # These exist before the current location
            pre = cur_dirs[:first_loc]
            # These occur after the current location, which may be in the
            # data we read, or might be after the last entry
            post = cur_dirs[first_loc:]

            if post and len(first_fields) >= entry_field_count:
                # We have records to look at after the first entry

                # Parse the last entry
                last_entry_num = len(entries)-1
                last_fields = entries[last_entry_num].split('\0')
                if len(last_fields) < entry_field_count:
                    # The very last hunk was not complete,
                    # read the previous hunk
                    after = mid + len(block) - len(entries[-1])
                    last_entry_num -= 1
                    last_fields = entries[last_entry_num].split('\0')
                else:
                    after = mid + len(block)

                last_dir = last_fields[1]
                last_loc = bisect.bisect_right(post, last_dir)

                middle_files = post[:last_loc]
                post = post[last_loc:]

                if middle_files:
                    # We have files that should occur in this block
                    # (>= first, <= last)
                    # Either we will find them here, or we can mark them as
                    # missing.

                    if middle_files[0] == first_dir:
                        # We might need to go before this location
                        pre.append(first_dir)
                    if middle_files[-1] == last_dir:
                        post.insert(0, last_dir)

                    # Find out what paths we have
                    paths = {first_dir:[first_fields]}
                    # last_dir might == first_dir so we need to be
                    # careful if we should append rather than overwrite
                    if last_entry_num != first_entry_num:
                        paths.setdefault(last_dir, []).append(last_fields)
                    for num in xrange(first_entry_num+1, last_entry_num):
                        # TODO: jam 20070223 We are already splitting here, so
                        #       shouldn't we just split the whole thing rather
                        #       than doing the split again in add_one_record?
                        fields = entries[num].split('\0')
                        paths.setdefault(fields[1], []).append(fields)

                    for cur_dir in middle_files:
                        for fields in paths.get(cur_dir, []):
                            # offset by 1 because of the opening '\0'
                            # consider changing fields_to_entry to avoid the
                            # extra list slice
                            entry = fields_to_entry(fields[1:])
                            found.setdefault(cur_dir, []).append(entry)

            # Now we have split up everything into pre, middle, and post, and
            # we have handled everything that fell in 'middle'.
            # We add 'post' first, so that we prefer to seek towards the
            # beginning, so that we will tend to go as early as we need, and
            # then only seek forward after that.
            if post:
                pending.append((after, high, post))
            if pre:
                pending.append((low, start-1, pre))

        return found

    def _bisect_recursive(self, dir_name_list):
        """Bisect for entries for all paths and their children.

        This will use bisect to find all records for the supplied paths. It
        will then continue to bisect for any records which are marked as
        directories. (and renames?)

        :param dir_name_list: A sorted list of (dir, name) pairs
            eg: [('', 'a'), ('', 'f'), ('a/b', 'c')]
        :return: A dictionary mapping (dir, name, file_id) => [tree_info]
        """
        # Map from (dir, name, file_id) => [tree_info]
        found = {}

        found_dir_names = set()

        # Directories that have been read
        processed_dirs = set()
        # Get the ball rolling with the first bisect for all entries.
        newly_found = self._bisect(dir_name_list)

        while newly_found:
            # Directories that need to be read
            pending_dirs = set()
            paths_to_search = set()
            for entry_list in newly_found.itervalues():
                for dir_name_id, trees_info in entry_list:
                    found[dir_name_id] = trees_info
                    found_dir_names.add(dir_name_id[:2])
                    is_dir = False
                    for tree_info in trees_info:
                        minikind = tree_info[0]
                        if minikind == 'd':
                            if is_dir:
                                # We already processed this one as a directory,
                                # we don't need to do the extra work again.
                                continue
                            subdir, name, file_id = dir_name_id
                            path = osutils.pathjoin(subdir, name)
                            is_dir = True
                            if path not in processed_dirs:
                                pending_dirs.add(path)
                        elif minikind == 'r':
                            # Rename, we need to directly search the target
                            # which is contained in the fingerprint column
                            dir_name = osutils.split(tree_info[1])
                            if dir_name[0] in pending_dirs:
                                # This entry will be found in the dir search
                                continue
                            # TODO: We need to check if this entry has
                            #       already been found. Otherwise we might be
                            #       hitting infinite recursion.
                            if dir_name not in found_dir_names:
                                paths_to_search.add(dir_name)
            # Now we have a list of paths to look for directly, and
            # directory blocks that need to be read.
            # newly_found is mixing the keys between (dir, name) and path
            # entries, but that is okay, because we only really care about the
            # file_id matching.
            newly_found = self._bisect(sorted(paths_to_search))
            newly_found.update(self._bisect_dirblocks(sorted(pending_dirs)))
            processed_dirs.update(pending_dirs)
        return found

    def _empty_parent_info(self):
        return [DirState.NULL_PARENT_DETAILS] * (len(self._parents) -
                                                 len(self._ghosts))

    def _ensure_block(self, parent_block_index, parent_row_index, dirname):
        """Ensure a block for dirname exists.

        This function exists to let callers which know that there is a
        directory dirname ensure that the block for it exists. This block can
        fail to exist because of demand loading, or because a directory had no
        children. In either case it is not an error. It is however an error to
        call this if there is no parent entry for the directory, and thus the
        function requires the coordinates of such an entry to be provided.

        The root row is special cased and can be indicated with a parent block
        and row of -1.

        :param parent_block_index: The index of the block in which dirname's row
            exists.
        :param parent_row_index: The index in the parent block where the row
            exists.
        :param dirname: The utf8 dirname to ensure there is a block for.
        :return: The index for the block.
        """
        if dirname == '' and parent_row_index == 0 and parent_block_index == 0:
            # This is the signature of the root row, and the
            # contents-of-root row is always index 1
            return 1
        # the basename of the directory must be the end of its full name.
        if not (parent_block_index == -1 and
            parent_row_index == -1 and dirname == ''):
            assert dirname.endswith(
                self._dirblocks[parent_block_index][1][parent_row_index][0][1])
        block_index, present = self._find_block_index_from_key((dirname, '', ''))
        if not present:
            ## In future, when doing partial parsing, this should load and
            # populate the entire block.
            self._dirblocks.insert(block_index, (dirname, []))
        return block_index

    def _entries_to_current_state(self, new_entries):
        """Load new_entries into self.dirblocks.

        Process new_entries into the current state object, making them the active
        state.

        :param new_entries: A sorted list of entries. This function does not sort
            to prevent unneeded overhead when callers have a sorted list already.
        :return: Nothing.
        """
        assert new_entries[0][0][0:2] == ('', ''), \
            "Missing root row %r" % (new_entries[0][0],)
        # The two blocks here are deliberate: the root block and the
        # contents-of-root block.
        self._dirblocks = [('', []), ('', [])]
        current_block = self._dirblocks[0][1]
        current_dirname = ''
        append_entry = current_block.append
        for entry in new_entries:
            if entry[0][0] != current_dirname:
                # new block - different dirname
                current_block = []
                current_dirname = entry[0][0]
                self._dirblocks.append((current_dirname, current_block))
                append_entry = current_block.append
            # append the entry to the current block
            append_entry(entry)
        self._split_root_dirblock_into_contents()

    def _split_root_dirblock_into_contents(self):
        """Split the root dirblocks into root and contents-of-root.

        After parsing by path, we end up with root entries and contents-of-root
        entries in the same block. This loop splits them out again.
        """
        # The above loop leaves the "root block" entries mixed with the
        # "contents-of-root block". But we don't want an if check on
        # all entries, so instead we just fix it up here.
        assert self._dirblocks[1] == ('', [])
        root_block = []
        contents_of_root_block = []
        for entry in self._dirblocks[0][1]:
            if not entry[0][1]: # This is a root entry
                root_block.append(entry)
            else:
                contents_of_root_block.append(entry)
        self._dirblocks[0] = ('', root_block)
        self._dirblocks[1] = ('', contents_of_root_block)

    def _entry_to_line(self, entry):
        """Serialize entry to a NULL delimited line ready for _get_output_lines.

        :param entry: An entry_tuple as defined in the module docstring.
        """
        entire_entry = list(entry[0])
        for tree_number, tree_data in enumerate(entry[1]):
            # (minikind, fingerprint, size, executable, tree_specific_string)
            entire_entry.extend(tree_data)
            # 3 for the key, 5 for the fields per tree.
            tree_offset = 3 + tree_number * 5
            # minikind
            entire_entry[tree_offset + 0] = tree_data[0]
            # size
            entire_entry[tree_offset + 2] = str(tree_data[2])
            # executable
            entire_entry[tree_offset + 3] = DirState._to_yesno[tree_data[3]]
        return '\0'.join(entire_entry)

    def _fields_per_entry(self):
        """How many null separated fields should be in each entry row.

        Each line now has an extra '\n' field which is not used,
        so we just skip over it.

        entry size:
            3 fields for the key
            + number of fields per tree_data (5) * tree count
            + newline
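
        For example, with one parent tree present this is
        3 + 5 * 2 + 1 = 14 fields.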
        """
        tree_count = 1 + self._num_present_parents()
        return 3 + 5 * tree_count + 1

    def _find_block(self, key, add_if_missing=False):
        """Return the block that key should be present in.

        :param key: A dirstate entry key.
        :return: The block tuple.
        """
        block_index, present = self._find_block_index_from_key(key)
        if not present:
            if not add_if_missing:
                # check to see if key is versioned itself - we might want to
                # add it anyway, because dirs with no entries don't get a
                # dirblock at parse time.
                # This is an uncommon branch to take: most dirs have children,
                # and most code works with versioned paths.
                parent_base, parent_name = osutils.split(key[0])
                if not self._get_block_entry_index(parent_base, parent_name, 0)[3]:
                    # some parent path has not been added - it's an error to add
                    # this child
                    raise errors.NotVersionedError(key[0:2], str(self))
            self._dirblocks.insert(block_index, (key[0], []))
        return self._dirblocks[block_index]

    def _find_block_index_from_key(self, key):
        """Find the dirblock index for a key.

        :return: The block index, True if the block for the key is present.
        """
        if key[0:2] == ('', ''):
            return 0, True
        block_index = bisect_dirblock(self._dirblocks, key[0], 1,
                                      cache=self._split_path_cache)
        # _right returns one-past-where-key is so we have to subtract
        # one to use it. we use _right here because there are two
        # '' blocks - the root, and the contents of root
        # we always have a minimum of 2 in self._dirblocks: root and
        # root-contents, and for '', we get 2 back, so this is
        # simple and correct:
        present = (block_index < len(self._dirblocks) and
            self._dirblocks[block_index][0] == key[0])
        return block_index, present

    def _find_entry_index(self, key, block):
        """Find the entry index for a key in a block.

        :return: The entry index, True if the entry for the key is present.
        """
        entry_index = bisect.bisect_left(block, (key, []))
        present = (entry_index < len(block) and
            block[entry_index][0] == key)
        return entry_index, present

    @staticmethod
    def from_tree(tree, dir_state_filename):
        """Create a dirstate from a bzr Tree.

        :param tree: The tree which should provide parent information and
            inventory ids.
        :return: a DirState object which is currently locked for writing.
            (it was locked by DirState.initialize)
        """
        result = DirState.initialize(dir_state_filename)
        try:
            try:
                parent_ids = tree.get_parent_ids()
                num_parents = len(parent_ids)
                parent_trees = []
                for parent_id in parent_ids:
                    parent_tree = tree.branch.repository.revision_tree(parent_id)
                    parent_trees.append((parent_id, parent_tree))
                    parent_tree.lock_read()
                result.set_parent_trees(parent_trees, [])
                result.set_state_from_inventory(tree.inventory)
            finally:
                for revid, parent_tree in parent_trees:
                    parent_tree.unlock()
        except:
            # The caller won't have a chance to unlock this, so make sure we
            # cleanup ourselves
            result.unlock()
            raise
        return result

    def update_entry(self, entry, abspath, stat_value=None):
        """Update the entry based on what is actually on disk.

        :param entry: This is the dirblock entry for the file in question.
        :param abspath: The path on disk for this file.
        :param stat_value: (optional) if we already have done a stat on the
            file, re-use it.
        :return: The sha1 hexdigest of the file (40 bytes) or link target of a
            symlink.
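
        For example (illustrative):

            entry = state._get_entry(0, path_utf8='foo/bar.txt')
            sha1 = state.update_entry(entry, '/work/tree/foo/bar.txt')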
        """
        # This code assumes that the entry passed in is directly held in one of
        # the internal _dirblocks. So the dirblock state must have already been
        # read.
        assert self._dirblock_state != DirState.NOT_IN_MEMORY
        if stat_value is None:
            try:
                # We could inline os.lstat but the common case is that
                # stat_value will be passed in, not read here.
                stat_value = self._lstat(abspath, entry)
            except (OSError, IOError), e:
                if e.errno in (errno.ENOENT, errno.EACCES,
                               errno.EINVAL):
                    # The entry is missing, consider it gone
                    return None
                raise

        kind = osutils.file_kind_from_stat_mode(stat_value.st_mode)
        try:
            minikind = DirState._kind_to_minikind[kind]
        except KeyError: # Unknown kind
            return None
        packed_stat = pack_stat(stat_value)
        (saved_minikind, saved_link_or_sha1, saved_file_size,
         saved_executable, saved_packed_stat) = entry[1][0]

        if (minikind == saved_minikind
            and packed_stat == saved_packed_stat
            # size should also be in packed_stat
            and saved_file_size == stat_value.st_size):
            # The stat hasn't changed since we saved, so we can potentially
            # re-use the saved sha hash.
            if minikind == 'd':
                return None

            if self._cutoff_time is None:
                self._sha_cutoff_time()

            if (stat_value.st_mtime < self._cutoff_time
                and stat_value.st_ctime < self._cutoff_time):
                # Return the existing fingerprint
                return saved_link_or_sha1

        # If we have gotten this far, that means that we need to actually
        # process this entry.
        link_or_sha1 = None
        if minikind == 'f':
            link_or_sha1 = self._sha1_file(abspath, entry)
            executable = self._is_executable(stat_value.st_mode,
                                             saved_executable)
            entry[1][0] = ('f', link_or_sha1, stat_value.st_size,
                           executable, packed_stat)
        elif minikind == 'd':
            link_or_sha1 = None
            entry[1][0] = ('d', '', 0, False, packed_stat)
            if saved_minikind != 'd':
                # This changed from something into a directory. Make sure we
                # have a directory block for it. This doesn't happen very
                # often, so this doesn't have to be super fast.
                block_index, entry_index, dir_present, file_present = \
                    self._get_block_entry_index(entry[0][0], entry[0][1], 0)
                self._ensure_block(block_index, entry_index,
                                   osutils.pathjoin(entry[0][0], entry[0][1]))
        elif minikind == 'l':
            link_or_sha1 = self._read_link(abspath, saved_link_or_sha1)
            entry[1][0] = ('l', link_or_sha1, stat_value.st_size,
                           False, packed_stat)
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        return link_or_sha1

    def _sha_cutoff_time(self):
        """Return cutoff time.

        Files modified more recently than this time are at risk of being
        undetectably modified and so can't be cached.
        """
        # Cache the cutoff time as long as we hold a lock.
        # time.time() isn't super expensive (approx 3.38us), but
        # when you call it 50,000 times it adds up.
        # For comparison, os.lstat() costs 7.2us if it is hot.
        self._cutoff_time = int(time.time()) - 3
        return self._cutoff_time

    def _lstat(self, abspath, entry):
        """Return the os.lstat value for this path."""
        return os.lstat(abspath)

    def _sha1_file(self, abspath, entry):
        """Calculate the SHA1 of a file by reading the full text."""
        f = file(abspath, 'rb', buffering=65000)
        try:
            return osutils.sha_file(f)
        finally:
            f.close()

    def _is_executable(self, mode, old_executable):
        """Is this file executable?"""
        return bool(S_IEXEC & mode)

    def _is_executable_win32(self, mode, old_executable):
        """On win32 the executable bit is stored in the dirstate."""
        return old_executable

    if sys.platform == 'win32':
        _is_executable = _is_executable_win32

    def _read_link(self, abspath, old_link):
        """Read the target of a symlink."""
        # TODO: jam 20070301 On Win32, this could just return the value
        #       already in memory. However, this really needs to be done at a
        #       higher level, because there either won't be anything on disk,
        #       or the thing on disk will be a file.
        return os.readlink(abspath)

    def get_ghosts(self):
        """Return a list of the parent tree revision ids that are ghosts."""
        self._read_header_if_needed()
        return self._ghosts

    def get_lines(self):
        """Serialise the entire dirstate to a sequence of lines."""
        if (self._header_state == DirState.IN_MEMORY_UNMODIFIED and
            self._dirblock_state == DirState.IN_MEMORY_UNMODIFIED):
            # read what's on disk.
            self._state_file.seek(0)
            return self._state_file.readlines()
        lines = []
        lines.append(self._get_parents_line(self.get_parent_ids()))
        lines.append(self._get_ghosts_line(self._ghosts))
        # append the root line which is special cased
        lines.extend(map(self._entry_to_line, self._iter_entries()))
        return self._get_output_lines(lines)

    def _get_ghosts_line(self, ghost_ids):
        """Create a line for the state file for ghost information."""
        return '\0'.join([str(len(ghost_ids))] + ghost_ids)

    def _get_parents_line(self, parent_ids):
        """Create a line for the state file for parents information."""
        return '\0'.join([str(len(parent_ids))] + parent_ids)

    def _get_fields_to_entry(self):
        """Get a function which converts entry fields into an entry record.

        This handles size and executable, as well as parent records.

        :return: A function which takes a list of fields, and returns an
            appropriate record for storing in memory.
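
        For example, with no parent trees a line's fields convert like this
        (illustrative values):

            ['dir', 'name', 'id', 'f', sha1, '12', 'n', packed_stat]
            => (('dir', 'name', 'id'),
                [('f', sha1, 12, False, packed_stat)])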
        """
        # This is intentionally unrolled for performance
        num_present_parents = self._num_present_parents()
        if num_present_parents == 0:
            def fields_to_entry_0_parents(fields, _int=int):
                path_name_file_id_key = (fields[0], fields[1], fields[2])
                return (path_name_file_id_key, [
                    ( # Current tree
                        fields[3],         # minikind
                        fields[4],         # fingerprint
                        _int(fields[5]),   # size
                        fields[6] == 'y',  # executable
                        fields[7],         # packed_stat or revision_id
                    )])
            return fields_to_entry_0_parents
        elif num_present_parents == 1:
            def fields_to_entry_1_parent(fields, _int=int):
                path_name_file_id_key = (fields[0], fields[1], fields[2])
                return (path_name_file_id_key, [
                    ( # Current tree
                        fields[3],         # minikind
                        fields[4],         # fingerprint
                        _int(fields[5]),   # size
                        fields[6] == 'y',  # executable
                        fields[7],         # packed_stat or revision_id
                    ),
                    ( # Parent 1
                        fields[8],         # minikind
                        fields[9],         # fingerprint
                        _int(fields[10]),  # size
                        fields[11] == 'y', # executable
                        fields[12],        # packed_stat or revision_id
                    ),
                    ])
            return fields_to_entry_1_parent
        elif num_present_parents == 2:
            def fields_to_entry_2_parents(fields, _int=int):
                path_name_file_id_key = (fields[0], fields[1], fields[2])
                return (path_name_file_id_key, [
                    ( # Current tree
                        fields[3],         # minikind
                        fields[4],         # fingerprint
                        _int(fields[5]),   # size
                        fields[6] == 'y',  # executable
                        fields[7],         # packed_stat or revision_id
                    ),
                    ( # Parent 1
                        fields[8],         # minikind
                        fields[9],         # fingerprint
                        _int(fields[10]),  # size
                        fields[11] == 'y', # executable
                        fields[12],        # packed_stat or revision_id
                    ),
                    ( # Parent 2
                        fields[13],        # minikind
                        fields[14],        # fingerprint
                        _int(fields[15]),  # size
                        fields[16] == 'y', # executable
                        fields[17],        # packed_stat or revision_id
                    ),
                    ])
            return fields_to_entry_2_parents
        else:
            def fields_to_entry_n_parents(fields, _int=int):
                path_name_file_id_key = (fields[0], fields[1], fields[2])
                trees = [(fields[cur],          # minikind
                          fields[cur+1],        # fingerprint
                          _int(fields[cur+2]),  # size
                          fields[cur+3] == 'y', # executable
                          fields[cur+4],        # packed_stat or revision_id
                          ) for cur in xrange(3, len(fields)-1, 5)]
                return path_name_file_id_key, trees
            return fields_to_entry_n_parents

    def get_parent_ids(self):
        """Return a list of the parent tree ids for the directory state."""
        self._read_header_if_needed()
        return list(self._parents)

    def _get_block_entry_index(self, dirname, basename, tree_index):
        """Get the coordinates for a path in the state structure.

        :param dirname: The utf8 dirname to lookup.
        :param basename: The utf8 basename to lookup.
        :param tree_index: The index of the tree for which this lookup should
            be attempted.
        :return: A tuple describing where the path is located, or should be
            inserted. The tuple contains four fields: the block index, the row
            index, and two booleans which are True when the directory is
            present, and when the entire path is present. There is no guarantee
            that either coordinate is currently reachable unless the found
            field for it is True. For instance, a directory not present in the
            searched tree may be returned with a value one greater than the
            current highest block offset. The directory present field will
            always be True when the path present field is True. The directory
            present field does NOT indicate that the directory is present in
            the searched tree, rather it indicates that there are at least some
            files in some tree present there.
        """
        self._read_dirblocks_if_needed()
        key = dirname, basename, ''
        block_index, present = self._find_block_index_from_key(key)
        if not present:
            # no such directory - return the dir index and 0 for the row.
            return block_index, 0, False, False
        block = self._dirblocks[block_index][1] # access the entries only
        entry_index, present = self._find_entry_index(key, block)
        # linear search through present entries at this path to find the one
        # requested.
        while entry_index < len(block) and block[entry_index][0][1] == basename:
            if block[entry_index][1][tree_index][0] not in \
                       ('a', 'r'): # absent, relocated
                return block_index, entry_index, True, True
            entry_index += 1
        return block_index, entry_index, True, False

    def _get_entry(self, tree_index, fileid_utf8=None, path_utf8=None):
        """Get the dirstate entry for path in tree tree_index.

        If either file_id or path is supplied, it is used as the key to lookup.
        If both are supplied, the fastest lookup is used, and an error is
        raised if they do not both point at the same row.

        :param tree_index: The index of the tree we wish to locate this path
            in. If the path is present in that tree, the entry containing its
            details is returned, otherwise (None, None) is returned.
        :param fileid_utf8: A utf8 file_id to look up.
        :param path_utf8: A utf8 path to be looked up.
        :return: The dirstate entry tuple for path, or (None, None)
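
        For example (illustrative):

            entry = state._get_entry(0, path_utf8='foo/bar')
            # entry == (('foo', 'bar', file_id),
            #           [(minikind, fingerprint, size, executable, packed_stat),
            #            ...one tuple per parent tree...])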
        """
        self._read_dirblocks_if_needed()
        if path_utf8 is not None:
            assert path_utf8.__class__ == str, \
                'path_utf8 is not a str: %s %s' % (type(path_utf8), path_utf8)
            # path lookups are faster
            dirname, basename = osutils.split(path_utf8)
            block_index, entry_index, dir_present, file_present = \
                self._get_block_entry_index(dirname, basename, tree_index)
            if not file_present:
                return None, None
            entry = self._dirblocks[block_index][1][entry_index]
            assert entry[0][2] and entry[1][tree_index][0] not in ('a', 'r'), \
                'unversioned entry?!?!'
            if fileid_utf8:
                if entry[0][2] != fileid_utf8:
                    raise errors.BzrError('integrity error ? : mismatching'
                                          ' tree_index, file_id and path')
            return entry
        else:
            assert fileid_utf8 is not None
            possible_keys = self._get_id_index().get(fileid_utf8, None)
            if not possible_keys:
                return None, None
            for key in possible_keys:
                block_index, present = \
                    self._find_block_index_from_key(key)
                # strange, probably indicates an out of date
                # id index - for now, allow this.
                if not present:
                    continue
                # WARNING: DO not change this code to use _get_block_entry_index
                # as that function is not suitable: it does not use the key
                # to lookup, and thus the wrong coordinates are returned.
                block = self._dirblocks[block_index][1]
                entry_index, present = self._find_entry_index(key, block)
                if present:
                    entry = self._dirblocks[block_index][1][entry_index]
                    if entry[1][tree_index][0] in 'fdl':
                        # this is the result we are looking for: the
                        # real home of this file_id in this tree.
                        return entry
                    if entry[1][tree_index][0] == 'a':
                        # there is no home for this entry in this tree
                        return None, None
                    assert entry[1][tree_index][0] == 'r'
                    real_path = entry[1][tree_index][1]
                    return self._get_entry(tree_index, fileid_utf8=fileid_utf8,
                        path_utf8=real_path)
            return None, None

    @classmethod
    def initialize(cls, path):
        """Create a new dirstate on path.

        The new dirstate will be an empty tree - that is it has no parents,
        and only a root node - which has id ROOT_ID.

        The object will be write locked when returned to the caller,
        unless there was an exception in the writing, in which case it
        will be unlocked.

        :param path: The name of the file for the dirstate.
        :return: A DirState object.
        """
        # This constructs a new DirState object on a path, sets the _state_file
        # to a new empty file for that path. It then calls _set_data() with our
        # stock empty dirstate information - a root with ROOT_ID, no children,
        # and no parents. Finally it calls save() to ensure that this data will
        # persist.
        result = cls(path)
        # root dir and root dir contents with no children.
        empty_tree_dirblocks = [('', []), ('', [])]
        # a new root directory, with a NULLSTAT.
        empty_tree_dirblocks[0][1].append(
            (('', '', inventory.ROOT_ID), [
                ('d', '', 0, False, DirState.NULLSTAT),
            ]))
        result.lock_write()
        try:
            result._set_data([], empty_tree_dirblocks)
            result.save()
        except:
            result.unlock()
            raise
        return result

    def _inv_entry_to_details(self, inv_entry):
        """Convert an inventory entry (from a revision tree) to state details.

        :param inv_entry: An inventory entry whose sha1 and link targets can be
            relied upon, and which has a revision set.
        :return: A details tuple - the details for a single tree at a path +
            id.
        """
        kind = inv_entry.kind
        minikind = DirState._kind_to_minikind[kind]
        tree_data = inv_entry.revision
        assert len(tree_data) > 0, 'empty revision for the inv_entry.'
        if kind == 'directory':
            fingerprint = ''
            size = 0
            executable = False
        elif kind == 'symlink':
            fingerprint = inv_entry.symlink_target or ''
            size = 0
            executable = False
        elif kind == 'file':
            fingerprint = inv_entry.text_sha1 or ''
            size = inv_entry.text_size or 0
            executable = inv_entry.executable
        elif kind == 'tree-reference':
            fingerprint = inv_entry.reference_revision or ''
            size = 0
            executable = False
        else:
            raise Exception("can't pack %s" % inv_entry)
        return (minikind, fingerprint, size, executable, tree_data)

    def _iter_entries(self):
        """Iterate over all the entries in the dirstate.

        Each yielded item is an entry in the standard format described in the
        docstring of bzrlib.dirstate.
        """
        self._read_dirblocks_if_needed()
        for directory in self._dirblocks:
            for entry in directory[1]:
                yield entry

    def _get_id_index(self):
        """Get an id index of self._dirblocks."""
        if self._id_index is None:
            id_index = {}
            for key, tree_details in self._iter_entries():
                id_index.setdefault(key[2], set()).add(key)
            self._id_index = id_index
        return self._id_index

    def _get_output_lines(self, lines):
        """Format lines for final output.

        :param lines: A sequence of lines containing the parents list and the
            path lines.
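
        The serialised form starts like this (checksum value illustrative):

            #bazaar dirstate flat format 3
            adler32: -1327947603
            num_entries: 1
            ...entries, '\0' and '\n' delimited...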
        """
        output_lines = [DirState.HEADER_FORMAT_3]
        lines.append('') # a final newline
        inventory_text = '\0\n\0'.join(lines)
        output_lines.append('adler32: %s\n' % (zlib.adler32(inventory_text),))
        # -3, 1 for num parents, 1 for ghosts, 1 for final newline
        num_entries = len(lines)-3
        output_lines.append('num_entries: %s\n' % (num_entries,))
        output_lines.append(inventory_text)
        return output_lines

    def _make_deleted_row(self, fileid_utf8, parents):
        """Return a deleted row for fileid_utf8."""
        return ('/', 'RECYCLED.BIN', 'file', fileid_utf8, 0, DirState.NULLSTAT,
            ''), parents

    def _num_present_parents(self):
        """The number of parent entries in each record row."""
        return len(self._parents) - len(self._ghosts)

    @staticmethod
    def on_file(path):
        """Construct a DirState on the file at path path.

        :return: An unlocked DirState object, associated with the given path.
        """
        result = DirState(path)
        return result

    def _read_dirblocks_if_needed(self):
        """Read in all the dirblocks from the file if they are not in memory.

        This populates self._dirblocks, and sets self._dirblock_state to
        IN_MEMORY_UNMODIFIED. It is not currently ready for incremental block
        loading.
        """
        self._read_header_if_needed()
        if self._dirblock_state == DirState.NOT_IN_MEMORY:
            # move the _state_file pointer to after the header (in case bisect
            # has been called in the mean time)
            self._state_file.seek(self._end_of_header)
            text = self._state_file.read()
            # TODO: check the adler checksums. adler_measured = zlib.adler32(text)

            fields = text.split('\0')
            # Remove the last blank entry
            trailing = fields.pop()
            assert trailing == ''
            # consider turning fields into a tuple.

            # skip the first field which is the trailing null from the header.
            cur = 1
            # Each line now has an extra '\n' field which is not used
            # so we just skip over it
            # entry size:
            #  3 fields for the key
            #  + number of fields per tree_data (5) * tree count
            #  + newline
            num_present_parents = self._num_present_parents()
            tree_count = 1 + num_present_parents
            entry_size = self._fields_per_entry()
            expected_field_count = entry_size * self._num_entries
            if len(fields) - cur > expected_field_count:
                fields = fields[:expected_field_count + cur]
                trace.mutter('Unexpectedly long dirstate field count!')
                print "XXX: incorrectly truncated dirstate file bug triggered."
            field_count = len(fields)
            # this checks our adjustment, and also catches file too short.
            assert field_count - cur == expected_field_count, \
                'field count incorrect %s != %s, entry_size=%s, '\
                'num_entries=%s fields=%r' % (
                    field_count - cur, expected_field_count, entry_size,
                    self._num_entries, fields)

            if num_present_parents == 1:
                # Bind external functions to local names
                _int = int
                # We access all fields in order, so we can just iterate over
                # them. Grab a straight iterator over the fields. (We use an
                # iterator because we don't want to do a lot of additions, nor
                # do we want to do a lot of slicing)
                next = iter(fields).next
                # Move the iterator to the current position
                for x in xrange(cur):
                    next()
                # The two blocks here are deliberate: the root block and the
                # contents-of-root block.
                self._dirblocks = [('', []), ('', [])]
                current_block = self._dirblocks[0][1]
                current_dirname = ''
                append_entry = current_block.append
                for count in xrange(self._num_entries):
                    dirname = next()
                    name = next()
                    file_id = next()
                    if dirname != current_dirname:
                        # new block - different dirname
                        current_block = []
                        current_dirname = dirname
                        self._dirblocks.append((current_dirname, current_block))
                        append_entry = current_block.append
                    # we know current_dirname == dirname, so re-use it to avoid
                    # creating new strings
                    entry = ((current_dirname, name, file_id),
                             [( # Current tree
                                 next(),        # minikind
                                 next(),        # fingerprint
                                 _int(next()),  # size
                                 next() == 'y', # executable
                                 next(),        # packed_stat or revision_id
                             ),
                             ( # Parent 1
                                 next(),        # minikind
                                 next(),        # fingerprint
                                 _int(next()),  # size
                                 next() == 'y', # executable
                                 next(),        # packed_stat or revision_id
                             ),
                             ])
                    trailing = next()
                    assert trailing == '\n'
                    # append the entry to the current block
                    append_entry(entry)
                self._split_root_dirblock_into_contents()
            else:
                fields_to_entry = self._get_fields_to_entry()
                entries = [fields_to_entry(fields[pos:pos+entry_size])
                           for pos in xrange(cur, field_count, entry_size)]
                self._entries_to_current_state(entries)
            # To convert from format 2 => format 3
            # self._dirblocks = sorted(self._dirblocks,
            #                          key=lambda blk:blk[0].split('/'))
            # To convert from format 3 => format 2
            # self._dirblocks = sorted(self._dirblocks)
            self._dirblock_state = DirState.IN_MEMORY_UNMODIFIED

    def _read_header(self):
        """This reads in the metadata header, and the parent ids.

        After reading in, the file should be positioned at the null
        just before the start of the first record in the file.

        :return: (expected adler checksum, number of entries, parent list)
        """
        self._read_prelude()
        parent_line = self._state_file.readline()
        info = parent_line.split('\0')
        num_parents = int(info[0])
        assert num_parents == len(info)-2, 'incorrect parent info line'
        self._parents = info[1:-1]

        ghost_line = self._state_file.readline()
        info = ghost_line.split('\0')
        num_ghosts = int(info[1])
        assert num_ghosts == len(info)-3, 'incorrect ghost info line'
        self._ghosts = info[2:-1]
        self._header_state = DirState.IN_MEMORY_UNMODIFIED
        self._end_of_header = self._state_file.tell()
1621
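    # A minimal sketch of the two lines parsed above (revision ids are
    # invented). The field indices imply the ghost line carries a leading
    # empty field, from the '\0' that joins it to the parent line:
    #   parent_line == '2\0rev-a\0rev-b\0\n'
    #       -> num_parents == 2, self._parents == ['rev-a', 'rev-b']
    #   ghost_line == '\0' + '1\0rev-ghost\0\n'
    #       -> num_ghosts == 1, self._ghosts == ['rev-ghost']
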
    def _read_header_if_needed(self):
        """Read the header of the dirstate file if needed."""
        # inline this as it will be called a lot
        if not self._lock_token:
            raise errors.ObjectNotLocked(self)
        if self._header_state == DirState.NOT_IN_MEMORY:
            self._read_header()

    def _read_prelude(self):
        """Read in the prelude header of the dirstate file.

        This only reads in the stuff that is not connected to the adler
        checksum. The position will be correct to read in the rest of
        the file and check the checksum after this point.
        The next entry in the file should be the number of parents,
        and their ids, followed by a newline.
        """
        header = self._state_file.readline()
        assert header == DirState.HEADER_FORMAT_3, \
            'invalid header line: %r' % (header,)
        adler_line = self._state_file.readline()
        assert adler_line.startswith('adler32: '), 'missing adler32 checksum'
        self.adler_expected = int(adler_line[len('adler32: '):-1])
        num_entries_line = self._state_file.readline()
        assert num_entries_line.startswith('num_entries: '), \
            'missing num_entries line'
        self._num_entries = int(num_entries_line[len('num_entries: '):-1])

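    # A minimal sketch of the prelude read above (checksum and count are
    # invented); three newline-terminated lines:
    #   #bazaar dirstate flat format 3
    #   adler32: -1327947603
    #   num_entries: 5
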
"""Save any pending changes created during this session.
1651
We reuse the existing file, because that prevents race conditions with
1652
file creation, and use oslocks on it to prevent concurrent modification
1653
and reads - because dirstates incremental data aggretation is not
1654
compatible with reading a modified file, and replacing a file in use by
1655
another process is impossible on windows.
1657
A dirstate in read only mode should be smart enough though to validate
1658
that the file has not changed, and otherwise discard its cache and
1659
start over, to allow for fine grained read lock duration, so 'status'
1660
wont block 'commit' - for example.
1662
if (self._header_state == DirState.IN_MEMORY_MODIFIED or
1663
self._dirblock_state == DirState.IN_MEMORY_MODIFIED):
1665
if self._lock_state == 'w':
1666
out_file = self._state_file
1669
# Try to grab a write lock so that we can update the file.
1671
wlock = lock.WriteLock(self._filename)
1672
except (errors.LockError, errors.LockContention), e:
1673
# We couldn't grab the lock, so just leave things dirty in
1677
# This may be a read-only tree, or someone else may have a
1678
# ReadLock. so handle the case when we cannot grab a write
1680
if e.errno in (errno.ENOENT, errno.EPERM, errno.EACCES,
1682
# Ignore these errors and just don't save anything
1688
out_file.writelines(self.get_lines())
1691
self._header_state = DirState.IN_MEMORY_UNMODIFIED
1692
self._dirblock_state = DirState.IN_MEMORY_UNMODIFIED
1694
if wlock is not None:
1697
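    # A minimal usage sketch (assumes a DirState named 'state' and a
    # 'trees' list): mutators only touch the in-memory copy, so a typical
    # write cycle is
    #   state.lock_write()
    #   try:
    #       state.set_parent_trees(trees, ghosts=[])
    #       state.save()    # flush IN_MEMORY_MODIFIED state to disk
    #   finally:
    #       state.unlock()
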
    def _set_data(self, parent_ids, dirblocks):
        """Set the full dirstate data in memory.

        This is an internal function used to completely replace the objects
        in memory state. It puts the dirstate into state 'full-dirty'.

        :param parent_ids: A list of parent tree revision ids.
        :param dirblocks: A list containing one tuple for each directory in the
            tree. Each tuple contains the directory path and a list of entries
            found in that directory.
        """
        # our memory copy is now authoritative.
        self._dirblocks = dirblocks
        self._header_state = DirState.IN_MEMORY_MODIFIED
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        self._parents = list(parent_ids)
        self._id_index = None

    def set_path_id(self, path, new_id):
        """Change the id of path to new_id in the current working tree.

        :param path: The path inside the tree to set - '' is the root, 'foo'
            is the path foo in the root.
        :param new_id: The new id to assign to the path. This must be a utf8
            file id (not unicode, and not None).
        """
        # TODO: start warning here.
        assert new_id.__class__ == str
        self._read_dirblocks_if_needed()
        if len(path):
            # changing the id of a non-root path is not written yet.
            raise NotImplementedError(self.set_path_id)
        # TODO: check new id is unique
        entry = self._get_entry(0, path_utf8=path)
        if entry[0][2] == new_id:
            # Nothing to change.
            return
        # mark the old path absent, and insert a new root path
        self._make_absent(entry)
        self.update_minimal(('', '', new_id), 'd',
            path_utf8='', packed_stat=entry[1][0][4])
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        if self._id_index is not None:
            self._id_index.setdefault(new_id, set()).add(entry[0])

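    # A minimal sketch (assumes a write-locked DirState named 'state'):
    # only re-idding the root is supported so far,
    #   state.set_path_id('', 'new-root-id')
    # and any non-empty path raises NotImplementedError.
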
    def set_parent_trees(self, trees, ghosts):
        """Set the parent trees for the dirstate.

        :param trees: A list of revision_id, tree tuples. tree must be provided
            even if the revision_id refers to a ghost: supply an empty tree in
            this case.
        :param ghosts: A list of the revision_ids that are ghosts at the time
            of setting.
        """
        # TODO: generate a list of parent indexes to preserve to save
        # processing specific parent trees. In the common case one tree will
        # be preserved - the left most parent.
        # TODO: if the parent tree is a dirstate, we might want to walk them
        # all by path in parallel for 'optimal' common-case performance.
        # generate new root row.
        self._read_dirblocks_if_needed()
        # TODO future sketch: Examine the existing parents to generate a change
        # map and then walk the new parent trees only, mapping them into the
        # dirstate. Walk the dirstate at the same time to remove unreferenced
        # entries.
        # for now:
        # sketch: loop over all entries in the dirstate, cherry picking
        # entries from the parent trees, if they are not ghost trees.
        # after we finish walking the dirstate, all entries not in the dirstate
        # are deletes, so we want to append them to the end as per the design
        # discussions. So do a set difference on ids with the parents to
        # get deletes, and add them to the end.
        # During the update process we need to answer the following questions:
        # - find other keys containing a fileid in order to create cross-path
        #   links. We don't trivially use the inventory from other trees
        #   because this leads to either double touching, or to accessing
        #   missing keys,
        # - find other keys containing a path.
        # We accumulate each entry via this dictionary, including the root
        by_path = {}
        id_index = {}
        # we could do parallel iterators, but because file id data may be
        # scattered throughout, we don't save on index overhead: we have to
        # look at everything anyway. We can probably save cycles by reusing
        # parent data and doing an incremental update when adding an additional
        # parent, but for now the common cases are adding a new parent (merge),
        # and replacing completely (commit), and commit is more common: so
        # optimise merge later.

        # ---- start generation of full tree mapping data
        # what trees should we use?
        parent_trees = [tree for rev_id, tree in trees if rev_id not in ghosts]
        # how many trees do we end up with
        parent_count = len(parent_trees)

        # one: the current tree
        for entry in self._iter_entries():
            # skip entries not in the current tree
            if entry[1][0][0] in ('a', 'r'): # absent, relocated
                continue
            by_path[entry[0]] = [entry[1][0]] + \
                [DirState.NULL_PARENT_DETAILS] * parent_count
            id_index[entry[0][2]] = set([entry[0]])

        # now the parent trees:
        for tree_index, tree in enumerate(parent_trees):
            # the index is off by one, adjust it.
            tree_index = tree_index + 1
            # when we add new locations for a fileid we need these ranges for
            # any fileid in this tree as we set the by_path[id] to:
            # already_processed_tree_details + new_details + new_location_suffix
            # the suffix is from tree_index+1:parent_count+1.
            new_location_suffix = [DirState.NULL_PARENT_DETAILS] * (parent_count - tree_index)
            # now stitch in all the entries from this tree
            for path, entry in tree.inventory.iter_entries_by_dir():
                # here we process each tree's details for each item in the
                # tree. We first update any existing entries for the id at
                # other paths, then we either create or update the entry for
                # the id at the right path, and finally we add (if needed) a
                # mapping from file_id to this path. We do it in this order to
                # allow us to avoid checking all known paths for the id when
                # generating a new entry at this path: by adding the id->path
                # mapping last, all the mappings are valid and have correct
                # relocation records where needed.
                file_id = entry.file_id
                path_utf8 = path.encode('utf8')
                dirname, basename = osutils.split(path_utf8)
                new_entry_key = (dirname, basename, file_id)
                # tree index consistency: All other paths for this id in this
                # tree index must point to the correct path.
                for entry_key in id_index.setdefault(file_id, set()):
                    # TODO:PROFILING: It might be faster to just update
                    # rather than checking if we need to, and then overwrite
                    # the one we are located at.
                    if entry_key != new_entry_key:
                        # this file id is at a different path in one of the
                        # other trees, so put absent pointers there
                        # This is the vertical axis in the matrix, all pointing
                        # to the real path.
                        by_path[entry_key][tree_index] = ('r', path_utf8, 0, False, '')
                # by path consistency: Insert into an existing path record
                # (trivial), or add a new one with relocation pointers for the
                # other tree indexes.
                if new_entry_key in id_index[file_id]:
                    # there is already an entry where this data belongs, just
                    # insert it.
                    by_path[new_entry_key][tree_index] = \
                        self._inv_entry_to_details(entry)
                else:
                    # add relocated entries to the horizontal axis - this row
                    # mapping from path,id. We need to look up the correct path
                    # for the indexes from 0 to tree_index -1
                    new_details = []
                    for lookup_index in xrange(tree_index):
                        # boundary case: this is the first occurrence of
                        # file_id, so there are no id_index entries; possibly
                        # take this out of the loop?
                        if not len(id_index[file_id]):
                            new_details.append(DirState.NULL_PARENT_DETAILS)
                        else:
                            # grab any one entry, use it to find the right path.
                            # TODO: optimise this to reduce memory use in highly
                            # fragmented situations by reusing the relocation
                            # records.
                            a_key = iter(id_index[file_id]).next()
                            if by_path[a_key][lookup_index][0] in ('r', 'a'):
                                # it's a pointer or absent, use it as is.
                                new_details.append(by_path[a_key][lookup_index])
                            else:
                                # we have the right key, make a pointer to it.
                                real_path = ('/'.join(a_key[0:2])).strip('/')
                                new_details.append(('r', real_path, 0, False, ''))
                    new_details.append(self._inv_entry_to_details(entry))
                    new_details.extend(new_location_suffix)
                    by_path[new_entry_key] = new_details
                    id_index[file_id].add(new_entry_key)
        # --- end generation of full tree mappings

        # sort and output all the entries
        new_entries = sorted(by_path.items(),
                        key=lambda entry:(entry[0][0].split('/'), entry[0][1]))
        self._entries_to_current_state(new_entries)
        self._parents = [rev_id for rev_id, tree in trees]
        self._ghosts = list(ghosts)
        self._header_state = DirState.IN_MEMORY_MODIFIED
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        self._id_index = id_index

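    # A minimal sketch of the matrix built above (details are invented):
    # with one parent tree, a file at 'a/b' in tree 0 but at 'c' in the
    # parent yields two cross-referencing rows, each 'r' record naming the
    # file's real path in the other tree:
    #   by_path[('a', 'b', 'file-id')] = [
    #       ('f', '', 0, False, '<packed-stat>'),   # tree 0: real entry
    #       ('r', 'c', 0, False, ''),               # parent: relocated
    #   ]
    #   by_path[('', 'c', 'file-id')] = [
    #       ('r', 'a/b', 0, False, ''),             # tree 0: relocated
    #       ('f', '<sha1>', 12, False, 'rev-id-1'), # parent: real entry
    #   ]
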
    def set_state_from_inventory(self, new_inv):
        """Set new_inv as the current state.

        This API is called by tree transform, and will usually occur with
        existing parent trees.

        :param new_inv: The inventory object to set current state from.
        """
        self._read_dirblocks_if_needed()
        # incremental algorithm:
        # two iterators: current data and new data, both in dirblock order.
        new_iterator = new_inv.iter_entries_by_dir()
        # we will be modifying the dirstate, so we need a stable iterator. In
        # future we might write one, for now we just clone the state into a
        # list - which is a shallow copy.
        old_iterator = iter(list(self._iter_entries()))
        # both must have roots so this is safe:
        current_new = new_iterator.next()
        current_old = old_iterator.next()
        def advance(iterator):
            try:
                return iterator.next()
            except StopIteration:
                return None
        while current_new or current_old:
            # skip entries in old that are not really there
            if current_old and current_old[1][0][0] in ('r', 'a'):
                # relocated or absent
                current_old = advance(old_iterator)
                continue
            if current_new:
                # convert new into dirblock style
                new_path_utf8 = current_new[0].encode('utf8')
                new_dirname, new_basename = osutils.split(new_path_utf8)
                new_id = current_new[1].file_id
                new_entry_key = (new_dirname, new_basename, new_id)
                current_new_minikind = \
                    DirState._kind_to_minikind[current_new[1].kind]
            else:
                # for safety disable variables
                new_path_utf8 = new_dirname = new_basename = new_id = new_entry_key = None
            # 5 cases, we don't have a value that is strictly greater than
            # everything, so we make both end conditions explicit
            if not current_old:
                # old is finished: insert current_new into the state.
                self.update_minimal(new_entry_key, current_new_minikind,
                    executable=current_new[1].executable,
                    path_utf8=new_path_utf8)
                current_new = advance(new_iterator)
            elif not current_new:
                # new is finished: remove current_old from the state.
                self._make_absent(current_old)
                current_old = advance(old_iterator)
            elif new_entry_key == current_old[0]:
                # same - common case
                # TODO: update the record if anything significant has changed.
                # the minimal required trigger is if the execute bit or cached
                # kind has changed.
                if (current_old[1][0][3] != current_new[1].executable or
                    current_old[1][0][0] != current_new_minikind):
                    self.update_minimal(current_old[0], current_new_minikind,
                        executable=current_new[1].executable,
                        path_utf8=new_path_utf8)
                # both sides are dealt with, move on
                current_old = advance(old_iterator)
                current_new = advance(new_iterator)
            elif new_entry_key < current_old[0]:
                # new comes before old:
                # add an entry for this and advance new
                self.update_minimal(new_entry_key, current_new_minikind,
                    executable=current_new[1].executable,
                    path_utf8=new_path_utf8)
                current_new = advance(new_iterator)
            else:
                # old comes before new:
                # remove old from the state, advance old
                self._make_absent(current_old)
                current_old = advance(old_iterator)
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        self._id_index = None

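    # The loop above is the classic merge of two sorted streams; as a
    # generic sketch (the names here are placeholders, not dirstate API):
    #   a, b = advance(it_a), advance(it_b)
    #   while a or b:
    #       if b is None or (a is not None and key(a) < key(b)):
    #           handle_only_a(a); a = advance(it_a)
    #       elif a is None or key(b) < key(a):
    #           handle_only_b(b); b = advance(it_b)
    #       else:
    #           handle_both(a, b)
    #           a, b = advance(it_a), advance(it_b)
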
    def _make_absent(self, current_old):
        """Mark current_old - an entry - as absent for tree 0.

        :return: True if this was the last details entry for the entry key:
            that is, if the underlying block has had the entry removed, thus
            shrinking in length.
        """
        # build up paths that this id will be left at after the change is made,
        # so we can update their cross references in tree 0
        all_remaining_keys = set()
        # Don't check the working tree, because it's going away.
        for details in current_old[1][1:]:
            if details[0] not in ('a', 'r'): # absent, relocated
                all_remaining_keys.add(current_old[0])
            elif details[0] == 'r': # relocated
                # record the key for the real path.
                all_remaining_keys.add(tuple(osutils.split(details[1])) + (current_old[0][2],))
            # absent rows are not present at any path.
        last_reference = current_old[0] not in all_remaining_keys
        if last_reference:
            # the current row consists entirely of the current item (being
            # marked absent), and relocated or absent entries for the other
            # trees: remove it, it's meaningless.
            block = self._find_block(current_old[0])
            entry_index, present = self._find_entry_index(current_old[0], block[1])
            assert present, 'could not find entry for %s' % (current_old,)
            block[1].pop(entry_index)
            # if we have an id_index in use, remove this key from it for this id.
            if self._id_index is not None:
                self._id_index[current_old[0][2]].remove(current_old[0])
        # update all remaining keys for this id to record it as absent. The
        # existing details may either be the record we are marking as deleted
        # (if there were other trees with the id present at this path), or may
        # be relocations.
        for update_key in all_remaining_keys:
            update_block_index, present = \
                self._find_block_index_from_key(update_key)
            assert present, 'could not find block for %s' % (update_key,)
            update_entry_index, present = \
                self._find_entry_index(update_key, self._dirblocks[update_block_index][1])
            assert present, 'could not find entry for %s' % (update_key,)
            update_tree_details = self._dirblocks[update_block_index][1][update_entry_index][1]
            # it must not be absent at the moment
            assert update_tree_details[0][0] != 'a' # absent
            update_tree_details[0] = DirState.NULL_PARENT_DETAILS
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        return last_reference

    def update_minimal(self, key, minikind, executable=False, fingerprint='',
                       packed_stat=None, size=0, path_utf8=None):
        """Update an entry to the state in tree 0.

        This will either create a new entry at 'key' or update an existing one.
        It also makes sure that any other records which might mention this are
        updated as well.

        :param key: (dir, name, file_id) for the new entry
        :param minikind: The type for the entry ('f' == 'file', 'd' ==
            'directory'), etc.
        :param executable: Should the executable bit be set?
        :param fingerprint: Simple fingerprint for new entry.
        :param packed_stat: packed stat value for new entry.
        :param size: Size information for new entry
        :param path_utf8: key[0] + '/' + key[1], just passed in to avoid doing
            extra computation.
        """
        block = self._find_block(key)[1]
        if packed_stat is None:
            packed_stat = DirState.NULLSTAT
        entry_index, present = self._find_entry_index(key, block)
        new_details = (minikind, fingerprint, size, executable, packed_stat)
        id_index = self._get_id_index()
        if not present:
            # new entry, synthesise cross reference here,
            existing_keys = id_index.setdefault(key[2], set())
            if not existing_keys:
                # not currently in the state, simplest case
                new_entry = key, [new_details] + self._empty_parent_info()
            else:
                # present at one or more existing other paths.
                # grab one of them and use it to generate parent
                # relocation/absent entries.
                new_entry = key, [new_details]
                for other_key in existing_keys:
                    # change the record at other to be a pointer to this new
                    # record. The loop looks similar to the change to
                    # relocations when updating an existing record but it's
                    # not: the test for existing kinds is different: this can
                    # be factored out to a helper though.
                    other_block_index, present = self._find_block_index_from_key(other_key)
                    assert present, 'could not find block for %s' % (other_key,)
                    other_entry_index, present = self._find_entry_index(other_key,
                                            self._dirblocks[other_block_index][1])
                    assert present, 'could not find entry for %s' % (other_key,)
                    assert path_utf8 is not None
                    self._dirblocks[other_block_index][1][other_entry_index][1][0] = \
                        ('r', path_utf8, 0, False, '')

                num_present_parents = self._num_present_parents()
                for lookup_index in xrange(1, num_present_parents + 1):
                    # grab any one entry, use it to find the right path.
                    # TODO: optimise this to reduce memory use in highly
                    # fragmented situations by reusing the relocation
                    # records.
                    update_block_index, present = \
                        self._find_block_index_from_key(other_key)
                    assert present, 'could not find block for %s' % (other_key,)
                    update_entry_index, present = \
                        self._find_entry_index(other_key, self._dirblocks[update_block_index][1])
                    assert present, 'could not find entry for %s' % (other_key,)
                    update_details = self._dirblocks[update_block_index][1][update_entry_index][1][lookup_index]
                    if update_details[0] in ('r', 'a'): # relocated, absent
                        # it's a pointer or absent in lookup_index's tree, use
                        # it as is.
                        new_entry[1].append(update_details)
                    else:
                        # we have the right key, make a pointer to it.
                        pointer_path = osutils.pathjoin(*other_key[0:2])
                        new_entry[1].append(('r', pointer_path, 0, False, ''))
            block.insert(entry_index, new_entry)
            existing_keys.add(key)
        else:
            # Does the new state matter?
            block[entry_index][1][0] = new_details
            # parents cannot be affected by what we do.
            # other occurrences of this id can be found
            # from the id index.
            # tree index consistency: All other paths for this id in this tree
            # index must point to the correct path. We have to loop here because
            # we may have passed entries in the state with this file id already
            # that were absent - where parent entries are - and they need to be
            # converted to relocated.
            assert path_utf8 is not None
            for entry_key in id_index.setdefault(key[2], set()):
                # TODO:PROFILING: It might be faster to just update
                # rather than checking if we need to, and then overwrite
                # the one we are located at.
                if entry_key != key:
                    # this file id is at a different path in one of the
                    # other trees, so put absent pointers there
                    # This is the vertical axis in the matrix, all pointing
                    # to the real path.
                    block_index, present = self._find_block_index_from_key(entry_key)
                    assert present
                    entry_index, present = self._find_entry_index(entry_key, self._dirblocks[block_index][1])
                    assert present
                    self._dirblocks[block_index][1][entry_index][1][0] = \
                        ('r', path_utf8, 0, False, '')
        # add a containing dirblock if needed.
        if new_details[0] == 'd':
            subdir_key = (osutils.pathjoin(*key[0:2]), '', '')
            block_index, present = self._find_block_index_from_key(subdir_key)
            if not present:
                self._dirblocks.insert(block_index, (subdir_key[0], []))

        self._dirblock_state = DirState.IN_MEMORY_MODIFIED

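    # A minimal usage sketch (fingerprint and stat values are invented):
    #   state.update_minimal(('dir', 'name', 'file-id'), 'f',
    #       executable=False, fingerprint='<sha1>', size=12,
    #       packed_stat='<packed-stat>', path_utf8='dir/name')
    # creates or refreshes the tree 0 record for 'dir/name' and rewrites
    # any other row carrying 'file-id' as a ('r', 'dir/name', 0, False, '')
    # relocation pointer back to it.
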
    def _validate(self):
        """Check that invariants on the dirblock are correct.

        This can be useful in debugging; it shouldn't be necessary in
        normal code.
        """
        from pprint import pformat
        if len(self._dirblocks) > 0:
            assert self._dirblocks[0][0] == '', \
                "dirblocks don't start with root block:\n" + \
                pformat(self._dirblocks)
        if len(self._dirblocks) > 1:
            assert self._dirblocks[1][0] == '', \
                "dirblocks missing root directory:\n" + \
                pformat(self._dirblocks)
        assert self._dirblocks[1:] == sorted(self._dirblocks[1:]), \
            "dirblocks are not in sorted order:\n" + \
            pformat(self._dirblocks)
        for dirblock in self._dirblocks:
            assert dirblock[1] == sorted(dirblock[1]), \
                "dirblock for %r is not sorted:\n%s" % \
                (dirblock[0], pformat(dirblock))

    def _wipe_state(self):
        """Forget all state information about the dirstate."""
        self._header_state = DirState.NOT_IN_MEMORY
        self._dirblock_state = DirState.NOT_IN_MEMORY
        self._parents = []
        self._ghosts = []
        self._dirblocks = []
        self._id_index = None
        self._end_of_header = None
        self._cutoff_time = None
        self._split_path_cache = {}

    def lock_read(self):
        """Acquire a read lock on the dirstate."""
        if self._lock_token is not None:
            raise errors.LockContention(self._lock_token)
        # TODO: jam 20070301 Rather than wiping completely, if the blocks are
        #       already in memory, we could read just the header and check for
        #       any modification. If not modified, we can just leave things
        #       alone.
        self._lock_token = lock.ReadLock(self._filename)
        self._lock_state = 'r'
        self._state_file = self._lock_token.f
        self._wipe_state()

    def lock_write(self):
        """Acquire a write lock on the dirstate."""
        if self._lock_token is not None:
            raise errors.LockContention(self._lock_token)
        # TODO: jam 20070301 Rather than wiping completely, if the blocks are
        #       already in memory, we could read just the header and check for
        #       any modification. If not modified, we can just leave things
        #       alone.
        self._lock_token = lock.WriteLock(self._filename)
        self._lock_state = 'w'
        self._state_file = self._lock_token.f
        self._wipe_state()

    def unlock(self):
        """Drop any locks held on the dirstate."""
        if self._lock_token is None:
            raise errors.LockNotHeld(self)
        # TODO: jam 20070301 Rather than wiping completely, if the blocks are
        #       already in memory, we could read just the header and check for
        #       any modification. If not modified, we can just leave things
        #       alone.
        self._state_file = None
        self._lock_state = None
        self._lock_token.unlock()
        self._lock_token = None
        self._split_path_cache = {}

    def _requires_lock(self):
        """Checks that a lock is currently held by someone on the dirstate"""
        if not self._lock_token:
            raise errors.ObjectNotLocked(self)


def bisect_dirblock(dirblocks, dirname, lo=0, hi=None, cache={}):
    """Return the index where to insert dirname into the dirblocks.

    The return value idx is such that all directory blocks in dirblock[:idx]
    have names < dirname, and all blocks in dirblock[idx:] have names >=
    dirname.

    Optional args lo (default 0) and hi (default len(dirblocks)) bound the
    slice of dirblocks to be searched.
    """
    if hi is None:
        hi = len(dirblocks)
    try:
        dirname_split = cache[dirname]
    except KeyError:
        dirname_split = dirname.split('/')
        cache[dirname] = dirname_split
    while lo < hi:
        mid = (lo + hi) // 2
        # Grab the dirname for the current dirblock
        cur = dirblocks[mid][0]
        try:
            cur_split = cache[cur]
        except KeyError:
            cur_split = cur.split('/')
            cache[cur] = cur_split
        if cur_split < dirname_split:
            lo = mid + 1
        else:
            hi = mid
    return lo

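# Splitting on '/' before comparing matters because dirblocks are ordered
# by path components, which disagrees with byte comparison of joined paths:
#   'a-b' < 'a/b'          # bytewise: '-' (0x2d) sorts before '/' (0x2f)
#   ['a', 'b'] < ['a-b']   # component-wise, the order flips
# so a plain bisect.bisect_left over the raw dirnames would pick the wrong
# insertion point.
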
def pack_stat(st, _encode=base64.encodestring, _pack=struct.pack):
    """Convert stat values into a packed representation."""
    # jam 20060614 it isn't really worth removing more entries if we
    # are going to leave it in packed form.
    # With only st_mtime and st_mode filesize is 5.5M and read time is 275ms
    # With all entries filesize is 5.9M and read time is maybe 280ms
    # well within the noise margin

    # base64.encode always adds a final newline, so strip it off
    return _encode(_pack('>llllll',
        st.st_size, int(st.st_mtime), int(st.st_ctime),
        st.st_dev, st.st_ino, st.st_mode))[:-1]
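
# A minimal usage sketch:
#   import os
#   packed = pack_stat(os.lstat('a-file'))
# The six 32-bit fields pack to 24 bytes, which base64-encode to exactly
# 32 characters once the trailing newline from encodestring is stripped.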