Commit
======

The basic purpose of commit is to

1 - create and store a new revision based on the contents of the working tree
2 - make this the new basis revision for the working tree

We can do a selected commit of only some files or subtrees.

Minimum work
------------

The best performance we could hope for is:

- stat each versioned selected working file once
- read from the workingtree and write into the repository any new file texts
- in general, do work proportional to the size of the shape (eg
  inventory) of the old and new selected trees, and to the total size of
  the modified files

In more detail:

1.0 - Store new file texts: if a versioned file contains a new text
there is no avoiding storing it. To determine which ones have changed
we must go over the workingtree and at least stat each file. If the
file has been modified since it was last hashed, it must be read in.
Ideally we would read it only once, and either notice that it has not
changed, or store it at that point.
On the other hand we want new code to be able to handle files that are
larger than will fit in memory. We may then need to read each file up
to two times: once to determine if there is a new text and calculate
its hash, and again to store it.
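
As an illustration (not bzrlib code), hashing in fixed-size chunks keeps the
"determine if there is a new text" pass to a single sequential read, even for
files larger than memory; known_sha1s below is a stand-in for whatever index
of already-stored texts the repository exposes::

    import hashlib

    def file_sha1(path, chunk_size=1 << 20):
        """Return the sha1 of a file's contents without loading it whole."""
        h = hashlib.sha1()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(chunk_size), b''):
                h.update(chunk)
        return h.hexdigest()

    def needs_storing(path, known_sha1s):
        """True if this file's text is not already in the repository."""
        return file_sha1(path) not in known_sha1s
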
1.1 - Store a tree-shape description (ie inventory or similar). This
describes the non-file objects, and provides a reference from the
Revision to the texts within it.

1.2 - Generate and store a new revision object.

1.3 - Do delta-compression on the stored objects. (git notably does
not do this at commit time, deferring this entirely until later.)
This requires finding the appropriate basis for each modified file: in
the current scheme we get the file id, last-revision from the
dirstate, look into the knit for that text, extract that text in
total, generate a delta, then store that into the knit. Most delta
operations are O(n^2) to O(n^3) in the size of the modified files.
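
For illustration only, here is a line-based delta in the spirit of the
"extract that text in total, generate a delta" step, using Python's difflib
rather than bzr's knit delta format; SequenceMatcher has the kind of
super-linear worst case this paragraph is pointing at::

    import difflib

    def line_delta(basis_lines, new_lines):
        """Return (start, end, replacement_lines) edits against the basis text."""
        sm = difflib.SequenceMatcher(None, basis_lines, new_lines)
        return [(i1, i2, new_lines[j1:j2])
                for tag, i1, i2, j1, j2 in sm.get_opcodes()
                if tag != 'equal']
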
1.4 - Cache annotation information for the changes: at the moment this
is done as part of the delta storage. There are some flaws in that
approach, such as that it is not updated when ghosts are filled, and
the annotation can't be re-run with new diff parameters.

2.1 - Make the new revision the basis for the tree, and clear the list
of parents. Strictly this is all that's logically necessary, unless
the working tree format requires more work.

The dirstate format does require more work, because it caches the
parent tree data for each file within the working tree data. In
practice this means that every commit rewrites the entire dirstate
file - we could try to avoid rewriting the whole file but this may be
difficult because variable-length data (the last-changed revision id)
is inserted into many rows.

The current dirstate design then seems to mean that any commit of a
single file imposes a cost proportional to the size of the current
workingtree. Maybe there are other benefits that outweigh this.
Alternatively, if it were fast enough for operations to always look at
the original storage of the parent trees, we could do without the
cache.

2.2 - Record the observed file hashes into the workingtree control
files. For the files that we just committed, we have the information
to store a valid hash cache entry: we know their stat information and
the sha1 of the file contents. This is not strictly necessary to the
speed of commit, but it will be useful later in avoiding reading those
files, and the only cost of doing it now is writing it out.
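
A minimal sketch of recording such an entry; hashcache here is a hypothetical
dict-like cache rather than bzrlib's actual HashCache class, and the
fingerprint fields are illustrative::

    import os

    def record_hashcache_entry(hashcache, path, sha1):
        """Pair the sha1 computed during commit with the file's current stat."""
        st = os.lstat(path)
        hashcache[path] = (sha1, (st.st_size, st.st_mtime, st.st_ctime))
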
In fact there are some user interface niceties that complicate this:

3 - Before starting the commit proper, we prompt for a commit message
and in that commit message editor we show a list of the files that
will be committed: basically the output of bzr status. This is
essentially the same as the list of changes we detect while storing the
commit, but because the user will sometimes change the tree after
opening the commit editor and expect the final state to be committed, I
think we do have to look for changes twice. Since it takes the user a
while to enter a message, this is not a big problem as long as both the
status summary and the commit are individually fast.

4 - As the commit proceeds (or after?) we show another status-like
summary. Just printing the names of modified files as they're stored
would be easy. Recording deleted and renamed files or directories is
more work: this can only be done by reference to the primary parent
tree and requires it be read in. Worse, reporting renames requires
searching by id across the entire parent tree. Possibly full
reporting should be a default-off verbose option because it does
require more work beyond the commit itself.

5 - Bazaar currently allows for missing files to be automatically
marked as removed at the time of commit. Leaving aside the ui
consequences, this means that we have to update the working inventory
to mark these files as removed. Since, as discussed above, we always
have to rewrite the dirstate on commit, this is not substantial, though
we should make sure we do this in one pass, not two. I have
previously proposed making this behaviour a non-default option.

We may need to run hooks or generate signatures during commit, but
they don't seem to have substantial performance consequences.

If one wanted to optimize solely for the speed of commit, I think
hash-addressed file-per-text storage like in git (or bzr 0.1) is very
good. Remarkably, it does not need to read the inventory for the
previous revision. For each versioned file, we just need to get its
hash, either by reading the file or validating its stat data. If that
hash is not already in the repository, the file is just copied in and
compressed. As directories are traversed, they're turned into texts
and stored as well, and then finally the revision is too. This does
depend on later doing some delta compression of these texts.
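
A minimal sketch of that storage scheme (git-like; not bzr's actual store
code): each text lands at a path derived from its hash, so "already stored?"
is a single filesystem check and unchanged files cost nothing beyond hashing::

    import hashlib
    import os
    import zlib

    def store_text(repo_dir, data):
        """Store data (bytes) under its sha1, compressed, if not already present."""
        sha1 = hashlib.sha1(data).hexdigest()
        path = os.path.join(repo_dir, sha1[:2], sha1[2:])
        if not os.path.exists(path):
            os.makedirs(os.path.dirname(path), exist_ok=True)
            with open(path, 'wb') as f:
                f.write(zlib.compress(data))
        return sha1
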
Variations on this are possible. Rather than writing a single file
into the repository for each text, we could fold them into a single
collation or pack file. That would create a smaller number of files
in the repository, but looking up a single text would require looking
into their indexes rather than just asking the filesystem.

Rather than using hashes we can use file-id/rev-id pairs as at
present, which has several consequences, both pro and con.

Interface stack
---------------

The commit api is invoked by the command interface, and copies information
from the tree into the branch and its repository, possibly updating the
WorkingTree afterwards.

The command interface passes:

* a commit message (from an option, if any),
* or an indication that it should be read interactively from the ui object;
* a list of files to commit
* an option for a dry-run commit
* verbose option, or callback to indicate
* timestamp, timezone, committer, chosen revision id
* config (for what?)
* option for local-only commit on a bound branch
* option for strict commits (fail if there are unknown or missing files)
* option to allow "pointless" commits (with no tree changes)

>>> Branch.commit(from_tree, message, files_to_commit)
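
Bundled together, the options above suggest a call surface roughly like the
following stub; the parameter names are illustrative, not bzrlib's actual
signature::

    def commit(branch, from_tree, message=None, specific_files=None,
               dry_run=False, verbose=False, reporter=None,
               timestamp=None, timezone=None, committer=None, rev_id=None,
               config=None, local=False, strict=False, allow_pointless=False):
        """Record a new revision of from_tree on branch (interface sketch only)."""
        raise NotImplementedError
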
There will be different implementations of this for different Branch
classes, whether for foreign branches or Bazaar repositories using
different storage methods.

Most of the commit should occur during a single lockstep iteration across
the workingtree and parent trees. The WorkingTree interface needs to
provide methods that give commit all it needs. Some of these methods
(such as answering the file's last change revision) may be deprecated in
newer working trees and there we have a choice of either calculating the
value from the data that is present, or refusing to support commit to
newer repositories.

For a dirstate tree the iteration of changes from the parent can easily be
done within its own iter_changes.
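
A sketch of driving commit from one such iteration; the tuple layout follows
the general shape of bzr's iter_changes results (file id, old/new paths,
content-changed flag, then versioned/parent/name/kind/executable pairs), but
treat the exact fields and method name as assumptions of this sketch::

    def iter_commit_changes(work_tree, basis_tree, specific_files=None):
        """Yield the changed files commit needs to process, in one pass."""
        for (file_id, (old_path, new_path), changed_content, versioned,
             parent, name, kind, executable) in work_tree.iter_changes(
                basis_tree, specific_files=specific_files):
            yield file_id, old_path, new_path, changed_content
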
XXX: We currently don't support selective-file commit of a merge; this
could be done if we decide how it should be recorded - is this to be
stored as an overall merge revision; as preliminary non-merge revisions;
or will the per-file graph diverge from the revision graph?

Other things commit needs to do:

* check if there are any conflicts in the tree - if so, commit cannot
  continue (a sketch of these checks follows this list)
* check if there are any unknown files, if --strict or automatic add is
  turned on
* check that the working tree basis version is up to date with the branch tip
* when automatically adding new files or deleting missing files during
  commit, they must be noted during commit and written into the working
  tree at some point
* refuse "pointless" commits with no file changes - should be easy by
  just refusing to do the final step of storing a new overall inventory
  and revision object
* heuristic detection of renames between add and delete (out of scope for
  this change)
* pushing changes to a master branch, if any
* running hooks, pre and post commit
* prompting for a commit message if necessary, including a list of the
  changes that have already been observed
* if there are tree references and recursing into them is enabled, then
  do so
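
A minimal sketch of the conflict, strict and pointless-commit checks from the
list above, assuming a tree object with conflicts() and unknowns() methods
(bzr's WorkingTree offers methods of this kind, but treat the exact names
here as illustrative)::

    class CommitError(Exception):
        pass

    def check_commit_preconditions(tree, strict=False, allow_pointless=False,
                                   has_changes=True):
        """Refuse the commit early if its preconditions are not met."""
        if list(tree.conflicts()):
            raise CommitError("tree has unresolved conflicts")
        if strict and list(tree.unknowns()):
            raise CommitError("unknown files in tree and --strict given")
        if not has_changes and not allow_pointless:
            raise CommitError("no changes to commit (pointless commit refused)")
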
Updates that need to be made in the working tree, either on conclusion
of commit or during the scan, include:

* Changes made to the tree shape, including automatic adds, renames or
  deletes
* For trees (eg dirstate) that cache parent inventories, the old parent
  information must be removed and the new one inserted
* The tree hashcache information should be updated to reflect the stat
  value at which the file was the same as the committed version. This
  needs to be done carefully to prevent inconsistencies if the file is
  modified during or shortly after the commit. Perhaps it would work to
  read the mtime of the file before we read its text to commit (see the
  sketch after this list).
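
One way to act on that suggestion, sketched with illustrative names: capture
the stat fingerprint before reading, re-check it afterwards, and decline to
cache if the file changed underneath us::

    import hashlib
    import os

    def hash_for_cache(path):
        """Return (sha1, fingerprint); fingerprint is None if caching is unsafe."""
        before = os.lstat(path)
        with open(path, 'rb') as f:
            sha1 = hashlib.sha1(f.read()).hexdigest()
        after = os.lstat(path)
        if (before.st_mtime, before.st_size) != (after.st_mtime, after.st_size):
            return sha1, None
        return sha1, (after.st_size, after.st_mtime, after.st_ctime)
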
Dirstate inventories may be most easily updated in a single operation at
the end; however, it may be best to accumulate data as we proceed through
the tree rather than revisiting it at the end.

Showing a progress bar for commit may not be necessary if we report files
as they are committed. Alternatively we could transiently show a progress
bar for each directory that's scanned, even if no changes are observed.

Commit needs to collect a list of added/changed/removed files, each of which
must have its text stored (if any) and its containing directory updated. This
can be done by calling Tree._iter_changes on the source tree, asking for
changes against the basis tree.
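
Classifying those results for reporting might look like this, reusing the
tuple shape assumed in the earlier iteration sketch::

    def classify_changes(changes):
        """Split iter_changes-style results into added/modified/removed paths."""
        added, modified, removed = [], [], []
        for (file_id, (old_path, new_path), changed_content, versioned,
             parent, name, kind, executable) in changes:
            if versioned == (False, True):
                added.append(new_path)
            elif versioned == (True, False):
                removed.append(old_path)
            elif changed_content or name[0] != name[1] or parent[0] != parent[1]:
                modified.append(new_path)
        return added, modified, removed
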
In the 0.17 model the commit operation needs to know the per-file parents
and per-file last-changed revision.

XXX: If we want to retain explicitly stored per-file graphs, it would seem
that we do need to record per-file parents. We have not yet finally
settled whether we want to remove them or treat them as a cache. This api
stack is still ok whether we do or not, but the internals of it may
change.

(In this and other operations we must avoid having multiple layers walk
over the tree separately. For example, it is no good to have the Command
layer walk the tree to generate a list of all file ids to commit, because
the tree will also be walked later. The layers that do need to operate
per-file should probably be bound together in a per-dirblock iterator,
rather than each iterating independently.)