/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 4 -*-
 * vim: set ts=8 sw=4 et tw=78:
 *
 * ***** BEGIN LICENSE BLOCK *****
 * Version: MPL 1.1/GPL 2.0/LGPL 2.1
 *
 * The contents of this file are subject to the Mozilla Public License Version
 * 1.1 (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 * http://www.mozilla.org/MPL/
 *
 * Software distributed under the License is distributed on an "AS IS" basis,
 * WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License
 * for the specific language governing rights and limitations under the
 * License.
 *
 * The Original Code is SpiderMonkey global object code.
 *
 * The Initial Developer of the Original Code is
 * the Mozilla Foundation.
 * Portions created by the Initial Developer are Copyright (C) 2011
 * the Initial Developer. All Rights Reserved.
 *
 * Contributor(s):
 *
 * Alternatively, the contents of this file may be used under the terms of
 * either of the GNU General Public License Version 2 or later (the "GPL"),
 * or the GNU Lesser General Public License Version 2.1 or later (the "LGPL"),
 * in which case the provisions of the GPL or the LGPL are applicable instead
 * of those above. If you wish to allow use of your version of this file only
 * under the terms of either the GPL or the LGPL, and not to allow others to
 * use your version of this file under the terms of the MPL, indicate your
 * decision by deleting the provisions above and replace them with the notice
 * and other provisions required by the GPL or the LGPL. If you do not delete
 * the provisions above, a recipient may use your version of this file under
 * the terms of any one of the MPL, the GPL or the LGPL.
 *
 * ***** END LICENSE BLOCK ***** */

#ifndef jsgc_barrier_h___
#define jsgc_barrier_h___

#include "jsapi.h"
#include "jscell.h"

#include "js/HashTable.h"
/*
 * A write barrier is a mechanism used by incremental or generational GCs to
 * ensure that every value that needs to be marked is marked. In general, the
 * write barrier should be invoked whenever a write can cause the set of things
 * traced through by the GC to change. This includes:
 * - writes to object properties
 * - writes to array slots
 * - writes to fields like JSObject::shape_ that we trace through
 * - writes to fields in private data, like JSGenerator::obj
 * - writes to non-markable fields like JSObject::private that point to
 *   markable data
 * The last category is the trickiest. Even though the private pointer does not
 * point to a GC thing, changing the private pointer may change the set of
 * objects that are traced by the GC. Therefore it needs a write barrier.
 *
 * Every barriered write should have the following form:
 *   <pre-barrier>
 *   obj->field = value; // do the actual write
 *   <post-barrier>
 * The pre-barrier is used for incremental GC and the post-barrier is for
 * generational GC.
 *
 * PRE-BARRIER
 *
 * To understand the pre-barrier, let's consider how incremental GC works. The
 * GC itself is divided into "slices". Between each slice, JS code is allowed to
 * run. Each slice should be short so that the user doesn't notice the
 * interruptions. In our GC, the structure of the slices is as follows:
 *
 * 1. ... JS work, which leads to a request to do GC ...
 * 2. [first GC slice, which performs all root marking and possibly more marking]
 * 3. ... more JS work is allowed to run ...
 * 4. [GC mark slice, which runs entirely in drainMarkStack]
 * 5. ... more JS work ...
 * 6. [GC mark slice, which runs entirely in drainMarkStack]
 * 7. ... more JS work ...
 * 8. [GC marking finishes; sweeping is done non-incrementally; GC is done]
 * 9. ... JS continues uninterrupted now that the GC has finished ...
 *
 * Of course, there may be a different number of slices depending on how much
 * marking is to be done.
 *
 * The danger inherent in this scheme is that the JS code in steps 3, 5, and 7
 * might change the heap in a way that causes the GC to collect an object that
 * is actually reachable. The write barrier prevents this from happening. We use
 * a variant of incremental GC called "snapshot at the beginning." This approach
 * guarantees the invariant that if an object is reachable in step 2, then we
 * will mark it eventually. The name comes from the idea that we take a
 * theoretical "snapshot" of all reachable objects in step 2; all objects in
 * that snapshot should eventually be marked. (Note that the write barrier
 * verifier code takes an actual snapshot.)
 *
 * The basic correctness invariant of a snapshot-at-the-beginning collector is
 * that any object reachable at the end of the GC (step 9) must either:
 *   (1) have been reachable at the beginning (step 2), and thus be in the
 *       snapshot, or
 *   (2) have been newly allocated in step 3, 5, or 7.
 * To deal with case (2), any objects allocated during an incremental GC are
 * automatically marked black.
 *
 * This strategy is actually somewhat conservative: if an object becomes
 * unreachable between steps 2 and 8, it would be safe to collect it. We won't,
 * mainly for simplicity. (Also, note that the snapshot is entirely
 * theoretical. We don't actually do anything special in step 2 that we wouldn't
 * do in a non-incremental GC.)
 *
 * It's the pre-barrier's job to maintain the snapshot invariant. Consider the
 * write "obj->field = value". Let the prior value of obj->field be
 * value0. Since value0 may have been what obj->field contained in step 2, when
 * the snapshot was taken, the barrier marks value0. Note that it only does
 * this if we're in the middle of an incremental GC. Since this is rare, the
 * cost of the write barrier is usually just an extra branch.
 *
 * In practice, we implement the pre-barrier differently based on the type of
 * value0. E.g., see JSObject::writeBarrierPre, which is used if obj->field is
 * a JSObject*. It takes value0 as a parameter.
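 *
 * As an illustration only (setField and field are hypothetical names, not
 * part of this file), a fully expanded barriered write looks roughly like:
 *
 *   void setField(JSObject *obj, JSObject *newValue) {
 *       JSObject::writeBarrierPre(obj->field);  // marks value0 when an
 *                                               // incremental GC is in progress
 *       obj->field = newValue;
 *       // <post-barrier> goes here once generational GC lands (see below)
 *   }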
 *
 * POST-BARRIER
 *
 * These are not yet implemented. Once we get generational GC, they will allow
 * us to keep track of pointers from non-nursery space into the nursery.
 *
 * IMPLEMENTATION DETAILS
 *
 * Since it would be awkward to change every write to memory into a function
 * call, this file contains a bunch of C++ classes and templates that use
 * operator overloading to take care of barriers automatically. In many cases,
 * all that's necessary to make some field be barriered is to replace
 *     Type *field;
 * with
 *     HeapPtr<Type> field;
 * There are also special classes HeapValue and HeapId, which barrier js::Value
 * and jsid, respectively.
 *
 * One additional note: not all object writes need to be barriered. Writes to
 * newly allocated objects do not need a barrier as long as the GC is not
 * allowed to run in between the allocation and the write. In these cases, we
 * use the "obj->field.init(value)" method instead of "obj->field = value".
 * We use the init naming idiom in many places to signify that a field is being
 * assigned for the first time, and that no GCs have taken place between the
 * object allocation and the assignment.
 */
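
/*
 * Putting it together, a minimal sketch of a barriered field (MyNode is a
 * hypothetical type; it assumes MyNode::writeBarrierPre/Post exist, as
 * HeapPtr requires):
 *
 *   struct MyNode {
 *       HeapPtr<MyNode> next;
 *
 *       MyNode() { next.init(NULL); }           // freshly allocated: no pre-barrier
 *       void setNext(MyNode *n) { next = n; }   // operator= runs both barriers
 *   };
 */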

struct JSXML;

namespace js {

/*
 * Ideally, we would like to make the argument to functions like MarkShape be a
 * HeapPtr<const js::Shape>. That would ensure that we don't forget to
 * barrier any fields that we mark through. However, that would prohibit us from
 * passing in a derived class like HeapPtr<js::EmptyShape>.
 *
 * To overcome the problem, we make the argument to MarkShape be a
 * MarkablePtr<const js::Shape>. And we allow conversions from HeapPtr<T>
 * to MarkablePtr<U> as long as T can be converted to U.
 */
template<class T>
class MarkablePtr
{
  public:
    T *value;

    explicit MarkablePtr(T *value) : value(value) {}
};
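
/*
 * For illustration (this exact MarkShape signature is an assumption for the
 * sketch, not a quote from the marking code):
 *
 *   void MarkShape(JSTracer *trc, MarkablePtr<const Shape> shape, const char *name);
 *
 *   HeapPtr<EmptyShape> empty;
 *   MarkShape(trc, empty, "emptyShape");  // HeapPtr<EmptyShape> coerces to
 *                                         // MarkablePtr<const Shape>
 */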

template<class T, typename Unioned = uintptr_t>
class HeapPtr
{
    union {
        T *value;
        Unioned other;
    };

  public:
    HeapPtr() : value(NULL) {}
    explicit HeapPtr(T *v) : value(v) { post(); }
    explicit HeapPtr(const HeapPtr<T> &v) : value(v.value) { post(); }

    ~HeapPtr() { pre(); }

    /* Use this to install a pointer into a newly allocated object. */
    void init(T *v) {
        JS_ASSERT(!IsPoisonedPtr<T>(v));
        value = v;
        post();
    }

    /* Use this to set the pointer to NULL. */
    void clear() {
        pre();
        value = NULL;
    }

    /* Use this if the automatic coercion to T* isn't working. */
    T *get() const { return value; }

    /*
     * Use these if you want to change the value without invoking the barrier.
     * Obviously this is dangerous unless you know the barrier is not needed.
     */
    T **unsafeGet() { return &value; }
    void unsafeSet(T *v) { value = v; }

    Unioned *unsafeGetUnioned() { return &other; }

    HeapPtr<T, Unioned> &operator=(T *v) {
        pre();
        JS_ASSERT(!IsPoisonedPtr<T>(v));
        value = v;
        post();
        return *this;
    }

    HeapPtr<T, Unioned> &operator=(const HeapPtr<T> &v) {
        pre();
        JS_ASSERT(!IsPoisonedPtr<T>(v.value));
        value = v.value;
        post();
        return *this;
    }

    T &operator*() const { return *value; }
    T *operator->() const { return value; }

    operator T*() const { return value; }

    /*
     * This coerces to MarkablePtr<U> as long as T can coerce to U. See the
     * comment for MarkablePtr above.
     */
    template<class U>
    operator MarkablePtr<U>() const { return MarkablePtr<U>(value); }

  private:
    void pre() { T::writeBarrierPre(value); }
    void post() { T::writeBarrierPost(value, (void *)&value); }

    /* Make this a friend so it can access pre() and post(). */
    template<class T1, class T2>
    friend inline void
    BarrieredSetPair(JSCompartment *comp,
                     HeapPtr<T1> &v1, T1 *val1,
                     HeapPtr<T2> &v2, T2 *val2);
};

/*
 * This is a hack for RegExpStatics::updateFromMatch. It allows us to do two
 * barriers with only one branch to check if we're in an incremental GC.
 */
template<class T1, class T2>
static inline void
BarrieredSetPair(JSCompartment *comp,
                 HeapPtr<T1> &v1, T1 *val1,
                 HeapPtr<T2> &v2, T2 *val2)
{
    if (T1::needWriteBarrierPre(comp)) {
        v1.pre();
        v2.pre();
    }
    v1.unsafeSet(val1);
    v2.unsafeSet(val2);
    v1.post();
    v2.post();
}
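
/*
 * Usage sketch (the field names below are hypothetical; the real caller is
 * RegExpStatics::updateFromMatch):
 *
 *   BarrieredSetPair(cx->compartment,
 *                    statics.matchPairsInput, input,
 *                    statics.pendingInput, input);
 */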

struct Shape;
class BaseShape;
namespace types { struct TypeObject; }

typedef HeapPtr<JSObject> HeapPtrObject;
typedef HeapPtr<JSFunction> HeapPtrFunction;
typedef HeapPtr<JSString> HeapPtrString;
typedef HeapPtr<JSScript> HeapPtrScript;
typedef HeapPtr<Shape> HeapPtrShape;
typedef HeapPtr<BaseShape> HeapPtrBaseShape;
typedef HeapPtr<types::TypeObject> HeapPtrTypeObject;
typedef HeapPtr<JSXML> HeapPtrXML;

/* Useful for hashtables with a HeapPtr as key. */
template<class T>
struct HeapPtrHasher
{
    typedef HeapPtr<T> Key;
    typedef T *Lookup;

    static HashNumber hash(Lookup obj) { return DefaultHasher<T *>::hash(obj); }
    static bool match(const Key &k, Lookup l) { return k.get() == l; }
};

/* Specialized hashing policy for HeapPtrs. */
template <class T>
struct DefaultHasher< HeapPtr<T> > : HeapPtrHasher<T> { };
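
/*
 * With this policy in place, a HeapPtr key works directly, e.g. (sketch):
 *
 *   typedef HashMap<HeapPtrObject, uint32_t, DefaultHasher<HeapPtrObject> > Map;
 *
 * Lookups take a raw T* (the Lookup type), so no barrier is invoked just to
 * search the table.
 */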

class EncapsulatedValue
{
  protected:
    Value value;

    /*
     * Ensure that EncapsulatedValue is not constructible, except by our
     * implementations.
     */
    EncapsulatedValue() MOZ_DELETE;
    EncapsulatedValue(const EncapsulatedValue &v) MOZ_DELETE;
    EncapsulatedValue &operator=(const Value &v) MOZ_DELETE;
    EncapsulatedValue &operator=(const EncapsulatedValue &v) MOZ_DELETE;

    EncapsulatedValue(const Value &v) : value(v) {}
    ~EncapsulatedValue() {}

  public:
    const Value &get() const { return value; }
    Value *unsafeGet() { return &value; }
    operator const Value &() const { return value; }

    bool isUndefined() const { return value.isUndefined(); }
    bool isNull() const { return value.isNull(); }
    bool isBoolean() const { return value.isBoolean(); }
    bool isTrue() const { return value.isTrue(); }
    bool isFalse() const { return value.isFalse(); }
    bool isNumber() const { return value.isNumber(); }
    bool isInt32() const { return value.isInt32(); }
    bool isDouble() const { return value.isDouble(); }
    bool isString() const { return value.isString(); }
    bool isObject() const { return value.isObject(); }
    bool isMagic(JSWhyMagic why) const { return value.isMagic(why); }
    bool isGCThing() const { return value.isGCThing(); }
    bool isMarkable() const { return value.isMarkable(); }

    bool toBoolean() const { return value.toBoolean(); }
    double toNumber() const { return value.toNumber(); }
    int32_t toInt32() const { return value.toInt32(); }
    double toDouble() const { return value.toDouble(); }
    JSString *toString() const { return value.toString(); }
    JSObject &toObject() const { return value.toObject(); }
    JSObject *toObjectOrNull() const { return value.toObjectOrNull(); }
    void *toGCThing() const { return value.toGCThing(); }

    JSGCTraceKind gcKind() const { return value.gcKind(); }

    uint64_t asRawBits() const { return value.asRawBits(); }

#ifdef DEBUG
    JSWhyMagic whyMagic() const { return value.whyMagic(); }
#endif

    static inline void writeBarrierPre(const Value &v);
    static inline void writeBarrierPre(JSCompartment *comp, const Value &v);

  protected:
    inline void pre();
    inline void pre(JSCompartment *comp);
};

class HeapValue : public EncapsulatedValue
{
  public:
    explicit inline HeapValue();
    explicit inline HeapValue(const Value &v);
    explicit inline HeapValue(const HeapValue &v);
    inline ~HeapValue();

    inline void init(const Value &v);
    inline void init(JSCompartment *comp, const Value &v);

    inline HeapValue &operator=(const Value &v);
    inline HeapValue &operator=(const HeapValue &v);

    /*
     * This is a faster version of operator=. Normally, operator= has to
     * determine the compartment of the value before it can decide whether to
     * do the barrier. If you already know the compartment, it's faster to
     * pass it in.
     */
    inline void set(JSCompartment *comp, const Value &v);

    static inline void writeBarrierPost(const Value &v, void *addr);
    static inline void writeBarrierPost(JSCompartment *comp, const Value &v, void *addr);

  private:
    inline void post();
    inline void post(JSCompartment *comp);
};
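
/*
 * Sketch of the fast path described above (the call site is hypothetical):
 *
 *   JSCompartment *comp = obj->compartment();
 *   for (uint32_t i = 0; i < n; i++)
 *       vals[i].set(comp, rhs[i]);   // compartment computed once, not per value
 */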

class HeapSlot : public EncapsulatedValue
{
    /*
     * Operator= is not valid for HeapSlot because it must take the object and
     * slot offset to provide to the post/generational barrier.
     */
    inline HeapSlot &operator=(const Value &v) MOZ_DELETE;
    inline HeapSlot &operator=(const HeapValue &v) MOZ_DELETE;
    inline HeapSlot &operator=(const HeapSlot &v) MOZ_DELETE;

  public:
    explicit inline HeapSlot() MOZ_DELETE;
    explicit inline HeapSlot(JSObject *obj, uint32_t slot, const Value &v);
    explicit inline HeapSlot(JSObject *obj, uint32_t slot, const HeapSlot &v);
    inline ~HeapSlot();

    inline void init(JSObject *owner, uint32_t slot, const Value &v);
    inline void init(JSCompartment *comp, JSObject *owner, uint32_t slot, const Value &v);

    inline void set(JSObject *owner, uint32_t slot, const Value &v);
    inline void set(JSCompartment *comp, JSObject *owner, uint32_t slot, const Value &v);

    static inline void writeBarrierPost(JSObject *obj, uint32_t slot);
    static inline void writeBarrierPost(JSCompartment *comp, JSObject *obj, uint32_t slotno);

  private:
    inline void post(JSObject *owner, uint32_t slot);
    inline void post(JSCompartment *comp, JSObject *owner, uint32_t slot);
};
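
/*
 * E.g., storing into an object slot (illustrative; getSlotRef is an assumed
 * accessor, not defined in this file):
 *
 *   obj->getSlotRef(slot).set(obj, slot, v);  // owner and slot index feed the
 *                                             // (future) post-barrier
 */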

static inline const Value *
Valueify(const EncapsulatedValue *array)
{
    JS_STATIC_ASSERT(sizeof(HeapValue) == sizeof(Value));
    JS_STATIC_ASSERT(sizeof(HeapSlot) == sizeof(Value));
    return (const Value *)array;
}

class HeapSlotArray
{
    HeapSlot *array;

  public:
    HeapSlotArray(HeapSlot *array) : array(array) {}

    operator const Value *() const { return Valueify(array); }
    operator HeapSlot *() const { return array; }

    HeapSlotArray operator +(int offset) const { return HeapSlotArray(array + offset); }
    HeapSlotArray operator +(uint32_t offset) const { return HeapSlotArray(array + offset); }
};

class HeapId
{
    jsid value;

  public:
    explicit HeapId() : value(JSID_VOID) {}
    explicit inline HeapId(jsid id);

    inline ~HeapId();

    inline void init(jsid id);

    inline HeapId &operator=(jsid id);
    inline HeapId &operator=(const HeapId &v);

    bool operator==(jsid id) const { return value == id; }
    bool operator!=(jsid id) const { return value != id; }

    jsid get() const { return value; }
    jsid *unsafeGet() { return &value; }
    operator jsid() const { return value; }

  private:
    inline void pre();
    inline void post();

    HeapId(const HeapId &v);
};

/*
 * Incremental GC requires that weak pointers have read barriers. This is mostly
 * an issue for empty shapes stored in JSCompartment. The problem happens when,
 * during an incremental GC, some JS code stores one of the compartment's empty
 * shapes into an object already marked black. Normally, this would not be a
 * problem, because the empty shape would have been part of the initial snapshot
 * when the GC started. However, since this is a weak pointer, it isn't. So we
 * may collect the empty shape even though a live object points to it. To fix
 * this, we mark these empty shapes black whenever they get read out.
 */
template<class T>
class ReadBarriered
{
    T *value;

  public:
    ReadBarriered() : value(NULL) {}
    ReadBarriered(T *value) : value(value) {}

    T *get() const {
        if (!value)
            return NULL;
        T::readBarrier(value);
        return value;
    }

    operator T*() const { return get(); }

    T &operator*() const { return *get(); }
    T *operator->() const { return get(); }

    T *unsafeGet() { return value; }

    void set(T *v) { value = v; }

    operator bool() { return !!value; }

    template<class U>
    operator MarkablePtr<U>() const { return MarkablePtr<U>(value); }
};
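
/*
 * Sketch of the weak-pointer pattern described above (the variable is
 * hypothetical; T::readBarrier, here EmptyShape::readBarrier, is assumed to
 * exist, as ReadBarriered requires):
 *
 *   ReadBarriered<EmptyShape> emptyShape;    // weakly held, e.g. by a compartment
 *   EmptyShape *shape = emptyShape;          // get() runs EmptyShape::readBarrier,
 *                                            // marking the shape black mid-GC
 */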

class ReadBarrieredValue
{
    Value value;

  public:
    ReadBarrieredValue() : value(UndefinedValue()) {}
    ReadBarrieredValue(const Value &value) : value(value) {}

    inline const Value &get() const;
    inline operator const Value &() const;

    inline JSObject &toObject() const;
};

} /* namespace js */

#endif /* jsgc_barrier_h___ */
|