Abstract: Twenty years after the initial proposal [4], hardware transactional memory is becoming commonplace. All commercial versions to date—and all that are likely to emerge in the near future—are best-effort implementations: a transaction may abort and retry not only because of an actual data conflict with some concurrent transaction, but also because of limitations on the instructions that can be executed, the time that can be consumed, or the size or associativity of the space used to buffer speculative reads and writes. As programmers begin to write programs with transactions—particularly large transactions—scaling problems will be inevitable, and we can expect growing demand for programming techniques that minimize transaction conflicts and hardware overflow. This abstract introduces one such technique. It exploits the common pattern in which (1) a transaction spends significant time "figuring out what it wants to update" before actually making the updates, and (2) the decision as to what to update can be checked for correctness more easily than it could be computed in the first place. A transaction that satisfies these properties can then be partitioned into a planning phase and an update phase: the planning phase determines what to update; the update phase double-checks the correctness of the plan and performs it. The planning phase can often be performed in ordinary code; the update phase remains a true transaction. Information is passed between the two in the form of a validator object that encapsulates a description of the desired update (the plan) and whatever information is needed to confirm its continued correctness. Partitioned …
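To make the pattern concrete, the following is a minimal sketch (not taken from the paper) of a partitioned insertion into a sorted linked list. The planning phase traverses the list in ordinary, nontransactional code to find the insertion point; the validator records the planned predecessor and the successor observed at planning time; the update phase re-validates that single link—far cheaper than redoing the traversal—and performs the insert. All names (`Validator`, `plan_insert`, `try_insert`) are illustrative, and a `std::mutex` stands in for the hardware transaction, since HTM primitives are platform-specific:

```cpp
#include <climits>
#include <mutex>

// Illustrative sorted-list node; names are assumptions, not from the paper.
struct Node {
    int key;
    Node* next;
};

// Validator object: the "plan" (where to link the new node) plus the
// information needed to confirm the plan is still correct.
struct Validator {
    Node* pred;   // planned predecessor
    Node* succ;   // successor observed during planning
};

struct SortedList {
    Node head{INT_MIN, nullptr};  // sentinel
    std::mutex txn;  // stands in for a hardware transaction in this sketch

    // Planning phase: ordinary (nontransactional) code figures out
    // what to update and records the result in a validator.
    Validator plan_insert(int key) {
        Node* pred = &head;
        Node* curr = head.next;
        while (curr && curr->key < key) {
            pred = curr;
            curr = curr->next;
        }
        return Validator{pred, curr};
    }

    // Update phase: the (simulated) transaction double-checks the plan --
    // one pointer comparison instead of a full traversal -- then performs it.
    bool try_insert(int key) {
        Validator v = plan_insert(key);
        std::lock_guard<std::mutex> g(txn);  // begin "transaction"
        if (v.pred->next != v.succ)          // plan invalidated by a
            return false;                    // concurrent update: retry
        v.pred->next = new Node{key, v.succ};
        return true;
    }

    void insert(int key) {
        while (!try_insert(key)) {}  // retry until the plan validates
    }
};
```

The payoff is that only the short validate-and-link step runs inside the transaction, so the speculative read/write footprint stays small regardless of how long the list is.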