Commit fad5471

[EASTL 3.17.06] (#412)
- Added more information for read_depends in the atomic.h doc
- Added a comment about 128-bit atomic intrinsics on MSVC
- Added some extra 128-bit atomic load tests
- Added a set_decomposition/set_difference_2 implementation
- Allowed eastl::atomic_load_cond to use non-default-constructible types
- Added static_assert tests for eastl::size and eastl::ssize
- Added support for constant initialization to eastl::atomic<T>
- Fixed an MSVC warning in variant
- Fixed a clang warning for single-line if statement indentation
- Added Paul Pedriana's documentation for the motivation of EASTL submitted to the ISO committee
- Fixed an off-by-one error in Timsort: if the final run has exactly length 2, under some conditions we end up trying to sort off the end of the input array
- Added a missing header to bonus/adaptors.h
- Fixed usage of the wrong template parameter for the alignment of the atomic type pun cast
1 parent dda5082 commit fad5471

26 files changed: +4903 -4530 lines changed

README.md

Lines changed: 5 additions & 2 deletions
@@ -44,8 +44,11 @@ Please see [CONTRIBUTING.md](CONTRIBUTING.md) for details on compiling and testi
 
 EASTL was created by Paul Pedriana and he maintained the project for roughly 10 years.
 
-Roberto Parolin is the current EASTL owner and primary maintainer within EA and is responsible for the open source repository.
-Max Winkler is the secondary maintainer for EASTL within EA and on the open source repository.
+EASTL was subsequently maintained by Roberto Parolin for more than 8 years.
+He was the driver and proponent for getting EASTL opensourced.
+Rob was a mentor to all members of the team and taught us everything we ever wanted to know about C++ spookyness.
+
+Max Winkler is the current EASTL owner and primary maintainer within EA and is responsible for the open source repository.
 
 Significant EASTL contributions were made by (in alphabetical order):
 

doc/EASTL-n2271.pdf

337 KB (binary file not shown)

include/EASTL/algorithm.h

Lines changed: 491 additions & 366 deletions
Large diffs are not rendered by default.

include/EASTL/atomic.h

Lines changed: 56 additions & 16 deletions
@@ -243,36 +243,27 @@
 // Deviations from the standard. This does not include new features added:
 //
 // 1.
-// Description: Atomic class constructors are not and will not be constexpr.
-// Reasoning  : We assert in the constructor that the this pointer is properly aligned.
-//              There are no other constexpr functions that can be called in a constexpr
-//              context. The only use for constexpr here is const-init time or ensuring
-//              that the object's value is placed in the executable at compile-time instead
-//              of having to call the ctor at static-init time. If you are using constexpr
-//              to solve static-init order fiasco, there are other solutions for that.
-//
-// 2.
 // Description: Atomics are always lock free
 // Reasoning  : We don't want people to fall into performance traps where implicit locking
 //              is done. If your user defined type is large enough to not support atomic
 //              instructions then your user code should do the locking.
 //
-// 3.
+// 2.
 // Description: Atomic objects can not be volatile
 // Reasoning  : Volatile objects do not make sense in the context of eastl::atomic<T>.
 //              Use the given memory orders to get the ordering you need.
 //              Atomic objects have to become visible on the bus. See below for details.
 //
-// 4.
+// 3.
 // Description: Consume memory order is not supported
 // Reasoning  : See below for the reasoning.
 //
-// 5.
+// 4.
 // Description: ATOMIC_INIT() macros and the ATOMIC_LOCK_FREE macros are not implemented
 // Reasoning  : Use the is_lock_free() method instead of the macros.
 //              ATOMIC_INIT() macros aren't needed since the default constructor value initializes.
 //
-// 6.
+// 5.
 // Description: compare_exchange failure memory order cannot be stronger than success memory order
 // Reasoning  : Besides the argument that it ideologically does not make sense that a failure
 //              of the atomic operation shouldn't have a stricter ordering guarantee than the
@@ -284,7 +275,7 @@
 //              that versions of compilers that say they support C++17 do not properly adhere to this
 //              new requirement in their intrinsics. Thus we will not support this.
 //
-// 7.
+// 6.
 // Description: All memory orders are distinct types instead of enum values
 // Reasoning  : This will not affect how the API is used in user code.
 //              It allows us to statically assert on invalid memory orders since they are compile-time types
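
Aside (not part of the diff): because each memory order is a distinct type rather than an enum value, an order that is invalid for a given operation is rejected at compile time. A minimal sketch of what that looks like in user code, assuming only the load() overloads described in the doc; gValue and LoadValue are illustrative names:

    #include <EASTL/atomic.h>

    static eastl::atomic<int> gValue{};

    int LoadValue()
    {
        // memory_order_acquire is a valid order for a load, so this compiles.
        return gValue.load(eastl::memory_order_acquire);

        // Because the orders are distinct compile-time types, a store-only order
        // on a load is caught statically; the following would fail to compile:
        // return gValue.load(eastl::memory_order_release);
    }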
@@ -1384,13 +1375,62 @@
 // The read_depends operation can be used on loads from only an eastl::atomic<T*> type. The return pointer of the load must and can only be used to then further load values. And that is it.
 // If you are unsure, upgrade this load to an acquire operation.
 //
-// MyStruct* ptr = gAtomicPtr.load(read_depends);
+// MyStruct* ptr = gAtomicPtr.load(memory_order_read_depends);
 // int a = ptr->a;
 // int b = ptr->b;
 // return a + b;
 //
 // The loads from ptr after the gAtomicPtr load ensure that the correct values of a and b are observed. This pairs with a Release operation on the writer side by releasing gAtomicPtr.
 //
+//
+// As said above, the returned pointer from a .load(memory_order_read_depends) can only be used to then further load values.
+// Dereferencing (*) and arrow dereferencing (->) are valid operations on return values from .load(memory_order_read_depends).
+//
+// MyStruct* ptr = gAtomicPtr.load(memory_order_read_depends);
+// int a = ptr->a;  - VALID
+// int a = *ptr;    - VALID
+//
+// Since dereferencing is just indexing via some offset from some base address, this also means addition and subtraction of constants is ok.
+//
+// int* ptr = gAtomicPtr.load(memory_order_read_depends);
+// int a = *(ptr + 1)  - VALID
+// int a = *(ptr - 1)  - VALID
+//
+// Casts also work correctly since casting is just offsetting a pointer depending on the inheritance hierarchy or if using intrusive containers.
+//
+// ReadDependsIntrusive** intrusivePtr = gAtomicPtr.load(memory_order_read_depends);
+// ReadDependsIntrusive* ptr = ((ReadDependsIntrusive*)(((char*)intrusivePtr) - offsetof(ReadDependsIntrusive, next)));
+//
+// Base* basePtr = gAtomicPtr.load(memory_order_read_depends);
+// Derived* derivedPtr = static_cast<Derived*>(basePtr);
+//
+// Both of the above casts from the result of the load are valid for this memory order.
+//
+// You can reinterpret_cast the returned pointer value to a uintptr_t to set bits, clear bits, or xor bits, but the pointer must be cast back before doing anything else.
+//
+// int* ptr = gAtomicPtr.load(memory_order_read_depends);
+// ptr = reinterpret_cast<int*>(reinterpret_cast<uintptr_t>(ptr) & ~3);
+//
+// Do not use the results of any equality or relational operator (==, !=, >, <, >=, <=) in the computation of offsets before dereferencing.
+// As we learned above in the Control Dependencies section, CPUs will not order Load-Load Control Dependencies. Relational and equality operators are often compiled using branches.
+// They don't have to be compiled to branches; conditional instructions could be used, or some architectures provide comparison instructions such as set-less-than which do not need
+// branches when using the result of the relational operator in arithmetic statements. Then again, short circuiting may need to introduce branches since C++ guarantees that the
+// rest of the expression must not be evaluated.
+// The following odd code is forbidden.
+//
+// int* ptr = gAtomicPtr.load(memory_order_read_depends);
+// int* ptr2 = ptr + (ptr >= 0);
+// int a = *ptr2;
+//
+// Only equality comparisons against nullptr are allowed. This is because the compiler cannot assume that the address of the loaded value is some known address and substitute our loaded value.
+// int* ptr = gAtomicPtr.load(memory_order_read_depends);
+// if (ptr == nullptr);  - VALID
+// if (ptr != nullptr);  - VALID
+//
+// Thus the above sentence that states:
+// The return pointer of the load must and can only be used to then further load values. And that is it.
+// must be respected by the programmer. This memory order is an optimization added for efficient read-heavy pointer-swapping data structures. If you are unsure, use memory_order_acquire.
+//
 // ******** Relaxed && eastl::atomic<T> guarantees ********
 //
 // We saw various ways that compiler barriers do not help us and that we need something more granular to make sure accesses are not mangled by the compiler to be considered atomic.
@@ -1586,7 +1626,7 @@
 // ----------------------------------------------------------------------------------------
 //
 // In this example it is entirely possible that we observe r0 = 1 && r1 = 0 even though we have source code causality and sequentially consistent operations.
-// Observability is tied to the atomic object on which the operation was performed and the thread fence doesn't synchronize-with the fetch_add because there is no
+// Observability is tied to the atomic object on which the operation was performed and the thread fence doesn't synchronize-with the fetch_add because
 // there is no load above the fence that reads the value from the fetch_add.
 //
 // ******** Sequential Consistency Semantics ********
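
Aside (not part of the diff): the read_depends documentation above shows only the reader side. A minimal sketch of the matching writer side it refers to ("pairs with a Release operation on the writer side"), reusing the MyStruct and gAtomicPtr names from the comments; the field values are illustrative:

    #include <EASTL/atomic.h>

    struct MyStruct { int a; int b; };

    static eastl::atomic<MyStruct*> gAtomicPtr{ nullptr };

    void Publish(MyStruct* p)
    {
        // Initialize the fields first...
        p->a = 1;
        p->b = 2;

        // ...then publish the pointer with a release store so the reader's
        // read_depends (or acquire) loads observe fully initialized a and b.
        gAtomicPtr.store(p, eastl::memory_order_release);
    }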

include/EASTL/bonus/adaptors.h

Lines changed: 1 addition & 0 deletions
@@ -13,6 +13,7 @@
 #include <EASTL/internal/config.h>
 #include <EASTL/internal/move_help.h>
 #include <EASTL/type_traits.h>
+#include <EASTL/iterator.h>

 #if defined(EA_PRAGMA_ONCE_SUPPORTED)
 	#pragma once // Some compilers (e.g. VC++) benefit significantly from using this. We've measured 3-4% build speed improvements in apps as a result.
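
Aside (not part of the diff): bonus/adaptors.h provides the eastl::reverse range adaptor, and the added <EASTL/iterator.h> include lets the header compile on its own instead of relying on another include to drag the iterator utilities in. A minimal usage sketch under that assumption; PrintBackwards is an illustrative name:

    #include <EASTL/bonus/adaptors.h>
    #include <EASTL/vector.h>
    #include <cstdio>

    void PrintBackwards(eastl::vector<int>& v)
    {
        // Iterate the container back to front via the reverse adaptor.
        for (int value : eastl::reverse(v))
            printf("%d\n", value);
    }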

include/EASTL/fixed_vector.h

Lines changed: 1 addition & 1 deletion
@@ -356,7 +356,7 @@ namespace eastl
 			get_allocator() = x.get_allocator(); // The primary effect of this is to copy the overflow allocator.
 		#endif

-			base_type::template DoAssign<move_iterator<iterator>, true>(make_move_iterator(x.begin()), make_move_iterator(x.end()), false_type()); // Shorter route.
+			base_type::template DoAssign<move_iterator<iterator>, true>(eastl::make_move_iterator(x.begin()), eastl::make_move_iterator(x.end()), false_type()); // Shorter route.
 		}
 		return *this;
 	}

include/EASTL/internal/atomic/atomic.h

Lines changed: 2 additions & 3 deletions
@@ -127,12 +127,12 @@ namespace internal
 	\
 	public: /* ctors */ \
 	\
-		atomic(type desired) EA_NOEXCEPT \
+		EA_CONSTEXPR atomic(type desired) EA_NOEXCEPT \
 			: Base{ desired } \
 		{ \
 		} \
 	\
-		atomic() EA_NOEXCEPT_IF(eastl::is_nothrow_default_constructible_v<type>) = default; \
+		EA_CONSTEXPR atomic() EA_NOEXCEPT_IF(eastl::is_nothrow_default_constructible_v<type>) = default; \
 	\
 	public: \
 	\
@@ -148,7 +148,6 @@ namespace internal
 }


-
 #define EASTL_ATOMIC_USING_ATOMIC_BASE(type) \
 	public: \
 	\
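
Aside (not part of the diff): the point of making these constructors EA_CONSTEXPR is that a namespace-scope eastl::atomic<T> becomes constant-initialized, i.e. its initial value is fixed at compile time instead of being assigned by a runtime static constructor. A minimal sketch; gNextId and NextId are illustrative names:

    #include <EASTL/atomic.h>

    // Constant-initialized thanks to the EA_CONSTEXPR constructor, so there is
    // no static-initialization-order dependency for the counter itself.
    static eastl::atomic<int> gNextId{ 1 };

    int NextId()
    {
        // Relaxed ordering suffices for a plain ID counter; only atomicity is needed.
        return gNextId.fetch_add(1, eastl::memory_order_relaxed);
    }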

include/EASTL/internal/atomic/atomic_base_width.h

Lines changed: 2 additions & 2 deletions
@@ -198,12 +198,12 @@ namespace internal
 	\
 	public: /* ctors */ \
 	\
-		atomic_base_width(T desired) EA_NOEXCEPT \
+		EA_CONSTEXPR atomic_base_width(T desired) EA_NOEXCEPT \
 			: Base{ desired } \
 		{ \
 		} \
 	\
-		atomic_base_width() EA_NOEXCEPT_IF(eastl::is_nothrow_default_constructible_v<T>) = default; \
+		EA_CONSTEXPR atomic_base_width() EA_NOEXCEPT_IF(eastl::is_nothrow_default_constructible_v<T>) = default; \
 	\
 		atomic_base_width(const atomic_base_width&) EA_NOEXCEPT = delete; \
 	\

include/EASTL/internal/atomic/atomic_casts.h

Lines changed: 1 addition & 1 deletion
@@ -126,7 +126,7 @@ EASTL_FORCE_INLINE Pun AtomicTypePunCast(const T& fromType) EA_NOEXCEPT
 	 * aligned_storage ensures we can TypePun objects that aren't trivially default constructible
 	 * but still trivially copyable.
 	 */
-	typename eastl::aligned_storage<sizeof(Pun), alignof(T)>::type ret;
+	typename eastl::aligned_storage<sizeof(Pun), alignof(Pun)>::type ret;
 	memcpy(eastl::addressof(ret), eastl::addressof(fromType), sizeof(Pun));
 	return reinterpret_cast<Pun&>(ret);
 }
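
Aside (not part of the diff): the fix matters because the scratch storage is read back through a Pun reference, so it must be aligned for Pun (the destination type), not for T (the source type). A standalone sketch of the same pattern, not the EASTL implementation:

    #include <cstring>

    // Generic type pun via memcpy: the storage is aligned for the *destination*
    // type because the result is produced by reinterpreting that storage as a Pun.
    template <typename Pun, typename T>
    Pun TypePunCast(const T& from)
    {
        static_assert(sizeof(Pun) <= sizeof(T), "would read past the source object");

        alignas(Pun) unsigned char storage[sizeof(Pun)];
        std::memcpy(storage, &from, sizeof(Pun));
        return *reinterpret_cast<Pun*>(storage);
    }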

include/EASTL/internal/atomic/atomic_flag.h

Lines changed: 2 additions & 2 deletions
@@ -22,12 +22,12 @@ class atomic_flag
 {
 public: /* ctors */

-	atomic_flag(bool desired)
+	EA_CONSTEXPR atomic_flag(bool desired) EA_NOEXCEPT
 		: mFlag{ desired }
 	{
 	}

-	atomic_flag() EA_NOEXCEPT
+	EA_CONSTEXPR atomic_flag() EA_NOEXCEPT
 		: mFlag{ false }
 	{
 	}
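
Aside (not part of the diff): with both constructors EA_CONSTEXPR, a namespace-scope eastl::atomic_flag is constant-initialized to false before any static constructor runs, which is what a simple spin lock wants. A minimal sketch assuming the std::atomic_flag-style test_and_set()/clear() members and that <EASTL/atomic.h> exposes eastl::atomic_flag; gLock and DoWork are illustrative names:

    #include <EASTL/atomic.h>

    static eastl::atomic_flag gLock;   // constant-initialized to false

    void DoWork()
    {
        // Acquire: spin until the previous holder clears the flag.
        while (gLock.test_and_set(eastl::memory_order_acquire))
        {
        }

        // ... critical section ...

        // Release: make the critical section's writes visible to the next acquirer.
        gLock.clear(eastl::memory_order_release);
    }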
