Heaps and Stacks
Memory allocation
Struct, Bool, Int - Stack
Class, function, closure - Heap
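As a rough illustration of this mapping (a sketch with hypothetical types - the optimizer can still promote or box values either way):

```swift
// Value type: a candidate for stack storage; its size is known at compile time.
struct Point {
    var x: Int
    var y: Int
}

// Reference type: the instance lives on the heap; only the reference is on the stack.
class Node {
    var value = 0
}

let point = Point(x: 1, y: 2) // stored inline
let node = Node()             // `node` holds a reference to a heap allocation

// The struct's layout is just its fields, with no object header.
let pointSize = MemoryLayout<Point>.size
```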

Stack Allocation

Stack allocation is a simple and fast way to allocate and deallocate memory using the stack.

Each "scope" in your app (like the inner contents of a method) will reserve the amount of memory it needs to run by moving the stack pointer forward by that amount, then execute - adding data to the empty memory addresses it now possesses. After the scope is lost, the stack pointer simply moves back by the same amount - safely deallocating all the scope's data. The cost of allocating/deallocating stack memory is literally the cost of reassigning an integer.

In stack allocation, the data gathered by the scope means everything attributed to it - method arguments, return values, and, more importantly, value types. As long as the size of the value type is known at compile time and it doesn't recursively contain or isn't contained by a reference type, it will not need reference counting, and its lifetime will be static - equal to the lifetime of its scope. It will be allocated entirely on the stack, and when the scope is deallocated, so is the value type. The absence of reference counting overhead and the presence of stack allocation is a considerable performance boost.

If the contents of your value type are other stack allocated, statically sized value types, then your value type is also statically sized. That means it will also leverage full stack allocation, along with a performance boost in copying operations.
Assigning a property to most value types will indeed create a full copy of the object. However, this copy-on-assignment behaviour for fully stack allocated value types is so fast and so cheap that Apple states it runs in constant time.

Heap Allocation

The stack is not meant to be used with objects that change in size, and the concept of pointers / dynamic lifetimes means that an object's life can have nothing to do with its scope - after all, it's possible to have an object existing in memory even when nothing is currently running. This is what the heap is for: dynamically allocated, user-managed memory.

When the process requests a certain amount of memory, the heap will search for a memory address that fulfills this request and return it to the process. When the memory is not being used anymore, the process must tell the heap to free that section of memory.

In iOS, "not being used anymore" works in the shape of reference counting, and luckily the existence of ARC means that most things are automatically handled for you unless you have to deal with the RawPointer family.
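A minimal sketch of that manual case (not from the original article): memory obtained through the raw pointer APIs is invisible to ARC and must be freed by hand.

```swift
// Allocate space for a single Int32 outside of ARC's control.
let buffer = UnsafeMutableRawPointer.allocate(
    byteCount: MemoryLayout<Int32>.stride,
    alignment: MemoryLayout<Int32>.alignment
)

buffer.storeBytes(of: 42, as: Int32.self)
let value = buffer.load(as: Int32.self) // 42

buffer.deallocate() // ARC will not do this for you
```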

Heap allocation is slower than stack allocation not just because of the more complex data structure - it also requires thread safety. Each thread has its own stack, but the heap is shared with everybody, demanding synchronization. In exchange, it allows reference types and things like dynamically sized arrays to exist.

It would be nice if memory management was as binary as saying that value types go to the stack and reference types go to the heap, but in reality, the life and performance of a value type is heavily defined by its contents.

Heap Allocated Value Types

If the size of your value type cannot be determined during compile time (because of a protocol/generic requirement), or if your value type recursively contains / is contained by a reference type (remember that closures are also reference types), then it will require heap allocation. This can range from not being an issue at all to making your struct perform exponentially worse than it would if it were a class instead.

Stack allocated value types are great because their lifetimes are directly related to their scope's lifetime, but if your value type is the child of a class, a reference is all it takes for it to outlive its scope. This situation is common in @escaping closures, and such a value type will lose its stack allocation properties in order to be fully heap allocated alongside the reference type. In a way, you could even say that this kind of value type is a reference type itself, as living in the heap means that several objects can point to it - even though it still possesses value semantics.

If your value type is instead the parent of a heap allocated class, then it will not be heap allocated itself, but it will inherit reference counting overhead in order to keep its inner references alive. This can cause a considerable drop in performance depending on the complexity of the value type.

In the Standard Library, examples of value types with child references are String, Array, Dictionary and Set. These value types contain internal reference types that manage the storage of elements in the heap, allowing them to increase/decrease in size as needed.

Since heap operations are more expensive than stack ones, copying heap allocated value types is not a constant operation like in stack allocated ones. To prevent this from hurting performance, the Standard Library's extensible data structures are copy-on-write.

With this capability, merely assigning the value will not copy it - instead, it will create a reference, just as if it were a regular reference type. The actual copying only happens when it's really necessary.
//Copy on Assignment
struct EmptyStruct {}

let emptyStruct = EmptyStruct() //address A
let copy = emptyStruct //address B

//Copy on Write
let array = [1,2,3] //address C
var notACopy = array //still address C
notACopy.append(4) //now address D
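One way to observe this behaviour (an illustrative sketch, not part of the original example) is to compare the arrays' storage addresses before and after the first write:

```swift
// Returns the address of the array's heap buffer, for comparison only.
func storageAddress(of array: [Int]) -> UnsafeRawPointer? {
    return array.withUnsafeBufferPointer { buffer in
        buffer.baseAddress.map { UnsafeRawPointer($0) }
    }
}

let original = [1, 2, 3]
var shared = original

// Plain assignment did not copy: both variables use the same buffer.
let sharesBuffer = storageAddress(of: shared) == storageAddress(of: original)

// The first mutation triggers the actual copy.
shared.append(4)
let copiedBuffer = storageAddress(of: shared) != storageAddress(of: original)
```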
Problematic Reference Counting in Value Types With Inner References
A fully stack allocated value type will not need reference counting, but a value type with inner references will unfortunately inherit this ability.

Since all reference types require reference counting, increasing the number of reference properties inside a class will not change the cost of this operation: merely increasing the reference count of the parent reference is enough to keep its inner references alive.

However, value types do not naturally have a reference count. If your value type contains inner references, copying it will require increasing the reference count of its children instead - not the first, not the second, but literally every single one of them.
import Foundation // for CFGetRetainCount

// Assuming definitions along these lines:
class EmptyClass {}

class ClassOfClasses {
    let emptyClass = EmptyClass()
    let emptyClass2 = EmptyClass()
    let emptyClass3 = EmptyClass()
}

struct StructOfClasses {
    let emptyClass = EmptyClass()
    let emptyClass2 = EmptyClass()
    let emptyClass3 = EmptyClass()
}

let classOfClasses = ClassOfClasses()
let reference = classOfClasses
let reference2 = classOfClasses
let reference3 = classOfClasses

CFGetRetainCount(classOfClasses) // 5
CFGetRetainCount(classOfClasses.emptyClass) // 2
CFGetRetainCount(classOfClasses.emptyClass2) // 2
CFGetRetainCount(classOfClasses.emptyClass3) // 2

let structOfClasses = StructOfClasses()
let copy = structOfClasses
let copy2 = structOfClasses
let copy3 = structOfClasses

// CFGetRetainCount(structOfClasses) // Doesn't compile; structs themselves don't have a reference count.
CFGetRetainCount(structOfClasses.emptyClass) // 5
CFGetRetainCount(structOfClasses.emptyClass2) // 5
CFGetRetainCount(structOfClasses.emptyClass3) // 5
Original source:
swiftrocks.com
Stack cache mode

The memory used by an application in iOS is not one uniformly allocated space. The address space used by running code is divided into three sections: the text segment, the stack segment, and the heap segment.

Text segment:

This is the memory segment that holds the application's code while it is running. Its contents are determined before the program runs (at compile time) and it is usually read-only. Instructions in the code area consist of an operation code and the object to be operated on (or an object address reference), and they are executed in sequence according to the program's flow. Every instruction, function, procedure, and method is stored in this memory segment until the application exits. In everyday work you rarely deal with it directly.

Stack:

When we create a value type, such as a structure, the system stores it in a memory area called the stack. When a function declares a variable, the variable is stored on the stack; when the function returns, the stack releases the variable automatically. The stack is therefore very easy to manage and very fast.

Heap:

When we create a reference type, such as a class, the system stores the instance in a memory area called the heap. The system uses the heap to store data that is referenced by other objects.

The heap is a large memory pool from which the system can dynamically allocate blocks on request. Unlike the stack, the heap does not release objects automatically; extra work is required, which makes creating and deleting data on the heap slower than on the stack.

Stack data also tends to sit in the CPU's first-level cache: it occupies storage only while the call is active and is released as soon as the call returns.

Heap data tends to live in the slower second-level cache, and its lifetime is determined by the runtime's memory management (in garbage-collected environments, orphaned objects are not necessarily reclaimed immediately), so accessing these objects is comparatively slow.

A pointer on the stack is just an integer variable that holds a specific memory address in the heap. In short, the operating system uses pointer values in the stack segment to access objects in the heap segment. If no pointer on the stack references a heap object, that object can no longer be accessed - this is also how memory leaks arise.
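In Swift this is easy to see (a sketch with hypothetical names): two references sitting on the stack hold the same heap address.

```swift
class HeapObject {} // hypothetical example class

let instance = HeapObject() // the reference itself is a stack-sized value
let another = instance      // a second reference to the same heap object

// Both references store the same heap address.
let addressA = Unmanaged.passUnretained(instance).toOpaque()
let addressB = Unmanaged.passUnretained(another).toOpaque()
let samePointer = (addressA == addressB)
```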

You can create data objects in both the stack and heap segments in iOS.
Stack objects have two advantages: they are fast to create, and they are simple to manage, with a strict life cycle. Their disadvantage is inflexibility: once created, a stack object is always owned by the function that created it, unlike a heap object, which can have multiple owners. Multiple ownership is equivalent to reference counting, and only heap objects are managed by the "reference count" method.
Heap and stack: data structure differences

Heap (data structure): a heap can be thought of as a tree, as used in heap sort, for example.

Stack (data structure): a first-in, last-out data structure - what goes in first comes out last.

What is the difference between heap and stack? The main differences are as follows:

1. Different management methods;

Management: the stack is managed automatically by the compiler, with no manual control; heap release is controlled by the programmer, which easily produces memory leaks.

2. The space size is different;

Space size: the stack is a small but fast memory area. Memory allocation on the stack follows the last-in-first-out principle; push and pop operations are implemented by moving the stack pointer. Our programs are composed of methods, and the CPU is responsible for scheduling and executing them. When the program enters a method, the memory the method needs is reserved on the stack by moving the stack pointer; after the method finishes executing, the pointer is moved back, which completes the release of that memory. As long as the remaining stack space is larger than what the program requests, the operating system provides it; otherwise, an exception is raised to report a stack overflow.

The heap is another, much larger region of memory, but it is slower than the stack. The heap can allocate memory dynamically at run time, making up for the stack's limitations. On a 32-bit system, heap memory can in principle reach 4 GB, so from this point of view it is almost unlimited.

The operating system manages the heap segment with a linked list that records the addresses of free memory blocks. When it receives a request from a program, it traverses this list, finds the first node whose space is larger than the requested size, removes that node from the free list, and allocates the node's space to the program. iOS uses a technology called ARC (Automatic Reference Counting). In a multithreaded environment, multiple threads share the memory on the heap; to guarantee thread safety, the heap has to be locked, and locking is expensive. The data safety you get on the heap is obtained at the cost of performance.

3. Whether fragments can be produced is different;

Fragmentation: for the heap, frequent new/delete operations inevitably make memory space discontinuous, producing a large number of fragments and reducing program efficiency. The stack has no such problem, because it is a first-in, last-out structure: pushes and pops correspond one-to-one, so a memory block can never be freed from the middle of the stack - before a block is popped, everything pushed after it has already been popped.

4. The growth direction is different;

Growth direction: the heap grows upward, toward increasing memory addresses; the stack grows downward, toward decreasing memory addresses.

5. The distribution mode is different;

Allocation method: the heap is always dynamically allocated; there is no statically allocated heap. The stack supports two kinds of allocation: static and dynamic. Static allocation is done by the compiler, as with local variables; dynamic allocation is performed by the alloca function. Unlike heap allocation, dynamically allocated stack memory is released by the compiler and needs no manual management.
Data structure implementation
Stack

A stack gives you a LIFO or last-in first-out order. The element you pushed last is the first one to come off with the next pop. (A very similar data structure, the queue, is FIFO or first-in first-out.)

Notice that a push puts the new element at the end of the array, not the beginning. Inserting at the beginning of an array is expensive, an O(n) operation, because it requires all existing array elements to be shifted in memory. Adding at the end is O(1); it always takes the same amount of time, regardless of the size of the array.

Fun fact about stacks: Each time you call a function or a method, the CPU places the return address on a stack. When the function ends, the CPU uses that return address to jump back to the caller. That's why if you call too many functions - for example in a recursive function that never ends - you get a so-called "stack overflow" as the CPU stack has run out of space.

public struct Stack<T> {

  fileprivate var array = [T]()

  public var isEmpty: Bool {
    return array.isEmpty
  }

  public var count: Int {
    return array.count
  }

  public mutating func push(_ element: T) {
    array.append(element)
  }

  public mutating func pop() -> T? {
    return array.popLast()
  }

  public var top: T? {
    return array.last
  }
}
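A short usage example (the Stack<T> definition is repeated here so the snippet stands alone):

```swift
public struct Stack<T> {
    fileprivate var array = [T]()

    public var isEmpty: Bool { return array.isEmpty }
    public var count: Int { return array.count }
    public var top: T? { return array.last }

    public mutating func push(_ element: T) {
        array.append(element)
    }

    public mutating func pop() -> T? {
        return array.popLast()
    }
}

var stack = Stack<Int>()
stack.push(10)
stack.push(20)

let top = stack.top         // Optional(20): the last element pushed
let popped = stack.pop()    // Optional(20): LIFO order
let remaining = stack.count // 1
```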
Heap

There are two kinds of heaps: a max-heap and a min-heap, which differ in the order in which they store their tree nodes.

In a max-heap, parent nodes have a greater value than each of their children. In a min-heap, every parent node has a smaller value than its child nodes. This is called the "heap property", and it is true for every single node in the tree.

As a result of this heap property, a max-heap always stores its largest item at the root of the tree. For a min-heap, the root is always the smallest item in the tree. The heap property is useful because heaps are often used as a priority queue to access the "most important" element quickly.

Note: The root of the heap holds the maximum or minimum element, but the sort order of the other elements is not predictable. For example, the maximum element is always at index 0 in a max-heap, but the minimum element isn't necessarily the last one - the only guarantee is that it is one of the leaf nodes, though not which one.
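To complement the Stack implementation above, here is a minimal array-backed min-heap sketch (an illustration, not a production implementation): the parent of node i sits at (i - 1) / 2 and its children at 2i + 1 and 2i + 2.

```swift
struct MinHeap<T: Comparable> {
    private var nodes = [T]()

    var isEmpty: Bool { return nodes.isEmpty }
    var peek: T? { return nodes.first } // the smallest element

    mutating func insert(_ value: T) {
        nodes.append(value)
        siftUp(from: nodes.count - 1)
    }

    mutating func remove() -> T? {
        guard !nodes.isEmpty else { return nil }
        nodes.swapAt(0, nodes.count - 1)
        let value = nodes.removeLast()
        if !nodes.isEmpty { siftDown(from: 0) }
        return value
    }

    // Bubble a new element up until its parent is no larger.
    private mutating func siftUp(from index: Int) {
        var child = index
        while child > 0 {
            let parent = (child - 1) / 2
            guard nodes[child] < nodes[parent] else { return }
            nodes.swapAt(child, parent)
            child = parent
        }
    }

    // Push a misplaced root down to restore the heap property.
    private mutating func siftDown(from index: Int) {
        var parent = index
        while true {
            let left = 2 * parent + 1
            let right = left + 1
            var candidate = parent
            if left < nodes.count && nodes[left] < nodes[candidate] { candidate = left }
            if right < nodes.count && nodes[right] < nodes[candidate] { candidate = right }
            if candidate == parent { return }
            nodes.swapAt(parent, candidate)
            parent = candidate
        }
    }
}

var heap = MinHeap<Int>()
for value in [5, 1, 3] { heap.insert(value) }

let smallest = heap.peek // the root is the minimum
let drained = [heap.remove(), heap.remove(), heap.remove()].compactMap { $0 }
```

Because removal always returns the root and then restores the heap property, draining the heap yields the elements in ascending order - which is exactly the priority-queue behaviour described above.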

Original source:
github.com