The Lumiera developers have a distinct vision about the core of a modern NLE. This document outlines some of the basic technical concepts and highlights the fundamental design decisions. New developers may find this document useful to get an idea of how the different components work together.
Lumiera consists of a broad range of subsystems. These are roughly grouped into three layers plus some extra components. This structure is mostly mirrored in the source directory layout.
These three layers are:
Interaction and presentation. User interfaces are implemented as plug-ins, and while most commonly one will see the GTK UI, it would be possible to construct an entirely different user interface, a CLI interface, or even operate the application “headless”, script driven.
Keeps the Session, generates the rendering graphs for sequences, arranges what to do when and how.
Manages worker pools, schedules jobs, does the memory management for the heavy multimedia data. Loads and delegates to external libraries for media processing.
The extra components are:
The main program itself, which basically acts only as a loader to pull up the rest.
Vital components and common services which must be available for pulling up any part of the system.
A lot of (largely) stateless helper functionality used throughout the code.
The Lumiera team agreed on using GNU coding style with slight adaptations. Generally speaking, we strive to keep the code base consistent and stick to widely accepted guidelines and best practices. See our separate Coding Guidelines page. Function naming conventions and other details are also described in several RFCs.
The central location for all design and technical documentation is the Lumiera website you’re reading right now. Besides that, a summary and introduction for various components can be found in the file-level doxygen comments, while details are usually explained in the class and function level comments.
Currently, Lumiera is still in the design and evolution phase. Documentation
is written as an ongoing effort.
There is an embedded JavaScript wiki (TiddlyWiki) within the source tree, mostly
used as design notebook, featuring day-to-day design sketches and notes, but also
quite some more persistent planning. Finished documentation text is constantly
moved over to the documentation section(s) of the Lumiera website.
→ access the development TiddlyWiki online here
We strive to use Test-Driven Development: tests are written first, defining the specification of the entity being tested. Then things get implemented until they pass their tests. While it increases the initial development effort significantly, this approach is known to lead to clearly defined components and to increase overall code quality. In practice, this approach might not be suitable at times; nevertheless we try to stick to it as far as possible and maintain fairly complete test coverage.
Lumiera is developed in a collaborative style. Releases are done “when ready”. We (the core developers) are always open to comments, new ideas and proposals. In fact, there is even a world-pushable “mob” Git repository on our server: minor contributions can be pushed there right away, and we’ll notice any changes. It might be a good idea to discuss larger changes prior to jumping into the code, though. Our mailing list is the main information hub. Users and developers — everyone is welcome.
Lumiera should be usable on any contemporary Linux (or similar *NIX) system. Porting to other platforms is fine with us, but requires Someone™ to volunteer. For development and dependency management, we try to stick close to what Debian/stable provides. Whenever we need special or more recent libraries, we create a backport and publish a DEB package for use on Debian/stable. We also run a series of Debian repositories, and we try to build a Lumiera package for an array of Debian-like distributions (Ubuntu, Mint). Through these ongoing efforts, we keep up with the evolution of platform, compilers, language and libraries, and we maintain our ability to release, even while there are no public releases planned for the foreseeable future.
Generally speaking, the application is composed of several self-contained subsystems, which may depend on each other. Dependencies between components are abstracted through interfaces. Based on the current configuration, the application framework brings up the necessary subsystems and finally conducts a clean shutdown. Beyond that, the application framework remains passive, yet provides vital services commonly used.
While the core components are linked into a coherent application and may utilise each other’s services directly, based on C/C++ language facilities, several important parts of the application are loaded as plug-ins, starting with the GUI.
The purpose of the user interface(s) is to act on the high-level data model contained within the Session, which belongs to the steam layer below. User interfaces are implemented as plug-ins and are pulled up on demand; they don’t contain any relevant persistent state beyond presentation.
As of 2018, the one and only interface under active development is the Lumiera GTK GUI, based on GTK-3 / gtkmm. The sources are in-tree (directory src/stage), and the GUI is integrated into the standard build and installation process. By default, running the lumiera executable will load and start this GUI as a Lumiera module from modules/gtk_gui.lum.
tbw
tbw
tbw
tbw
tbw
tbw
tbw
tbw
tbw
tbw
tbw
tbw
tbw
Within Lumiera, »Player« is the name for a subsystem responsible for organising and tracking
ongoing playback and render processes. The player subsystem does not perform or even manage
any render operations, nor does it handle the outputs directly.
Yet it addresses some central concerns:
All playback and render processes are on equal footing, handled in a similar way.
The player cares for the necessary integration with the other subsystems: it consults the Output Management, retrieves the necessary information from the Session and coordinates the forwarding of Vault-Layer calls.
The player translates continuous time values into discrete frame counts.
To perform this quantisation, the player requires help from the session to build a TimeGrid for each output channel.
Client code accesses the player (subsystem) through the play facade (lumiera::Play).
The exposed service allows setting up an output connection for playback or rendering, resulting in a play-controller object.
This controller frontend represents the presence of such an active output connection and incorporates a state machine supporting the usual things you’d expect to do with a player (Play, pause, FFwd, Rew, scrubbing, jumping, looping). This controller object is a copyable smart-handle — all instances act as if wired in parallel.
The play-controller frontend makes heavy use of time::Control. This is a mediator
to accept and forward mutations on time values and time ranges, possibly involving
frame quantisation. After attaching it to a target time value, it accepts changes,
offsets and nudging, translates these into the appropriate target modifications
and notifies any attached change listeners.
Ongoing effort to calculate a stream of frames for playback or rendering.
The play process is a conceptual entity linking together several activities in the vault
and the render engine. It maintains a registration entry for the process, to keep track of
associated entities, resources allocated and calls dispatched as a consequence. Besides,
each play process is wired to at least one play-controller acting as frontend interface
and information hub for the client code.
The player is in no way engaged in any of the actual calculation and management tasks necessary to make the stream of calculations actually happen. The play process code contained within the player subsystem largely deals with organisational concerns and is not especially performance critical.
the vault is responsible for dispatching the calculation stream and scheduling calculation jobs
the render engine has the ability to carry out individual frame calculations
the OutputSlot exposed by the output manager is responsible for accepting timed frame delivery
kept in an MRU cache, reused round-robin
Lumiera has its own abstract file handles, which store the state and name of a file. The associated OS-level file handle doesn’t need to be kept open and will be reopened on demand. Hard-linked files are recognised and opened only once.
All file access is done by memory mapping, to reduce data copies between userland and kernel. Moreover, the kernel becomes responsible for scheduling the paging (which will be augmented by Lumiera) to make the best use of available resources. Memory is mapped in larger windows of reasonably sized chunks, possibly overlapping each other. Clients may request a specific continuous range of file data to be accessible as a memory block.
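To illustrate the windowing idea in plain POSIX terms, here is a minimal sketch; the function and its parameters are invented for illustration and do not show the actual vault implementation:

    // Illustrative only: map a read-only window covering `len` bytes at `offset`.
    // POSIX requires the mapping offset to be page aligned, so the window is
    // widened to the enclosing page boundary; a real implementation would keep
    // the base address and span for the later munmap().
    #include <sys/mman.h>
    #include <sys/types.h>
    #include <unistd.h>
    #include <cstddef>
    
    void*
    mapWindow (int fd, off_t offset, size_t len)
    {
      long page = sysconf (_SC_PAGESIZE);
      off_t aligned = offset - (offset % page);
      size_t span = len + size_t(offset - aligned);
      void* base = mmap (nullptr, span, PROT_READ, MAP_SHARED, fd, aligned);
      if (base == MAP_FAILED)
        return nullptr;
      return static_cast<char*> (base) + (offset - aligned);  // the block the client asked for
    }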
Manages several classes of threads in pools. The thread pool is reasonably dumb; higher-level management is done by the schedulers and jobs.
While in itself just a thin interface and adaptation layer forwarding calls to the primary vault facilities, the Engine Interface is the primary point of service accessed by the Steam-Layer to use the vault-layer services for rendering content.
The Engine Interface is cast in terms of a calculation stream entity. This is a stream of expected and ongoing frame calculations for multiple channels, to be managed as a compound. The calculated frames will be delivered into an output slot right away. No assumptions are made regarding the ordering of these individual calculations — they may be performed in parallel, constrained solely by input and resource prerequisites.
For the actual processing, calculation streams need to be translated into individual calculation jobs to be scheduled. For each uniform segment of the effective timeline, the typical recursive descent call characteristic for pull processing results in a Job Ticket.
This structural descriptor of the actual calculations to be performed is the base for creating individual jobs: Already specialised for a distinct segment of the effective timeline and tailored for the needs of a given calculation stream, the job ticket acts as blueprint for the actual jobs to be enqueued with the Scheduler.
Scheduling Queues for different purposes:
Higher-priority jobs, ordered by their deadline time plus some (negative) hysteresis. Jobs are started when they approach their deadline; jobs which have missed their deadline are never scheduled here.
Background jobs scheduled by priority and timeout.
Timer-driven queue which starts jobs at defined absolute times. The timer might also be an external synchronisation entity.
A job can be part of multiple queues; the queue which picks it first runs it. When another queue hits a job already running, it either just drops it or promotes its priority (to be decided).
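As an illustration of the deadline queue, the following self-contained model orders jobs by deadline and starts them only within a hysteresis window; all names are invented, this is not the actual scheduler code:

    #include <chrono>
    #include <functional>
    #include <queue>
    #include <vector>
    
    using Clock = std::chrono::steady_clock;
    
    struct Job
    {
      Clock::time_point deadline;
      std::function<void()> work;
    };
    
    struct ByDeadline   // earliest deadline ends up on top of the priority queue
    {
      bool operator() (Job const& a, Job const& b) const
        { return a.deadline > b.deadline; }
    };
    
    using DeadlineQueue = std::priority_queue<Job, std::vector<Job>, ByDeadline>;
    
    void
    runDueJobs (DeadlineQueue& queue, Clock::duration hysteresis)
    {
      auto now = Clock::now();
      while (!queue.empty() && queue.top().deadline <= now + hysteresis)
        {
          Job job = queue.top();
          queue.pop();
          if (now <= job.deadline)
            job.work();   // started when approaching the deadline
          // a job past its deadline is dropped, never started
        }
    }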
Running Lumiera requires a lot of different resources, such as CPU time, threads, I/O bandwidth, memory, address space and so on. Many of these resources are rather hard-limited, and the system will return errors when these limits are hit; but often one does not even reach these hard limits, because performance degrades way before coming into their realm. The goal for Lumiera is to find a sweet spot for operating with optimal performance. Thus we have some facilities to monitor and adjust resource usage, adapting to the system and the current circumstances.
Collects statistics about resource load and helps to decide if job constraints can be fulfilled.
Things to watch:
CPU utilisation
memory usage (swapping, paging)
I/O load, latency
Resources need to be distributed among a lot of subsystems and jobs. Each of these components can become part of a budgeting system, which accounts resource usage and helps to distribute it. Resource usage is managed only voluntarily.
Handles system errors related to resource shortage. There are several classes of resources defined, and other subsystems can hook in functions to free resources. Multiple policies govern how aggressively resources should be freed.
If no one cares, it does a final abort(), so all subsystems had better hook their recovery in here!
Conceptually, the application is structured into layers. However, as actual runtime entities we rely on a small number of Subsystems, which can be started and stopped and have well defined dependencies:
Engine
Session and Builder
Play and Render Subsystem
Lumiera GUI
Script runner (planned)
Renderfarm node (planned)
Lumiera’s main() function defines an Application Object, which in turn operates the Subsystem Runner. Each subsystem implements a descriptor with the necessary hooks (callbacks) to be started and stopped when necessary.
Beyond parsing the commandline and triggering start of the subsystems, the Application Object also issues Lifecycle Events. Individual components can register a callback to be invoked when this happens. Currently (2018) the following events are known (see include/lifecycle.h):
ON_BASIC_INIT
ON_GLOBAL_INIT
ON_GLOBAL_SHUTDOWN
ON_EMERGENCY_EXIT
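For illustration, a component might enrol a callback through the LifecycleHook registration mechanism from include/lifecycle.h, roughly as in the following sketch (treat the exact signature as an assumption):

    #include "include/lifecycle.h"
    
    namespace {
      void
      initialiseMyComponent()   // runs once when ON_GLOBAL_INIT is triggered
      {
        // ...bring up component state...
      }
      
      // the constructor of this static object performs the registration at load time
      lumiera::LifecycleHook schedule_ (lumiera::ON_GLOBAL_INIT, &initialiseMyComponent);
    }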
→ see here for an explanation in detail (somewhat preliminary and messy, yet still valid as of 2018)
→ more documentation (likewise incomplete, but we’re getting there eventually; 2018)
tbw
tbw
This is a “whiteboard” system, allowing implementation-level services to publish some piece of information as advice, while other parts of the system may pick up this advice just by a name token, without requiring direct knowledge of the original advisor. The Advice System is a singleton service maintaining a lookup and registration data structure. Individual advice elements are stored as value copies. Publishing new advice requires locking, but accessing advice is lock-free (actually, there needs to be a memory barrier “somewhere”, otherwise the advice-requesting client might not see new advice).
Advice is organised into categories, based on the type of the advice item and some additional symbolic identifiers. These are syntactically represented similar to the atoms of a rule-based system (“Prolog syntax”). Currently (2010) only ground terms (completely static symbols) are supported, but the intention is to extend the system to allow for variables in these terms. This will turn the matching of advice provisions and requests into a unification, allowing the advice item to be parametrised.
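Hypothetically, usage might look about as follows; the Provision/Request collaborators mirror the description above, while the header location and member names are assumptions:

    #include "lib/advice.hpp"
    
    using lib::advice::Provision;
    using lib::advice::Request;
    
    void
    advisor()   // some implementation-level service
    {
      Provision<int> provision ("solution(special)");  // binding term, “Prolog syntax”
      provision.setAdvice (42);                        // stored as value copy
    }
    
    void
    advised()   // an unrelated part of the system
    {
      Request<int> request ("solution(special)");      // matches by term, not by advisor
      int value = request.getAdvice();                 // lock-free pick-up
    }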
tbw
tbw
tbw
The Lumiera support library contains lots of helper functionality factored out from the internals and re-used. It is extended as we go.
The header lib/util.hpp defines some shortcuts heavily used throughout the code base. The idea is to highlight a common semantic meaning, while hiding differentiation on the technical details.
indicate a missing value, irrespective of whether this is a NULL pointer, an empty string or an empty container. Several of our custom wrappers also support this notion.
indicate that a value is somehow contained within a collection, irrespective of whether this is a set, a map, a vector or a string (substring test)
make any string usable as an identifier.
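For illustration, usage of these shortcuts might look as follows, assuming the conventional names util::isnil and util::contains (details may differ):

    #include "lib/util.hpp"
    #include <string>
    #include <vector>
    
    void
    example()
    {
      std::vector<int> vec {1,2,3};
      std::string name {"some clip"};
      
      bool missing = util::isnil (vec);              // false: container holds elements
      bool hasTwo  = util::contains (vec, 2);        // element search in a container
      bool hasClip = util::contains (name, "clip");  // substring test on a string
    }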
In a similar vein, the header lib/util-foreach.hpp provides a generic “for-each-element” mechanism, which works for all STL containers, but also for all Lumiera Forward Iterators. The loop body is provided as a functor. In case this functor is a predicate (boolean result), the and_all and has_any functions allow testing for conjunction and disjunction.
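A brief usage sketch (argument order is an assumption):

    #include "lib/util-foreach.hpp"
    #include <vector>
    
    void
    example()
    {
      std::vector<int> vals {2,4,6};
      
      util::for_each (vals, [](int v){ /* ...loop body as functor... */ });
      
      bool allEven = util::and_all (vals, [](int v){ return v % 2 == 0; });  // conjunction
      bool anyBig  = util::has_any (vals, [](int v){ return v > 5; });       // disjunction
    }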
General-purpose locking is based on object monitors. Performance-critical code in the Vault uses mutexes, condition variables and rwlocks directly. Intentionally, there are no semaphores.
C++ locks are managed by scoped automatic variables
C code uses macros to wrap critical sections
Time values are represented by a family of opaque data types with overloaded operators. The implementation is based on gavl_time_t, an integral (µsec) time tick value. Thus, the arithmetic on time values and time spans is limited, and any time handling and conversion is centralised in library routines.
We distinguish between time values and a quantisation into a frame or sample grid. In any case, quantisation has to be done once, explicitly and as late as possible. See the Time handling RfC.
The Lumiera library defines several flavours of time values. All of these internal time values have in common that they are opaque and not directly related to any human readable or external (wall clock) time. Moreover, most of these time values are immutable, yet there are two mechanisms to allow for changing time values (TimeVar and Mutation).
Special flavours of these time values additionally carry the reference to a (frame) alignment grid, while being time value entities in all other respects. But only those quantised time values expose functions to convert the internal opaque time into a human-readable or externally relevant time format — including SMPTE or frame counts.
Thus, any usage of time values is forced to refer to such a time alignment
grid explicitly, at least when leaving the realm of the internal opaque
time values. This is the point where the time quantisation is actually
performed, imposing some information loss (as any rounding operation does).
A time alignment grid is exactly that: a set of functions to perform this lossy conversion. Implicitly this involves the definition of a time origin (the reference point where the external time is zero), and typically this also includes the definition of a frame rate (but in the most general case, this frame rate might be variable and change at various places on the time axis). Consequently, all time grids are Assets and defined as part of the concrete session.
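To make this flow concrete, the following sketch goes from an opaque time value to a grid-aligned frame number; the type names follow the time library described here, but the constructors, headers and the grid ID are assumptions:

    #include "lib/time/timevalue.hpp"   // headers assumed
    #include "lib/time/timequant.hpp"
    #include "lib/time/timecode.hpp"
    
    using namespace lib::time;
    
    void
    example()
    {
      Time   raw (500, 1);               // opaque internal time (assumed: 500ms + 1sec)
      QuTime quant (raw, "grid.PAL25");  // attach a session-defined frame grid by ID
      
      FrameNr frame (quant);             // only here the lossy grid alignment happens
    }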
Historically, Time Code was seen as the foundation of any film editing. Similarly, the first generation of editing systems used Time Code as a foundation. Today, we consider such a design as questionable.
Lumiera takes a different approach here: time code is reduced to a mere mode of presentation, i.e. a formatting of existing time values. It is void of any substantial meaning. To the contrary, the operation of frame quantisation (see above) is considered fundamental, causing an irreversible loss of information. The design of time handling chosen within Lumiera forces you to decide on a specific time grid before being able to format an internal (opaque) time value in any kind of time code. And only as late as when actually retrieving this time-code-formatted value does the actual quantisation (grid alignment) happen.
In practice, establishing a time grid requires knowledge about the output format. Thus, a (sufficiently resolved) output designation is required to perform any grid alignment and time code formatting. Typically this happens when a timeline or similar part of the High-Level-Model is connected to a concrete output or a global bus already defining a frame rate. The model contents as such are frame-rate independent.
The following time code formats are supported, both for programmatic access and display within the GUI:
frame count
SMPTE
SMPTE drop-frame (TODO as of 2011)
hours:mins:secs (TODO as of 2011)
fractional seconds (TODO as of 2011)
musical bars:beats (TODO as of 2011)
As a corollary, since any rendering is based on frame numbers, it requires an output connection or something similar to establish a frame grid.
As a Rule, Exceptions + RAII are to be preferred over error codes and manual cleanup. At external interfaces we rely on error states though.
Exceptions can happen everywhere and any time
Exceptions and Errors shall only be dealt with at locations where it’s actually possible to handle them (the so called “fail not repair” rule)
API functions are categorised by error safety guarantee
EX_FREE functions won’t raise an exception or set an error state, unless the runtime system is corrupted.
EX_STRONG functions are known to have no tangible effect in case of raising an exception / error state (transactional behaviour).
EX_SANE functions might leave a partial change, but care to leave any involved objects in a sane state.
Raising an Exception creates an error state — error states can also be set directly per thread.
Error states are identified by pointers to static strings.
Error states are thread local and sticky (a new state can’t be set unless a pending state got cleared).
Typically, when an error situation is detected, the error will be categorised by throwing the appropriate exception. Exceptions provide a mechanism to attach the root cause. The classification happens according to the relevance for the application as a whole — not according to the cause.
lumiera::Error : root of the error hierarchy; extends std::exception
error::Logic : contradiction to internal logic assumptions detected
error::Fatal (⤷ Logic) : unable to cope with, internal logic floundered
error::Config : execution aborted due to misconfiguration
error::State : unforeseen internal state (usually causes component restart)
error::Flag (⤷ State) : non-cleared error state from C code
error::Invalid : invalid input or parameters encountered
error::External : failure in external service the application relies on
error::Assertion : assertion failure
Please be sure to clear the error state whenever catching and handling an exception.
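A sketch of this convention; we assume that lumiera_error() fetches and thereby clears the pending thread-local error state, and that the exception hierarchy is available through include/error.hpp:

    #include "include/error.hpp"   // header location assumed
    
    int
    checkedFramerate (int rate)
    {
      if (rate <= 0)
        throw lumiera::error::Invalid ("frame rate must be positive");
      return rate;
    }
    
    void
    client()
    {
      try
        {
          checkedFramerate (0);
        }
      catch (lumiera::error::Invalid&)
        {
          lumiera_error();   // fetch and thereby clear the sticky thread-local error state
        }
    }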
Error states get declared in headers with the LUMIERA_ERROR_DECLARE(err) macro. A matching definition needs to reside in some translation unit, using the LUMIERA_ERROR_DEFINE(err, msg) macro. There is no central registry; any component can introduce its own error codes, but must ensure that the error identifier is unique.
There are two helper macro forms for setting up error conditions: one is LUMIERA_ERROR_SET..(flag, err, extra) and the other is LUMIERA_ERROR_GOTO..(flag, err, extra), each for different logging levels. The SET form just logs an error and sets it; the GOTO form also jumps to an error handler. Both take a NoBug flag used for logging and an optional extra C-string.
    const char*
    mayfail ()
    {
      const char* ret = foo ();
      if (!ret)
        LUMIERA_ERROR_GOTO (flag, FOO_FAILED, 0);
    
      if (!bar (ret))
        LUMIERA_ERROR_GOTO (flag, BAR_FAILED, 1,
                            lumiera_tmpbuf_snprintf (256, "foo was %s", ret));
    
      return "everything ok";
    
      /* cleanup in reverse setup order */
     BAR_FAILED1:
      lumiera_free (ret);
     FOO_FAILED0:
      return NULL;
    }
Deliberately, Lumiera stays below the level of utilising a dependency injection framework. Consequently, we access most services by type name, pulling up a single implementation instance on demand. Yet some aspects of the Singleton Pattern are known to be problematic. Rather than placing this singleton lifecycle logic into the individual implementation classes, we use a singleton factory, to manage the instance creation and lifecycle of the singleton or service instance separate from the singleton class itself. Singleton initialisation happens lazily with this approach, and thus needs to be protected in a multithreaded environment; we use Double Checked Locking with std::atomic for this purpose.
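The following stand-alone sketch shows such a double-checked-locking singleton factory; it is a simplified illustration, not the actual Dependency-Factory code:

    #include <atomic>
    #include <mutex>
    
    template<class SVC>
    class SingletonFactory
    {
      std::atomic<SVC*> instance_{nullptr};
      std::mutex creationLock_;
      
    public:
      SVC&
      operator() ()   // lazy on-demand creation, safe under concurrency
      {
        SVC* obj = instance_.load (std::memory_order_acquire);
        if (!obj)
          {
            std::lock_guard<std::mutex> guard (creationLock_);
            obj = instance_.load (std::memory_order_relaxed);
            if (!obj)   // re-check under the lock
              {
                obj = new SVC{};
                instance_.store (obj, std::memory_order_release);
              }
          }
        return *obj;
      }
    };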
Singletons need to be written in a way to cope with that somewhat nondeterministic environment.
They will be created on-demand, and be destroyed eventually, in the application shutdown phase.
This results in the general policy that within Lumiera, performing any “business” code in the application shutdown phase (after exiting main()) is strictly prohibited. Generally speaking, destructors must not perform any significant work and are expected to be failsafe.
By convention, when clients are expected to actively access the singleton instance, the interface class holds the singleton factory as a public static member with the name instance. This allows clients to write SomeService::instance() to get a reference to the implementation.
Actually, what appears as a mere “singleton” here is a form of Dependency Injection, and can easily be extended to cover other forms of dependencies and other types of lifecycle. Our experience indicates that this is rarely necessary, but when it happens, it’s essential. Lumiera’s Dependency-Factory thus offers a configuration mechanism, which allows switching those modes of operation for individual types; the client consuming the “singleton” remains unaware of such special setup.
Moreover, the Dependency-Factory also offers a mechanism to “push aside” the existing singleton or service instance and shadow it temporarily with a mock implementation. Again, it turned out that such mocking mechanisms are used only occasionally and are rarely required. Likely this is a result of Lumiera being developed test-driven — things are being written mostly in a unit-test-friendly way.
→ more about dependency handling
tbw
tbw
Iterators serve to decouple a collection of elements from the actual data type implementation used to manage those elements. The use of iterators is a design pattern. → see detailed library documentation
Within Lumiera, we don’t treat Iterator as a base class — we treat it as a concept
for generic programming, similar to the usage in the STL. But we use our own definition
of the iterator concept, placing the primary focus on interfaces and decoupling.
Our “Lumiera Forward Iterator” concept deliberately removes most of the features known from the STL. Rather, such an iterator is just the promise to pull values once. The iterator can be disposed of when exhausted — there is no way of resetting, moving backwards or doing any kind of arithmetic with such an object. The exhausted state can be detected by a bool conversion (contrast this with STL iterators, where you need to compare to an end iterator). Beyond that, the usage is quite similar, even compatible with std::for_each.
We provide a collection of pre-defined adapter templates to ease building Lumiera Forward Iterators:
a generic solution using an iteration-control callback API
the lib::RangeIter just wraps a pair of iterators for “current position” and “end” — compatible with the STL
there is a variant for automatically dereferencing pointers
plus a set of adapters for STL containers, allowing to expose each value, each key, distinct values and so on.
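For instance, wrapping a pair of STL iterators with the RangeIter adapter mentioned above (header name and template signature assumed):

    #include "lib/iter-adapter.hpp"
    #include <vector>
    
    int
    sum (std::vector<int>& data)
    {
      lib::RangeIter<std::vector<int>::iterator> ii (data.begin(), data.end());
      
      int total = 0;
      for ( ; ii; ++ii )   // bool conversion: true while not exhausted
        total += *ii;
      return total;
    }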
Iterator Adapters are designed for ease of use, they don’t conceal the underlying implementation (and the actual type is often quite convoluted).
To the contrary, the lib::IterSource<TY> template is an abstract base class. This allows exposing the promise to deliver values through any kind of API, without disclosing the actual implementation. Obviously, this requires the use of virtual functions for the actual iteration.
Again, there are pre-defined adaptors for STL containers, but the actual container is concealed in this case.
Iterators can be used to build pipelines. This technique from functional programming allows abstracting away the actual iteration completely, focussing only on the way individual elements are processed. To support this programming style, several support templates are provided to build filtering iterators and transforming iterators, to pick only unique values, to take a snapshot on-the-fly, etc. There are convenience builder functions for those operations, figuring out the actual source and destination types by template metaprogramming.
As an extension to the iterator framework, the pipeline builder is able to wrap arbitrary source iterators, decorate them with filters and transformations provided as lambdas, and finally package it all together into a self-contained new iterator (value object). This is a rather new feature (as of 2017) and we still need to determine its viability, yet it proved helpful when implementing certain kinds of search algorithms on top of an abstracted data source exposed as iterator. → lib/iter-tree-explorer.hpp
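A sketch of building such a pipeline, in the builder style of lib/iter-tree-explorer.hpp; the builder function and method names shown are assumptions:

    #include "lib/iter-tree-explorer.hpp"
    #include <vector>
    
    int
    example (std::vector<int>& nums)
    {
      auto pipeline = lib::treeExplore (nums)
                          .filter   ([](int v){ return v % 2; })    // keep odd values
                          .transform([](int v){ return v * 10; });  // then scale them
      
      int total = 0;
      for ( ; pipeline; ++pipeline )   // the result is again a Lumiera Forward Iterator
        total += *pipeline;
      return total;
    }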
Formatting values with printf is a notorious source of intricate errors. Additionally, using (s|n)printf can be clunky in practice, and it doesn’t support custom-defined string conversions, which are an important diagnostic aid when working with objects. We might use Boost, which provides the boost::format library to address those problems. Unfortunately, including this header-only library incurs a significant overhead, both in terms of compilation time and code size. While maybe still acceptable at the implementation level, using boost::format is thus certainly a “no go” for any code residing in frequently included headers.
To work around these problems, we provide a front-end wrapper, defined in lib/format-string.hpp. This allows keeping the actual boost::format based implementation confined to a single translation unit, while still being able to use all primitive types as usual with boost::format or printf. Additionally, our front-end automatically invokes a custom or built-in string conversion if applicable, dereferences pointers, and catches all errors encountered while formatting. So it’s well suited for use in error-handling code.
    #include "lib/format-string.hpp"
    using util::_Fmt;
    
    double total = 22.9499;
    const char * currency = "€";
    
    cout << _Fmt("price %+5.2f %s") % total % currency << endl;
boost::format is known to be about 10 times slower than printf — best to avoid it for performance-critical code.
This pervasively used handle type is based on the reference-counting smart-pointer of C++ (boost::shared_ptr, respectively std::shared_ptr since C++11). Typically this also includes a pointer to some kind of implementation service. Yet still, handles based on lib::Handle<TY> should not be confused with smart pointers. Rather, we use the ref-counting mechanism to invoke a custom clean-up callback when the last handle goes out of scope. Typically, the implementation service is kept entirely opaque, while the copyable handle objects also implement a front-end interface for client access.
tbw
When implementing services, we frequently encountered the situation that a manager object creates and owns some component elements or sub-services. The library provides two special collections, which take ownership of their contents and care for automatic clean-up. Especially, contrary to the STL containers, these custom containers support the use of non-copyable objects (as a rule, all objects with reference semantics are defined non-copyable in Lumiera).
the ScopedPtrVect is a vector taking ownership of heap-allocated objects
the ScopedCollection is a fixed-size collection, holding all the child objects within a single (heap-allocated) storage block
There is a family of holder objects, all based on placement-new of the contained object into an embedded buffer. The purpose is to “piggyback” an object inline, without the need for heap allocated storage. Frequently the motivation for this usage pattern is type erasure: the detailed knowledge context used to build some kind of object is discarded prior to further use, relying on generic information and the hidden parametrisation henceforth.
The C++ language has direct support for value semantics and allows to build value objects to be treated as first class citizens. Unfortunately this doesn’t fit well with the chosen approach to object orientation, where polymorphism relies on reference semantics. Thus, most of the fundamental design patterns drive us into having an object manager somewhere hidden within the implementation level, to manage the memory for maintaining the subclass instances to be concealed at the usage site.
To avoid this dilemma, we utilise the technique of the opaque holder to provide objects with value semantics, while actually placing the instance of a subclass into the inline buffer. Clients access this embedded object by automatic type conversion to the interface type, which gives us polymorphism. While the definition of such a beast is quite involved, the runtime overhead is surprisingly low. When compared with standard polymorphism, creating objects and invoking operations has zero overhead, while copying and assignment incur the cost of an additional virtual call, assuming the target objects cooperate by mixing in a copy management interface.
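In essence, the technique boils down to the following strongly simplified stand-in (assumptions: single inheritance with the interface as first base, a virtual destructor; the copy-management protocol mentioned above is omitted):

    #include <cstddef>
    #include <new>
    #include <utility>
    
    template<class IFA, std::size_t siz>
    class InlineHolder
    {
      alignas(std::max_align_t) std::byte buf_[siz];
      bool filled_{false};
      
    public:
      template<class IMP, typename...ARGS>
      IFA&
      emplace (ARGS&&... args)   // placement-new a subclass into the embedded buffer
      {
        static_assert (sizeof(IMP) <= siz, "embedded buffer too small");
        new(buf_) IMP {std::forward<ARGS> (args)...};
        filled_ = true;
        return access();
      }
      
      IFA& access() { return *reinterpret_cast<IFA*> (buf_); }  // clients see the interface
      
     ~InlineHolder() { if (filled_) access().~IFA(); }          // relies on the virtual dtor
    };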
a minimal interface for an array-like entity, exposing only references to the contained elements. Obviously this means using a virtual call for the subscript operation. This interface allows exposing something array-like without committing to a specific implementation type for the elements within the “array”. The Lumiera library provides a set of standard implementations of this lib::RefArray interface, including a vector-based and a directly array-based variant.
Generates 128-bit strong unique identifiers (non-cryptographic).
has an alternative representation able to store a pointer
may be extended with a strong (slow) and a fast (weak) variant in the future
Combines a user-readable ID, a (compile-time) type tag and a hash-ID. The latter is based on the symbolic ID and the type tag, which is discarded at runtime (type erasure).
Planned: a system of per-type lookup tables, based on EntryID, together with a type-specific access functor. Together, this allows translating transparently and type-safely from symbolic ID to object instance, which is a prerequisite for integrating a rule-based system. Besides, these tables allow unique IDs per type.
→ more details about this concept in the TiddlyWiki
Lumiera utilises several custom allocators, each tailored to a specific purpose. All these allocator front-ends share a common pooling allocation backend.
Currently (as of 2011) the low-level pooled allocator within the backend isn’t implemented; instead we just do heap allocations. See Ticket #231.
This allocation scheme is used within the context of the Builder and the Low-Level-Model. The predominant usage trend in this realm is to create and wire a family of small objects right after each other, within a build process. These objects are intended to work together and will be discarded all at once, after hot-swapping a new version of that model segment.
This allocation framework is used at various places when a large number of similar objects is expected to be coming and going. New objects are placement-constructed into the allocated space and immediately wrapped with a ref-counting smart-ptr to manage ownership.
Based on the allocator interface of the STL, allowing just for plain allocations and de-allocations without any further instance and lifecycle management. Currently (as of 2011) this allocator isn’t used much — it is conceivable that later we’ll detect some specific STL based containers to be performance critical with respect to allocation.
Fast memory pools for moderately small static sized allocations in highly dynamic situations.
optimized for cache locality
supporting a destructor callback to free all objects
Provides a small number of round-robin buffers which need not be freed. Must only be used locally, when no deep recursion may use tmpbufs. Using these wrongly (recursively) will result in corrupted data.
Very fast and efficient, from the smallest to the hugest allocations. No need to care for free().
tbw
tbw
tbw
tbw
ppmpl.h
Some wrappers for the C memory management functions malloc, calloc, realloc and free which never fail. In case of an error, the resource collector in the vault is invoked to free resources or do an emergency shutdown.
Safe wrappers for some string functions from the C library which likewise never fail. NULL strings are propagated as "" (empty strings).
Just a macro for simplifying vtable function calls: VCALL(Handle, function, arguments…) translates to Handle->vtable->function(Handle, arguments…). The user is responsible for setting up a vtable member in their data structures; this macro does some NoBug checks that self and the function are initialised.
Self-optimising splay tree.
Advantages:
can be iterated
very fast in situations where few common elements are queried most often

Disadvantages:
provides no own locking, needs to be locked from outside
(almost) every access is a mutation and needs an exclusive lock, which is bad for concurrency
Generic B+/B* tree implementation. Details are provided through a vtable by the actual implementation.
Fine grained (block level) locking with different modes
supports cursors to iterate over the data, in both directions
exact and inexact searches (what’s close before/after something)
Currently defunct, to be revived someday
planned
Cyclic double linked intrusive list.
Single linked variant.
A list where old entries are recycled from the tail: used items are removed from the list (ownership acquired) and released back to the head.
Items not used propagate towards the tail, where they will be reused.