I've called this a journal because it is a place where I will keep my writings and ideas. Not everything here is completed; in many ways it's the equivalent of an online notebook. Also, for some strange reason that I cannot fathom, the word "blog" seems to grate on me; so a journal it is.

Emesary implementation of C# WPF Application Messaging

I've tried to explain this many ways and it always ends up hard to understand, so this time I'm just trying code.

I've been using this technique since 1996 in various forms and it is always the cornerstone of my architectures. It is simple, extensible and efficient.

This technique will allow decoupled, disparate parts of a system (GUI, DB, and business logic) to function together.

C# .NET Emesary downloads

Base Class Diagram

This is the core of Emesary, with one recipient / sender (MainView) illustrated. All you need to do to get the whole thing to work is simply to implement IReceiver, connect to GlobalTransmitter, and use GlobalTransmitter.NotifyAll to send messages. That's all there is to it.

Implement the interfaces and construct yourself

To receive messages an object simply needs to implement IReceiver and register itself with the GlobalTransmitter.

    public partial class MenuView : UserControl, IReceiver
    {
        public MenuView(MenuMatrixEntities ds)
        {
            InitializeComponent();
            GlobalTransmitter.Register(this); // register to receive notifications
        }
    }

Connecting buttons

Using good old fashioned callbacks - except all we need to do is to launch an event, and eventually this event will land back at our Receive method in the form of a SelectRecipe message.

        private void addButton_Click(object sender, RoutedEventArgs e)
        {
            GlobalTransmitter.NotifyAll(new Notification(NotificationType.CreateRecipe));
        }

Notice how the backend logic is completely abstracted and the flow isn't decided by the UI. The UI is being driven by the logic.

Implement the event recipient

This is where all events are received. The objective here is to handle anything within the class that is directly related to the class itself. It is permitted, even encouraged, to dispatch further notifications from within this method.

        public ReceiptStatus Receive(Notification message)
        {
            if (message.Type == NotificationType.CreateRecipe)
            {
                Recipe new_item = new Recipe();

                // now notify that a new recipe has been created and select it
                GlobalTransmitter.NotifyAll(new Notification(NotificationType.SelectRecipe, new_item));
                return ReceiptStatus.OK;
            }
            else if (message.Type == NotificationType.SelectRecipe)
            {
                if (message.Value is Recipe)
                {
                    recipe = message.Value as Recipe; // the property takes care of the UI controls
                    return ReceiptStatus.OK;
                }
            }
            return ReceiptStatus.Fail;
        }


The Notification class is simple and designed to be inherited from. A notification is an enumerated type and an object. The type identifies the purpose of each notification; the value can be anything, and using RTTI / reflection we can easily extract it.
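
The idea can be sketched in Java (the library itself is C#; the NotificationType values here are illustrative):

```java
// Sketch of the Notification idea: an enumerated type that identifies the
// purpose of the notification, plus an arbitrary value payload that the
// receiver can inspect with RTTI / instanceof.
enum NotificationType { CreateRecipe, SelectRecipe, IsAuthorisedTo }

class Notification {
    public final NotificationType type;
    public final Object value;

    public Notification(NotificationType type) {
        this(type, null);
    }

    public Notification(NotificationType type, Object value) {
        this.type = type;
        this.value = value;
    }
}
```

A receiver first checks `type` to decide whether the notification is relevant, then uses `instanceof` (or a cast) on `value` to recover the payload.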


The order in which objects are created and registered with each transmitter is important, as an individual recipient can provide a canonical response of Abort or Finished - at which point the event will be considered by Emesary to have been processed.
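
The dispatch order and the early exit on a canonical response can be sketched in Java (illustrative names; the real library is C#):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a transmitter that notifies receivers in registration order and
// stops as soon as one returns a canonical Abort or Finished status.
enum ReceiptStatus { OK, Fail, Abort, Finished }

interface Receiver {
    ReceiptStatus receive(Object notification);
}

class Transmitter {
    private final List<Receiver> receivers = new ArrayList<>();

    public void register(Receiver r) {
        receivers.add(r);
    }

    public ReceiptStatus notifyAll(Object notification) {
        ReceiptStatus result = ReceiptStatus.Fail; // unhandled by default
        for (Receiver r : receivers) {
            ReceiptStatus rs = r.receive(notification);
            if (rs == ReceiptStatus.Abort || rs == ReceiptStatus.Finished)
                return rs; // the event is considered processed; stop here
            if (rs == ReceiptStatus.OK)
                result = ReceiptStatus.OK;
        }
        return result;
    }
}
```

Because dispatch follows registration order, a receiver registered early can claim an event (Finished) or veto it (Abort) before later receivers ever see it.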


Authorisation is one of those difficult areas to get right. Ideally it needs to be built into everything from the beginning at the lowest level, and yet it still needs to be flexible because we may want to change or enhance the authentication code. Traditional solutions such as a global or singleton restrict this, however by using a notification our class can easily find out if it is permitted to do something.

AuthorisationRequestNotification class

The following is the definition of the Notification that is at the core of the system. It simply takes an Action and an Object Identification and requests that something authorise this action.

    public enum AuthorisationAction
    {
        Action // i.e. command / select / click
    }

    public class AuthorisationRequestNotification : Notification
    {
        public AuthorisationAction action { get; set; }
        public int position { get; set; }

        public AuthorisationRequestNotification(AuthorisationAction action, string objectId)
            : base(NotificationType.IsAuthorisedTo, objectId)
        {
            this.action = action;
        }
    }

Getting authorised

Within any object it becomes a simple matter to verify whether an operation is authorised: simply send a message and check the return value. As Emesary returns ReceiptStatus.Fail by default for any unhandled notification, this mechanism will not authorise unless the object(s) handling the authorisation request allow it.

if (GlobalTransmitter.NotifyAll(new AuthorisationRequestNotification(AuthorisationAction.View, "CostingView"))
    != ReceiptStatus.Fail)
{
    // The action is permitted
}

Inversion of control

There are many ways to achieve IoC; however Emesary solves the problem simply. Taking the lamp and button example, we don't need five classes to achieve this; all we need is a Button object, a Lamp object, and Emesary joining them together.

When I create an object of class Lamp it will automatically work; I can add extra buttons and lamps at will and connect them easily and simply. In the example that follows I've used LampToggle, because using LampOn and LampOff notifications would require the button itself to receive and handle notifications in order to show the current state when more than one button is present.

    public partial class Button : IReceiver
    {
        public Button()
        {
        }

        private void addButton_Click(object sender, RoutedEventArgs e)
        {
            GlobalTransmitter.NotifyAll(new Notification(NotificationType.LampToggle, "Basement-Lamp"));
        }
    }

    public partial class Lamp : IReceiver
    {
        public string lamp_id { get; set; }

        public Lamp(string id)
        {
            lamp_id = id;
        }

        public ReceiptStatus Receive(Notification message)
        {
            if (message.Type == NotificationType.LampToggle
                && (string)message.Value == lamp_id)
            {
                // do something to toggle the lamp state.
                return ReceiptStatus.OK;
            }
            return ReceiptStatus.Fail;
        }
    }

Class diagram worked example

Other benefits


The GlobalTransmitter is a singleton that provides an easy and quick way to connect an entire application up without really having to worry about where to place the individual Transmitters and how to register everything.

It is possible to use Transmitters on a per-form basis, but there are very few times that this complexity is either required or useful.

GlobalTransmitter works well and is simple, covering 99% of requirements.

There are times when it is necessary to have more than one transmitter.


Further reading

Emesary design notes

Building a modern application in .NET

Recently I was fortunate enough to be in the position of having a completely free hand to choose the technologies for a desktop application, which is a double-edged sword. The only technical brief was that it had to be a Windows Desktop application.

So, where to start. Firstly, and quickly, I ruled out MFC and C++ Builder / Delphi (in a turnaround from 5 years ago, when it would have definitely been C++ Builder).

It had to be either .NET or Java, and given that the target platform was Windows, and only ever going to be Windows¹ it had to be .NET. The reason for .NET is based on the experience of building a complex Java application (see Java Avionics Training Platform). Java is great at what it does, but it is never long before stumbling across something that can't be done because Java is one small step too far away from Windows.

That meant it had to be C# .NET.

So it was either WinForms or WPF, which took a lot of research and quick prototypes before deciding that WPF was the only really viable possibility given the choice of building something new.

That meant it had to be WPF, C# .NET

Now all I had to do was to figure out how to implement the DataModel. I'm a big fan of ADO.NET as it's easy to fit directly into a data layer, but whilst looking into WPF I discovered The Entity Framework In Layered Architectures and rapidly came to the conclusion that the Entity Framework sat better between the database and WCF/WPF: it felt better, and there was a lot less glue to write.

That meant it had to be EF, WPF, C# .NET

Having now built most of the application (DB->EF->WPF desktop application) I'm still happy that I went with the EF & WPF. The EF does most of the grunt work for me and I think it maps really well between relational and object oriented. It's also been a joy extending the model and having a datamodel which features a hierarchical data structure that is easy to traverse when compared to the difficulties of doing this in SQL.

This process proves that it is still important to validate all requirements before committing to any architecture. I built a few simple proof of concept applications that were used to validate that my requirements could be met.

¹ - Until enough demand arrives for other platforms.

C# WPF Instructor Station Development notes

Developing from ground up a new IOS using WPF & C#

Basic Design

Event bus

Previously for the event driven inter object communications in the Java MFD training I’d used a derived version of ARINC429. This worked well although it was clumsy in use.

Basic structure was to have a Message, within which there were multiple MessageDataItems.

The standard way of receiving a message is to implement the interface, decode each specific message type, and then iterate through the contained messages. This allows for great flexibility and the ability to have complex messages which are flexible and generic. The identified problem is that this is too generic and could probably be better served by extending the message class and containing the data within it, as is implemented in Emesary implementation of C# WPF Application Messaging.

The old way

    public long Receive(Message message)
    {
        if (message.getSystemIdent().Equals(SystemIdent.FlightModule))
        {
            foreach (MessageDataItem mr in message.getMDIList())
            {
                switch (mr.getItemType())
                {
                    case MessageDataItem.Aircraft_Altitude:
                        ac_altitude = mr.getDValue();
                        break;
                    case MessageDataItem.Aircraft_Latitude_Degrees:
                        ac_lat = mr.getDValue();
                        break;
                }
            }
        }
        return 0L;
    }

and to send a message

        Message m = new Message(SystemIdent.GlobalPositioningSystem);
        m.addMessageItem(new MessageDataItem(MessageDataItem.Aircraft_Altitude, ac_altitude));
        m.addMessageItem(new MessageDataItem(MessageDataItem.Aircraft_Latitude_Degrees, ac_lat));
        m.addMessageItem(new MessageDataItem(MessageDataItem.Aircraft_Longitude_Degrees, ac_lon));
        m.addMessageItem(new MessageDataItem(MessageDataItem.Aircraft_Heading, ac_heading));
        m.addMessageItem(new MessageDataItem(MessageDataItem.Aircraft_TAS, ac_true_airspeed));
        ExecSystem._Transmitter.NotifyAll(m);

Using Emesary

When beginning to redevelop / refactor the existing code it became obvious that the above structure could and probably should be replaced for internal comms within the same application. To bridge the message bus across to a different application it’ll be a case of packing and / or unpacking, but within the same application space we can transmit and receive using objects – which is quicker and easier.
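
As a sketch of what this looks like (Java for illustration; a name like AircraftStateNotification is my own, not the project's), the packed list of MessageDataItems above collapses into a single typed notification whose fields are read directly:

```java
// Sketch: within one application space, a typed notification subclass replaces
// packing and unpacking individual MessageDataItems.
class Notification {
    public final int type;

    protected Notification(int type) {
        this.type = type;
    }
}

class AircraftStateNotification extends Notification {
    public static final int TYPE = 1; // hypothetical NotificationType value
    public final double altitude, latitudeDeg, longitudeDeg, heading, trueAirspeed;

    AircraftStateNotification(double alt, double lat, double lon, double hdg, double tas) {
        super(TYPE);
        altitude = alt;
        latitudeDeg = lat;
        longitudeDeg = lon;
        heading = hdg;
        trueAirspeed = tas;
    }
}
```

The receiver performs one type check and one cast instead of iterating a list of tagged items; only at the application boundary does anything need to be packed into a wire format.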

Databinding and datapool

What will be neat, if I can get it to work, is to allow databinding directly from the XAML to elements in the datapool. Then the Simulator Host communications module can modify the bound items directly, and the rest will be taken care of automatically. Having only seriously used databinding with the EF before, it may be hard to get this to work, but what I'd like to have is:

   <IOS:DigitalReadout Datapool="{Binding RAXLAT}" Label="Latitude" />

I will also probably consider binding the Label to a language system so that we can translate it.


Pages are built from XAML. Design objective here is to completely avoid the need for any code behind and to partition the pages out so that they can be built using either Visual Studio, Expression Blend, or any other XAML editor.

All of the IOS elements such as Buttons, Readouts, Plots, Maps will be implemented in one or more Custom control libraries.

IOS Processes and components

The IOS must be split into components, to provide the required parts. A monolithic approach has proven to be neither workable nor maintainable.

To achieve this a basic IOSProcess class is defined. The requirement is to allow this to be linked into an assembly, or to be a standalone process (as requirements dictate). To achieve this flexibility we will be relying on Emesary to provide the communications between all processes.

Class diagram of IOS Process

Details of inter-process communications within the Instructor Station

As you can see from the above diagram, Emesary is a core component of the ExecNode (name subject to change). What is happening is that each IOS process must implement the required interfaces; by doing so, and by becoming part of the Emesary Transmitter located in the Exec, each process is isolated from the others.

In the first instance the Exec implementation of the Transmitter / Recipient will be a simple forwarder. When working in a multi-process environment this will become an asynchronous operation. This is slightly contrary to the Emesary design and will require careful consideration to ensure that the Emesary implementation does not become a bottleneck. In the first instance message routing and filtering together with a consistent approach will be used to get this right.

A simple example is the datapool interface. Traditionally datapool has been a block of shared memory and it may remain like this, however the interface to an individual symbol needs to be isolated from this as the access to datapool could be implemented by IPC using Emesary.

The future.

As the project progresses I will keep this page updated.

C++ Basic Framework Design for a Simulation System

This set of code is derived from the work performed building the Direct2Learning Java Avionics training platform (now defunct).

It uses multiple inheritance; however the base classes are effectively pure virtual, to provide interfaces.

The code sample is missing the complicated ExecScheduler which is where the scheduling of modules is performed. Writing the ExecScheduler is reasonably complex - it needs to work with threads and provide timing. I've got a Java version of this which is at the end for illustration.

This code is a prototype model for proof of concept - it compiles but won't run because there are too many required subsystems missing.

#include "stdafx.h"
using namespace std;

// Fundamental element of simulation - an ExecModule is a concise unit of simulation work. It will
// be called by the main real time executive at the frequency specified by the getExecRate() method.
// Communication between modules is either via datapool, or by using BusMessages.
class ExecModule
{
public:
    virtual bool initialise(long time_ms) = 0;
    virtual long run(long ms) = 0;
    virtual long getExecRate() = 0;
    virtual string getModuleDescription() = 0;
};

class GeoCoordinate
{
public:
    GeoCoordinate(double lat, double lon, double alt);
};

class Model
{
public:
    virtual void DifferentialEquations() = 0;
    virtual void AlgebraicEquations() = 0;
};

class State
{
public:
    Value Prime();
    void Prime(Value &v);
    State operator *(State c);
};

class AircraftModel : public ExecModule, public Model
{
    State x1, x2;
    State x3;
    InputDouble u;
    InputBool flag1, flag2;
    AlgebraicDouble x1x2;
    Model *tw1, *tw2; // engine (pointers - Model is abstract)
    Model *gear;
    Model *isa;
    TrimRoutine HorizontalFlight;
    TrimRoutine OnGround, General;
    ConstantDouble c1, c2;
    ConstantInt ci1;

public:
    virtual void DifferentialEquations()
    {
    }

    virtual void AlgebraicEquations()
    {
        x1x2 = x1 * x2 + x1.Prime();
    }

    // Required for ExecModule
    string getModuleDescription()
    {
        return "Aircraft Model";
    }

    long getExecRate()
    {
        return 33L; // ms (30hz)
    }

    long run(long ms)
    {
        return 0L;
    }

    bool initialise(long time_ms)
    {
        // called by the Exec (in sequence) when initialisation is required.
        return true;
    }
};

class SimLoad
{
    // exec modules to load
    class Model *aircraft_model;
    class Model *engine_model;
    class Model *aerodynamics_model;
    class GPSSimulator *gps;
    class FeaturesDataProvider *runways;
    class ArincDB *arincDB;
    class ExecSystem *execSystem;

public:
    SimLoad()
    {
        engine_model = new EngineModel();
        aerodynamics_model = new AeroDynamicsModel();
        aircraft_model = new AircraftModel();
        arincDB = new ArincDB();
        gps = new GPSSimulator();

        // ensure that the simulated systems are loaded in the correct
        // sequence. Notice that the exec system provides two schedulers which
        // we select manually to allow for threading. Each thread within the exec is
        // synchronised at the start of each frame (iteration) however within each frame
        // each thread is free running so care needs to be taken when scheduling dependant
        // modules across different threads.

        runways = new ArincRunwayProvider(arincDB);

        execSystem->start();
    }
};

int _tmain(int argc, _TCHAR* argv[])
{
    return 0;
}

import java.util.Date;
import java.util.Enumeration;
import java.util.Timer;
import java.util.TimerTask;
import java.util.Vector;

/**
 * Title: System Executive Module Scheduler
 * Description: Schedules the execution at the rates specified by the modules {@link ExecModule}. Monitors
 * for module and system overruns and notes these.
 * There could be many exec schedulers running; however any individual ExecModule must only be invoked by one
 * scheduler. This is a very simplistic approach to scheduling but avoids the problems with thread safety, and is
 * therefore more efficient, less susceptible to race conditions, etc.
 * Using a thread-based approach is akin to anarchy by comparison as there is little overall control over the
 * sequencing and timing. With the simulated systems that we have the sequencing is important, and also the concept
 * of a frame based iteration such that the modules are all executed once per frame in the right sequence.
 * The disadvantage of this method is not automatically taking advantage of extra (multiple) CPUs; however this
 * is outweighed by the advantage of being able to control the schedule of operating process.
 */
public class ExecScheduler
{
    Timer     timer;
    ExecTimer ET;
    Vector    ModuleList; // list of modules to execute

    /**
     * rate at which this thread is iterated which is equivalent to the maximum frame rate of the system.
     * (reliant on the java thread callback accuracy)
     */
    private long ms_rate = 1000 / 20; // hz
    timerExec timer_exec;
    static Date current_date = new Date();

    private class sModule
    {
        private ExecModule EM;
        long               last_exec_time;
        ExecTimer et = new ExecTimer();
        long peak_time = 0, avg_time = 0, overruns = 0;

        /**
         * Initialise, set up exec timer and make note of related exec module
         * @param _EM ExecModule
         */
        sModule(ExecModule _EM)
        {
            last_exec_time = et.getElapsedTime_mS();
            EM = _EM;
        }

        /**
         * Entry point for exec. handle scheduling, monitor run time.
         * @return long
         */
        long run()
        {
            // use actual elapsed time, as we cannot guarantee hard timing
            // and accurate scheduling.
            long timeval = et.getElapsedTime_mS() - last_exec_time;
            if (timeval > EM.getExecRate())
            {
                last_exec_time = et.getElapsedTime_mS();
                long rv = EM.run(timeval);
                long exec_time = et.getElapsedTime_mS() - last_exec_time;
                peak_time = Math.max(exec_time, peak_time);
                if (avg_time == 0)
                    avg_time = exec_time;
                avg_time = (avg_time + peak_time) / 2;
                return rv;
            }
            return 0;
        }

        /**
         * Initiate the exec module initialisation
         * @param time long
         * @return long
         */
        long init(long time)
        {
            ExecTimer et = new ExecTimer();
            EM.initialise(time);
            System.out.println("ExecInit: " + EM.getModuleTitle() + "@" + 1000.0 / EM.getExecRate() + "hz, took " + et.getElapsedTime_mS() / 1000.0);
            return 0;
        }

        /**
         * Count overruns for this module. A module is considered to have overrun if it takes more than its frequency
         * timeslice and therefore slews the next exec.
         */
        public void notify_overrun()
        {
            overruns++;
        }

        /**
         * dumpStats
         */
        public void dumpStats()
        {
            System.out.println("Exec: " + EM.getModuleTitle() + " peak " + peak_time
                               + " avg_time " + avg_time + " Overruns " + overruns + " Rate " + 1000 / EM.getExecRate() + "hz");
        }

        public ExecModule getModule()
        {
            return EM;
        }

        public long get_avg_time()
        {
            return avg_time;
        }
    }

    public ExecScheduler()
    {
        ModuleList = new Vector(100);
        timer = new Timer();
        ET = new ExecTimer();
    }

    class timerExec extends TimerTask
    {
        private long ms_rate;

        /**
         * timerExec
         * @param _ms_rate long
         */
        public timerExec(long _ms_rate)
        {
            ms_rate = _ms_rate;
        }

        long fc = 0; // frame counter
        long spare_ft_percent = -1;
        long overrun; // overrun counter.

        public long getSpareTimePercent()
        {
            return spare_ft_percent;
        }

        /**
         * Heart of the scheduler. Called by the Java runtime thread at the specified frequency. Iterate through the loaded
         * modules and run if necessary (handled by sModule). Collect stats for monitor.
         */
        public void run()
        {
            Enumeration E = ModuleList.elements();
            long ms_since_last_frame = ET.getElapsedTime_mS();
            long variation = ms_rate - ms_since_last_frame;

            // Set the current date here for efficiency and possible record replay/snapshot type of operations.
            current_date = new Date();

            if (ms_since_last_frame > ms_rate && Math.abs(variation) > 20)
            {
                overrun++;
//              System.out.println("Overrun #" + overrun + " v" + variation + " ms" + ms_since_last_frame + "[" + ms_rate);
            }

            while (E.hasMoreElements())
            {
                sModule sm = (sModule) E.nextElement();
                // pass value of elapsed time since the last frame - not the elapsed time
                // since this last invocation.
                sm.run();
                if (ET.getElapsedTime_mS() > ms_rate)
                    sm.notify_overrun();
            }

            // now calculate the frame timing, %spare and average spare frame time
            spare_ft_percent = 100 - (ET.getElapsedTime_mS() * 100 / ms_rate);

            if (ET.getElapsedTime_mS() > ms_rate)
                spare_ft_percent = 0;
        }
    }

    /**
     * dumpStats
     */
    public void dumpStats()
    {
        Enumeration E = ModuleList.elements();
        System.out.println("Exec Scheduler stats");
        long avg_time = 0;
        while (E.hasMoreElements())
        {
            sModule sm = (sModule) E.nextElement();
            sm.dumpStats();
            avg_time += sm.get_avg_time();
        }
        long spare_ft_percent = 100 - (avg_time * 100 / ms_rate);
        if (timer_exec != null)
            System.out.println("Exec: Avg free time " + timer_exec.getSpareTimePercent() + "% " + avg_time + "%");
    }

    /**
     * start - init rate, and init exec modules
     * @param _ms_rate is the length in milliseconds of a frame.
     */
    public void start(long _ms_rate)
    {
        if (_ms_rate != 0)
            ms_rate = _ms_rate;

        Enumeration E = ModuleList.elements();
        System.out.println("Exec Scheduler start");
        while (E.hasMoreElements())
        {
            sModule sm = (sModule) E.nextElement();
            sm.init(ET.getElapsedTime_mS());
        }

        timer_exec = new timerExec(ms_rate);
        timer.scheduleAtFixedRate(timer_exec, 0, ms_rate);
    }

    /**
     * start - no rate, so call other method with zero which will init
     * modules but not init the ms_rate
     */
    public void start()
    {
        start(0);
    }

    public void stop()
    {
        if (timer_exec != null)
            timer_exec.cancel();

        timer_exec = null;
    }

    /**
     * Adds module that isn't already present to the execution list
     * @param EM ExecModule
     * @return boolean
     */
    public boolean addModule(ExecModule EM)
    {
        Enumeration E = ModuleList.elements();
        while (E.hasMoreElements())
        {
            sModule sm = (sModule) E.nextElement();
            if (sm.getModule() == EM)
                return false;
        }

        ModuleList.add(new sModule(EM));

        return true;
    }

    /**
     * Removes a module from the execution list
     * @param EM ExecModule
     * @return boolean
     */
    public synchronized boolean removeModule(ExecModule EM)
    {
        return true;
    }

    /**
     * Removes all modules from the execution list
     * @return boolean
     */
    public synchronized boolean removeAllModules()
    {
        return true;
    }

    /**
     * @return Date in the simulated environment. This is usually the same
     * as the system date; however it could be different, eg. record replay.
     */
    public static Date getCurrentDate()
    {
        return current_date;
    }
}

Creative difficulties

Sometimes a proposal is rejected that is exactly what the client needs, even when it wasn't what they thought they wanted; and it always reminds me of Mr. Wiggin, the legendary fictional architect, and his famous outburst that I've reproduced below.

MR. WIGGIN: Yes, well, that's the sort of blinkered, philistine pig ignorance I've come to expect from you non-creative garbage. You sit there on your loathsome, spotty behinds squeezing blackheads, not caring a tinker's cuss for the struggling artist. You excrement! You whining, hypocritical toadies, with your colour TV sets and your Tony Jacklin golf clubs and your bleeding Masonic secret handshakes! You wouldn't let me join, would you, you blackballing bastards! Well, I wouldn't become a freemason now if you went down on your lousy, stinking knees and begged me!

Sometimes comedy mirrors life perfectly. Sometimes I wonder why we bother, then I remember that it's because we love creating innovative solutions, and that it doesn't always work out the way it should at first, so we should continue and find another way to get the right thing done.

Database creation, migration and how it works with source control (SVN) and projects.

Database version control has been something that used to cause me problems, a lot of problems, because it wasn't within the normal controlled sources. This had to change - and it isn't something that is easy to do as it requires discipline because the databases don't really integrate with any source control system. (Correct me if I'm wrong - I'll be very pleased).

My development sources have been controlled since the early 90's, but when Delphi came along I was genuinely stunned - it had a database, a real one, included with it. It was great - I'd been using databases since the late 1980's - but they always seemed to need a large machine (part of the reason I got myself a VAX 3250 - but that's another story).

Now, I've been a huge fan of source control since I first discovered it in 1988 with RCS - it was stunning and when we managed to get the entire system to build from SCCS in around 1989 that was unbelievable - because it was controlled and traceable.

Enter the database and it all started to go astray - not at first - but all of the old problems of versioning started happening with the databases, it hurt and it had to be stopped.

In reality the solution seemed simple. We'd keep an exported copy of the database within the source tree and in theory it should all be cool. Except that it didn't quite work like that - problems started to appear between versions - and eventually the unthinkable happened in that the database became out of sync with the source tree - and worse still it was unrecoverable. This was a major problem - the database that we had on the development server was fine - but the database that was controlled was broken. Despite the changelogs it was impossible to find out what actually happened.

So, a new approach was called for, and after many iterations this is how I would control the database.

Starting from the first moment of development and continuing forever there is a script - db-create.sql - which does exactly what you'd imagine (starting with DROP DATABASE).

During development up until the first live deployment this is run often - and modifying the DB via other tools is permissible - but under caution to update the script, as otherwise changes will be lost. By ensuring that db-create is run often (and by definition it must create the database in a usable state) we have solved the problem of quick patches to tables and stored procedures.

Once the system reaches a significant milestone (usually deployment or release), the db-create script is frozen - ensuring that the version matches the release version.

After this point any changes have to be made via a new script, equally imaginatively entitled db-migrate.sql. This script is required to take the database from the version of db-create and make it match the requirements for the current version under control. Again it will be executed frequently - except that a database restore from the release version is performed first - the concept here is to ensure that the development database mirrors the released version and that the migration process works and is actually usable.

In practice the two scripts often get polluted with auto-generated SQL from one of the many tools that allow database manipulation; whilst this is annoying, that's all it is, and it can easily be tidied by those more expert with SQL.

The key part of the process is to ensure that the scripts are run frequently - in fact it often helps the development process precisely because the database is a lot cleaner.

It is also often worth having a db_version table that tracks the current database version, and using your judgement to decide if you want to keep track of the changes that have been applied - for example in a separate table, or in the version table.
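
As an illustrative sketch (Java; the script naming scheme is hypothetical), the value in db_version tells you exactly which migration scripts still need to be applied:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: compute the ordered list of migration scripts to run, given the
// version recorded in the db_version table and the version the code expects.
class MigrationPlanner {
    static List<String> scriptsToRun(int dbVersion, int targetVersion) {
        List<String> scripts = new ArrayList<>();
        for (int v = dbVersion + 1; v <= targetVersion; v++) {
            scripts.add("db-migrate-" + v + ".sql"); // hypothetical naming scheme
        }
        return scripts;
    }
}
```

A deployment tool built on this idea runs each listed script in order and updates db_version as it goes, so a half-completed migration is detectable.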

Designing a plugin architecture for an application

There simply isn't an application or software system that wouldn't be improved by having a plugin architecture - but it seems harder to implement.

Or at least it seems like it would be harder, in fact it's really a redistribution of labour.

Often when talking about software systems or applications we talk about the core. The term is overused and is an all too convenient metaphor for putting a lot of code together.

Sure, there are times when this is necessary, in fact all systems have to have some sort of core, however the defining line between core and non-core is often blurred to such an extent that the core ends up creeping outwards.

One way to prevent this is to design a plugin architecture from the start - and to continually ask yourself the question whether a set of functionality should be in the core or not.

The biggest mistake that is made when designing a plugin architecture is to start differentiating between the actions that plugins will take. This is wrong as your design starts to be artificially constrained from day 1.

By their very nature plugins provide extensibility - so to pre-classify this extensibility into areas limits what the plugins may perform.

This differentiation is often deemed necessary to define the interface between the plugin and the core, in terms of methods, class structure and data.

How it should be done is by using messaging (events). There are many ways of doing this; however the best suited for a plugin is class-based inter-object communication.

So the class structure for any plugin becomes much simpler and more flexible:

class GenericExtension implements Receiver {
    public String getVersion();
    public void register(MessageSender msg);
    public long getRequiredNotificationTypes();
    public long getReceiveRate();
    public long Receive(Message Msg);
}

That's all that the plugin needs to provide. During initialisation the application core will discover (via some mechanism) all available plugins. It will then load and call the register method providing as a parameter the MessageSender that this plugin is associated with. This is necessary to allow the plugin to communicate back to the hosting application.

From this point all of the communications with the plugin are via messages - that will arrive at the receive method - based on the requiredNotificationTypes that the plugin provides.
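
A minimal sketch of that filtering (Java; it assumes, as the interface above suggests, that notification types are combined into a long bitmask):

```java
// Sketch: the host forwards a message to a plugin only when the message's type
// bit falls within the set the plugin declared via getRequiredNotificationTypes().
class PluginDispatcher {
    static boolean wants(long requiredNotificationTypes, long messageType) {
        return (requiredNotificationTypes & messageType) != 0;
    }
}
```

The host checks this predicate before calling a plugin's Receive method, so plugins never pay for messages they did not ask for.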

There can be many different MessageSender instances in a more complicated application - with MessageRouters between them to manage the global message load - however in most systems that are not dealing with large traffic volumes (>10,000 messages) this is usually not necessary.

Once the plugin is no longer required it can notify the host of this - or the host can notify it. In either case, once acknowledged with success, the host can unload the plugin. If a plugin refuses to respond to an unload request the host is responsible for deciding what to do.

This allows the plugin to function as part of the application - receiving messages and transmitting seamlessly - which is fantastic for things such as filters and processing data.

Direct database access from plugins must be discouraged - unless it is appropriate, and it usually isn't. So, again, database access must be via messages. This may seem slow but it really isn't much overhead - the database messages can be routed directly to the database object. Equally it may be more appropriate to perform access via an API - for example to a set of core business functions. In this case a shared library is more appropriate than overloading the message system.

Using shared libraries with plugins and core code can present a versioning problem so in many ways it should be discouraged - ideally a plugin will be self-contained.

Usually the plugin will require a UI - and here lies another set of pitfalls. If portability of the plugin is important then the UI cannot be contained within the plugin - in which case the only solution is a generic UI (which is fine for web systems), or to use a data driven forms based UI which splits the UI into generic components that are requested by the plugin. This method warrants further explanation, but is outside of the scope here.

So, you have plugins that receive messages, a core system that sends messages and a nicely integrated system. When a new item is plugged in it will fit within this. There are still pitfalls - plugins may interoperate badly with each other - something that only testing and design can solve.

However the goal has been achieved - any number of plugins performing any number of different functions in an extensible way - all simple.


Emesary : Efficient inter-object communication using interfaces and inheritance

This is a technique that I have been using for a very long time to keep the inner workings of code cleaner and more decoupled, and easier to maintain and extend.

In essence it is nothing new - event driven systems have been around for a very long time. What makes this way of doing things different is that it is very lightweight.

Why do I need inter-object communications when I already have events from the window system?

I've implemented this system a fair few times on many different systems, it is so lightweight and transparent that it doesn't need to affect the whole system.

This technique lets you safely connect disparate tiers in a managed and predictable way.

Most of the time, as the project progresses, the rest of the team start to notice and really grasp what you can do with class-based inter-object communication.

It differs from the native window events that you get in any windowing system (Win32, X, Qt): with those it is always difficult, and very unportable, to manage and process your own events, often referred to as user events.

What you can't easily do with most of these schemes is have localised notifications, pass objects around in the message, or stop half-way through processing.

You can have a very localised messaging system (e.g. on a Form) or something bigger that is used by a whole system.

A worked example of the PostOffice notification system

Take the simple case of a button to cancel an order. It is usual to have an event handler; often the IDE will provide you with the code and leave the cursor blinking. Start typing, add the code to tell the business object that the order has been cancelled, and it's finished. Except that, because the order has been cancelled, the other buttons to confirm or modify the order should be disabled, and the button to create a new order needs to be enabled. Not a problem, just add the button_NAME->enabled calls.

However this is wrong - for a number of reasons. Firstly one button is controlling others; secondly the code to manage button states starts to become scattered; and lastly it should be up to the business object to decide which actions are applicable, with the UI taking responsibility for presenting them.

So what we need is tier interoperation, where the business logic can join in with the UI and tell it what the available options are. Some people argue that the business tier should know nothing about the UI, that it is essential to keep the UI completely separated, and that this is a line that shouldn't be crossed.

Tier interoperation - or integration with backend business logic

Of course, that all sounds good in theory - keeping all of the tiers separated, avoiding callbacks etc. - but in reality it is often so very wrong.

The business logic often needs to be able to tell the UI something - maybe we add some methods, typically something like "order->is_Cancel_Available()" which returns true when an order can be cancelled, but wait - we've just effectively linked the business and UI tiers. Except that we haven't, because the UI doesn't have to call the function and because the business tier can ignore a cancel request - returning false or throwing an exception.

It's also getting worse now. We need to ask the business logic whether the options are available so that we can display the appropriate state. This means that it is possible, nay probable, that a call will be missed and in certain circumstances a button will be shown as enabled when it isn't. Usually messy rather than a big issue - at least in this simple example - but taking things further the consequences can be much worse.

So what we need is to link the business logic to the UI, so that the business tier can tell the UI that something has changed. Using Emesary we can easily achieve this.
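As a concrete sketch of that idea (illustrative Python with invented names, not the C# implementation linked above): the cancel button only raises a notification; the business object decides what is now possible and notifies the UI of the new button states.

```python
# Illustrative sketch: the business tier drives the UI over a shared bus.
class Transmitter:
    def __init__(self):
        self.recipients = []
    def register(self, recipient):
        self.recipients.append(recipient)
    def notify_all(self, notification):
        for recipient in self.recipients:
            recipient.receive(notification)

class Order:
    """Business tier: owns the rules about what can happen next."""
    def __init__(self, bus):
        self.bus = bus
        self.cancelled = False
        bus.register(self)
    def receive(self, n):
        if n["type"] == "CancelOrder" and not self.cancelled:
            self.cancelled = True
            # the business tier tells the UI what is now allowed,
            # rather than one button controlling the others
            self.bus.notify_all({"type": "OrderStateChanged",
                                 "can_cancel": False, "can_create": True})

class ButtonPanel:
    """UI tier: takes responsibility for reflecting the allowed actions."""
    def __init__(self, bus):
        self.enabled = {"cancel": True, "create": False}
        bus.register(self)
    def receive(self, n):
        if n["type"] == "OrderStateChanged":
            self.enabled["cancel"] = n["can_cancel"]
            self.enabled["create"] = n["can_create"]
```

The click handler now reduces to `bus.notify_all({"type": "CancelOrder"})`; no button ever touches another button.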

Does this allow queuing and asynchronous messaging?

No - by design decision, after long thought, discussions and peer review. Both queuing and asynchronous operation are things that break the underlying goals.

Firstly any message recipient can have one of four return states:

  1. Success
  2. Failure
  3. Abort
  4. Finished

In practice most of the time Success and Failure can be considered equivalent (more on that later) in that event notification will continue. Abort and Finished are also closely equivalent in that event notification will stop.

Can objects communicate across process boundaries?

Easily - and still cheaply, using shared memory or sockets. However this protocol isn't designed to provide automatic registration and service discovery, so the processes need an arbitrated method of first establishing the connection. Once the connection is established a bridge between the PostOffices is built and event processing can continue.

Cross process boundary communication is simple and only requires two objects.

Can objects communicate when located on different systems?

Easily, and in much the same way as across process boundaries, although shared memory might not be an option.

Isn't it better to use CORBA, DCOM, RPC, SOAP etc.?

For certain things they are much better - but much harder to use, and more prone to failure.

This technique tends to keep things much more under the developer's control - and equally it is not as good where information needs to be shared between disparate systems, where SOAP (etc.) would be better.


What I've presented here is the outline of a system that really works best within a process, or across a set of closely-coupled processes. It was never designed to replace complex mechanisms such as DCOM or CORBA.

Emesary Implementations

PHP version : emesary.php (4.92 KB)
zx_postoffice.h (6.43 KB)
C# .NET version : Emesary.cs (2.95 KB)

Emesary implementation of a message based real time system

This entire entry is really a set of notes to remind me of an idea I've just had....
There are many ways of implementing a real time executive, i.e. a system that takes responsibility for loading, running and monitoring individual sections of code.
Sitting here just now looking at some code related to the simulation of an aircraft, specifically the equations of motion, aerodynamics, engines and navigation systems, it occurred to me that there may be a better way of putting together the essential areas, namely

The basic idea, and it is just an idea at the moment, is to try to do this using something similar to my favoured approach for many things (Emesary).

Providing that the modules are registered on the event notification bus in the correct order, the scheduler and the notification pass can effectively become one procedure. We have one message that is passed around to all modules at 30Hz (or whatever rate is required). This message effectively carries the shared and private data, maybe with some conventions attached. Each module that receives the message performs its required processing and finishes, after which the message continues to the next module.

Areas that initially cause me concern are performance and multiple threads/processors/systems.
Performance concerns can be easily addressed by simply containing the datapool within the message, and passing the message by reference or by handle.

However, I'm falling back on tried and trusted methods for the multi-threaded approach, and these don't necessarily sit well in the object world, so this needs thought. Initially I'm thinking of using something akin to strict value-level security to define the write access to each data item, but this is going to make things slower; although maybe it would also gain benefit by allowing event generation (e.g. exceptions) based on data item modification. That way the owning module, or RT-exec, could at least know about overwrites, or about modules attempting to modify items that they simply don't own.

Also, if this works well for a real time system, then maybe it can be used to take advantage of multithreading in traditional systems, needs more thought.

Emesary: Nasal implementation for FlightGear

I’ve been recently looking at how to improve the way that the F-14 is built after starting to integrate the enhancements that Fabien has made to the radar system.

The trouble is that the F-14 is currently all very interdependent and would massively benefit from being decoupled. This is ideally suited to my standard solution to this sort of problem: Emesary.

Brief introduction to Emesary

Emesary is a simple and efficient class-based inter-object communication system that allows decoupled, disparate parts of a system to function together without knowing about each other. It allows decoupling and removal of dependencies by using notifications to cause actions or to query values.

Emesary is all about decoupling, removing dependencies and improving the structure of code. Using Emesary you can more easily define the what rather than the how. By using what is essentially an event driven system it is easy to add or remove modules, and also for extra modules to be inserted that the rest of the aircraft knows nothing about (e.g. FGCamera or the Walker).

Emesary is ideally suited to bridge the gap between programming languages in a way that is transparent. I’ve had some C++ code talking to C# (using protobuf); the beauty is that neither side needs to know how the messages are being routed, or indeed even where the messages are coming from or going to.

Emesary allows common systems to be implemented on different aircraft without dependencies. If you wanted to you could build an Arinc 429 bus using Emesary. To put this a different way: you can design away the dependencies that you get when referencing different aircraft for common systems.

Emesary is contained within emesary.nas which needs to be in $FGDATA/Nasal

Future developments


The concepts are very close between Emesary and HLA. The HLA architecture (as I understand it) will permit us to provide a good mapping between messages within an Aircraft model and messages that need to be transmitted to another federate. Emesary brings an efficient structured method for defining messages within a model so it will be a case of figuring out a translation layer to allow these messages to be sent over the wire.

So I have high hopes for what Emesary will enable models to do in the HLA environment.

Multiplayer bridge

There is a multiplayer bridge for Emesary which allows selected messages to be routed to participating aircraft over the multiplayer protocol (MP). The important thing here is that the Bridge decides which messages are routed; it enables bi-directional communications between aircraft.

This uses a string property to send the messages; incoming messages are processed on a per-model basis in each MP client. As with all MP it relies on UDP, which does not guarantee delivery, so a bridged message will remain in the MP packet for an amount of time to give connected clients a good chance to receive it; even so, delivery cannot be guaranteed.

Notification Protocols

Emesary works best when a protocol is correctly designed. By protocol I mean simply the order in which messages are transmitted and received. Notifications within an aircraft model can be modified by recipients – which provides an easy way to populate data. However this method is only suitable for notifications that will only ever be sent locally. If a notification may at a future point be sent to an external system it needs to be designed as an asynchronous request with a corresponding response notification.

The request/response protocol works exceptionally well over a distributed system, and the main benefit of the Emesary way of doing this is that the notifications simply appear at the usual place (within the Receive method). The object does not need to know where the notification has come from, and nothing in the system (apart from the Emesary transmitter) needs to know where to send notifications to.
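A minimal sketch of the request/response convention (illustrative Python; the notification names are invented): the requester never learns where the answer came from - it just sees a response arrive at its normal receive path.

```python
# Sketch of an asynchronous request/response protocol over a
# notification bus. FuelStateRequest/Response are invented names.
class Bus:
    def __init__(self):
        self.recipients = []
    def register(self, recipient):
        self.recipients.append(recipient)
    def notify_all(self, notification):
        for receive in self.recipients:
            receive(notification)

bus = Bus()
answers = []

def fuel_system(n):
    # responder: could equally live across a bridge in another process
    if n["type"] == "FuelStateRequest":
        bus.notify_all({"type": "FuelStateResponse", "kg": 4200})

def fuel_gauge(n):
    # requester: does not mutate the request in place, just waits
    # for the corresponding response notification
    if n["type"] == "FuelStateResponse":
        answers.append(n["kg"])

bus.register(fuel_system)
bus.register(fuel_gauge)
```

Because neither side holds a reference to the other, the responder can later be moved behind a bridge without touching the requester.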

Worked Example

Often I find that a worked example will illustrate something far better than all of the wordy reference sections. So this is what I’ve done.

Automatic Carrier Landing System (ACLS)

The US Navy use something like the AN/SPN-46(V) Automatic Carrier Landing System (ACLS). This is a precision approach landing system (PALS) which provides electronic guidance to carrier-based aircraft and allows them to land in all-weather conditions with no limitations due to low ceiling or restricted visibility.

This is basically a set of radars and radio transmitters that allow aircraft equipped with the AN/ARA-63 receiver group to perform an automated landing. Currently this is implemented in the F-14 as a solely aircraft side system that uses the tuned TACAN channel to locate the carrier and performs very simplistic calculations to provide information that allows a precision carrier landing to be performed.

The problem with ACLS and Carriers in FlightGear

The basic problem is that there are (or can be) many carriers within the AI system, and the aircraft needs to be detected by each of them as it approaches. One solution would be for the aircraft to scan the property tree to locate suitable carriers; however this introduces a direct dependency between the aircraft and known carriers. It would also mean that it would not be possible to have a ground based installation of an AN/SPN-46.

If we use a scan of the AI tree to find the AN/SPN-4x, it means that any new AI scenario could well require all aircraft to be modified. This could be avoided by a consistent design and use of the property tree, but the way I’m going to solve the problem with Emesary is more elegant and will work for any future scenarios.

In the F-14 up to V1.3 this problem has been solved in a rather inelegant way by having to tune the Carrier’s TACAN channel, and that way the aircraft can display appropriate data.

The real solution to this problem is to mimic what happens in the real world: make it possible for each of the carriers to transmit and detect the aircraft. This is explained in the next section, but it’s basically three messages that flow: (msg1) Carrier->Aircraft, which responds with (msg2) ->Carrier, which responds with (msg3) ->Aircraft.

The AN/SPN-46 system will decide if an aircraft is in range, however the aircraft will decide if it is tuned into the right channel. An aircraft that isn’t tuned might be visible on a display on the carrier (if we had such a thing), but no guidance would be received back at the aircraft.

AN/SPN 46 implementation

There is a new module, Aircraft/Generic/an_spn_46.nas, which is a standard implementation of the device; so all that’s needed is for the Vinson.xml model file to instantiate one of these correctly connected to the model and register it with the GlobalTransmitter; then all aircraft that fly within the correct range will be able to use the AN/SPN-46.

Once instantiated in the carrier, the AN/SPN 46 system will send out an ANSPN46ActiveNotification at regular intervals. This will be received by any aircraft registered with emesary.GlobalTransmitter. When the aircraft receives this notification it should respond with an ANSPN46ActiveResponseNotification, which indicates to the AN/SPN-46 system whether or not the aircraft is tuned in, together with the aircraft position, heading and forwards velocity in feet per second. If the aircraft is tuned to the right channel (notification.IsTuned = true) then an ANSPN46CommunicationNotification response will be sent, again via the GlobalTransmitter, which the aircraft can use to drive the appropriate instruments.

Emesary ACLS Message flow

Adding an AN/SPN-46 to a Carrier model

The AI Carrier model needs to load this inside the model XML. It is as simple as adding the Nasal that follows. Things to note: a timer is used, but the system itself suggests the frequency; this is because the system will slow down to 0.1Hz (i.e. every 10 seconds) when no aircraft are within range.

            var self = cmdarg();
            print("Model load Nimitz ", self.getPath());
            var fn_net = getprop("/sim/fg-root") ~ "/Aircraft/Generic/an_spn_46.nas";
            io.load_nasal(fn_net, "an_spn_46");
            var anspn ="Nimitz", self); # constructor call truncated in the original listing
            var an_spn_46_timer = maketimer(6, func {
                # timer body truncated in the original listing
            });
            an_spn_46_timer.start();

            # in the model's unload block:
            print("UNLOAD Nimitz ", self.getPath());

Adding an ARA-63 receiver to an aircraft

Again this is really simple; most of the work is performed by the Carrier System.

This section responds to messages and drives the following lights and the flight director bars.

# AN/SPN 46 transmits - this receives.
var ARA63Recipient =
{
    new: func(_ident)
    {
        var new_class =;
        new_class.ansn46_expiry = 0;
        new_class.Receive = func(notification)
        {
            if (notification.Type == "ANSPN46ActiveNotification")
            {
                print(" :: Recvd lat=",, " lon=", notification.Position.lon(), " alt=", notification.Position.alt(), " chan=", notification.Channel);
                var response_msg = me.Response.Respond(notification);
                # We cannot decide if in range as it is for the AN/SPN system to decide if we are within range.
                # However we will tell the AN/SPN system if we are tuned (and powered on).
                if (notification.Channel == getprop("sim/model/f-14b/controls/electrics/ara-63-channel")
                    and getprop("sim/model/f-14b/controls/electrics/ara-63-power-off") == 0)
                {
                    response_msg.Tuned = 1;
                }
                else
                {
                    response_msg.Tuned = 0;
                }
                # normalised value based on RCS beam power etc.
                # we could do this using a factor.
                response_msg.RadarReturnStrength = 1; # possibly response_msg.RadarReturnStrength*RCS_FACTOR
                return emesary.Transmitter.ReceiptStatus_OK;
            }
            # we will only receive one of these messages when within range of the carrier
            # (and when the ARA-63 is powered up and has the correct channel set)
            else if (notification.Type == "ANSPN46CommunicationNotification")
            {
                me.ansn46_expiry = getprop("/sim/time/elapsed-sec") + 10;
                print("rcvd ANSPN46CommunicationNotification =", notification.InRange, " dev=", notification.LateralDeviation, ",", notification.VerticalDeviation, " dist=", notification.Distance);
                # Use the standard civilian ILS if it is closer.
                if (getprop("instrumentation/nav/gs-in-range") and getprop("instrumentation/nav/gs-distance") < notification.Distance)
                {
                    return emesary.Transmitter.ReceiptStatus_OK;
                }
                else if (notification.InRange)
                {
                    setprop("sim/model/f-14b/instrumentation/nav/gs-in-range", 1);
                    setprop("sim/model/f-14b/instrumentation/nav/gs-distance", notification.Distance);
                    # Set these lights on when in range and within altitude.
                    # The lights come on but it is unspecified when they go off.
                    # Ref: F-14AAD-1 Figure 17-4, p17-11 (pdf p685)
                    if (notification.Distance < 11000)
                    {
                        if (notification.ReturnPosition.alt() > 300 and notification.ReturnPosition.alt() < 425 and abs(notification.LateralDeviation) < 1)
                        {
                            setprop("sim/model/f-14b/lights/acl-ready-light", 1);
                        }
                        if (notification.Distance > 8000) # extinguish at roughly 4.5nm from fix.
                        {
                            setprop("sim/model/f-14b/lights/landing-chk-light", 1);
                        }
                        else
                        {
                            setprop("sim/model/f-14b/lights/landing-chk-light", 0);
                        }
                    }
                }
                else
                {
                    # Not in range so turn it all off.
                    # NOTE: Currently this will never be called as the AN/SPN-46 system will not notify us when we are not in range.
                    #       It is implemented here for completeness and to do the correct thing if the implementation changes.
                    setprop("sim/model/f-14b/instrumentation/nav/gs-in-range", 0);
                    setprop("sim/model/f-14b/instrumentation/nav/gs-distance", -1000000);
                    setprop("sim/model/f-14b/lights/landing-chk-light", 0);
                    setprop("sim/model/f-14b/lights/acl-ready-light", 0);
                }
                return emesary.Transmitter.ReceiptStatus_OK;
            }
            return emesary.Transmitter.ReceiptStatus_NotProcessed;
        };
        new_class.Response ="ARA-63"); # response helper class; name truncated in the original listing
        return new_class;
    },
};

Having made the class we now need to insert the following code to actually instantiate the receiver.

# Instantiate ARA 63 receiver. This will work when approaching any
# carrier that has an active AN/SPN-46 transmitting.
# The ARA-63 is a Precision Approach Landing system that is fitted to all US
# carriers.
var ara63 ="ARA-63");
emesary.GlobalTransmitter.Register(ara63); # register so the receiver gets notifications

Lastly there needs to be a way to turn everything off when the carrier is detuned or out of range. This requires a separate method, called from the update loop, as follows.

# Update the ARA-63; this does two things - firstly to extinguish the
# lights if the validity period expires, and secondly to use the civilian ILS
# if present. This needs to be called by the main aircraft loop.
# NOTE: this is necessary because by design the AN/SPN-46 does not transmit
# to receivers that aren't tuned, or when out of range, so a method to reset
# indications is needed.
var ara_63_update = func
{
    # do not do anything whilst the AN/SPN 46 is within expiry time.
    if (getprop("/sim/time/elapsed-sec") < ara63.ansn46_expiry)
    {
        return;
    }

    # Out of range so set everything off
    setprop("sim/model/f-14b/lights/landing-chk-light", 0);
    setprop("sim/model/f-14b/lights/acl-ready-light", 0);

    # Ascertain if the civilian ILS is within range and use it if it is. This isn't as per
    # the aircraft but IMHO it is reasonable to have this.
    # You will need to tune to the appropriate TACAN channel to get the ILS.
    if (getprop("instrumentation/nav/gs-in-range") != nil)
    {
        setprop("sim/model/f-14b/instrumentation/nav/gs-in-range", getprop("instrumentation/nav/gs-in-range"));
        setprop("sim/model/f-14b/instrumentation/nav/gs-distance", getprop("instrumentation/nav/gs-distance"));
    }
}

Emesary Reference

This is the reference and concepts behind Emesary. Useful to read and understand.

Core concepts

Emesary has Transmitters, Recipients and Notifications.

Transmitters send out Notifications to all of the Recipients that are registered with them. A recipient only needs to implement the Receive method and register itself with the transmitter to receive messages.

By convention all recipients must return Transmitter.ReceiptStatus_NotProcessed when they do not process a message.
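As a minimal illustration of these three concepts, here is a Python sketch of the pattern (not the shipped C# or Nasal code; status values are plain strings here where real implementations use constants):

```python
# Minimal sketch of Emesary's three concepts.
OK, FAIL, NOT_PROCESSED, FINISHED, ABORT = "OK", "Fail", "NotProcessed", "Finished", "Abort"

class Notification:
    def __init__(self, notification_type, value):
        self.Type = notification_type   # lets recipients decide whether to process
        self.Value = value              # the most important payload field

class Transmitter:
    def __init__(self, ident):
        self.Ident = ident              # each Transmitter must have an Ident
        self.recipients = []

    def Register(self, recipient):
        self.recipients.append(recipient)

    def NotifyAll(self, notification):
        overall = NOT_PROCESSED
        for recipient in self.recipients:
            status = recipient.Receive(notification)
            if status in (FINISHED, ABORT):
                return status           # definitive: notify no-one else
            if status == FAIL:
                overall = FAIL
            elif status == OK and overall == NOT_PROCESSED:
                overall = OK
        return overall

class CountingRecipient:
    """By convention: return NOT_PROCESSED for anything we don't handle."""
    def __init__(self):
        self.count = 0
    def Receive(self, notification):
        if notification.Type == "Tick":
            self.count += 1
            return OK
        return NOT_PROCESSED
```

Everything that follows in this reference is a variation on these few lines.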


A transmitter, usually emesary.GlobalTransmitter, sends notifications. Each Transmitter must have an Ident.


A recipient uses emesary.Recipient as a base class and must provide a Receive method


A notification must have at least a Type and a Value.

The Type is used to identify the message type and to allow recipients to decide if they can process the notification.

The Value is the most important part of the Notification. There can be other properties in the Notification, but the Value is the most important; and because it is common to all notifications, it is the Value that can be made visible in a debug recipient.

Emesary in NASAL

Nasal doesn’t have the rigid structure that you’d find in other object oriented languages - so no interfaces etc. This actually makes the implementation smaller, but the core concepts remain the same.

Creating a Transmitter

You can have as many transmitters as you like; however it is usually sufficient to have a single transmitter for all of the Nasal Modules.

For simplicity Emesary instantiates a Transmitter called GlobalTransmitter, referenced via emesary.GlobalTransmitter. The GlobalTransmitter should be used for most implementations, except where there are specialised requirements.

To create a Transmitter you simply do

    var MyTransmitter ="MyTransmitter");

Connecting to a Transmitter

There are two ways to connect to a Transmitter; if you have a Nasal class you can implement a Receive method and register with a Transmitter, or you can add a class to your Nasal module that will receive notifications and act upon them.

Connecting to a class

In both of these examples the Recipient is registered with GlobalTransmitter at the time of creation. This is usually what you would want to do; however you don’t need to register during the new method, and you can also choose to instantiate your own Transmitter and register with that. Generally though it is better to use the GlobalTransmitter, and only create a new Transmitter for a good reason.

If you don’t already have a base class then create your new class like this

    new: func(_ident)
    {
        var new_class =;
        new_class.Receive = func(notification)
        {
            if (notification.Type == "SomeNotificationType")
            {
                me.count = me.count + 1;
                return emesary.Transmitter.ReceiptStatus_OK;
            }
            return emesary.Transmitter.ReceiptStatus_NotProcessed;
        };
        # the rest of the construction
        emesary.GlobalTransmitter.Register(new_class); # register at creation time
        return new_class;
    },

If you already have a base class then use the Recipient construct method to subclass and implement the Receive method.

    new: func(_ident)
    {
        var obj = { parents : [SomeClass] };
        emesary.Recipient.construct(_ident, obj);
        obj.Receive = func(notification)
        {
            if (notification.Type == "SomeNotificationType")
            {
                # Do some work
                return emesary.Transmitter.ReceiptStatus_OK;
            }
            return emesary.Transmitter.ReceiptStatus_NotProcessed;
        };
        emesary.GlobalTransmitter.Register(obj); # register at creation time
        return obj;
    },

Adding a recipient to a module

If you’ve got a module (filename.nas) that just has methods and data then

var MyRecipient =
{
    new: func(_ident)
    {
        var new_class =;
        new_class.count = 0;
        new_class.Receive = func(notification)
        {
            if (notification.Type == "SomeNotificationType")
            {
                # Do some work
                return emesary.Transmitter.ReceiptStatus_OK;
            }
            return emesary.Transmitter.ReceiptStatus_NotProcessed;
        };
        return new_class;
    },
};


The ReceiptStatus return value from the Receive method is an important part of the way that Emesary works.

The rules are

  1. When a notification is not processed by your Receive method you must return emesary.Transmitter.ReceiptStatus_NotProcessed
  2. When a notification is processed you should return emesary.Transmitter.ReceiptStatus_OK on success, or emesary.Transmitter.ReceiptStatus_Fail on failure
  3. When you have definitively processed a notification you can return emesary.Transmitter.ReceiptStatus_Finished. A definitive process return will result in no more Recipients being notified.
  4. When you have definitively processed a notification as not possible you can return emesary.Transmitter.ReceiptStatus_Abort. A definitive process return will result in no more Recipients being notified.

A definitive process return will result in no more Recipients being notified. An example of this is when the notification is a request to show the user a message. The message only needs to be shown once so returning emesary.Transmitter.ReceiptStatus_Finished tells Emesary to notify no more recipients.

An example of emesary.Transmitter.ReceiptStatus_Abort would be a request that cannot be fulfilled, for example raising the landing gear whilst on the ground. If you ensure that the first Recipient added to the Transmitter returns Abort when on the ground then it will avoid the need to check this condition in any other Recipients.
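That gatekeeper idea can be sketched directly (illustrative Python; the weight-on-wheels names are invented for the example):

```python
# Sketch of the Abort gatekeeper: register the weight-on-wheels check
# first and no later recipient ever sees a gear-up request on the ground.
OK, NOT_PROCESSED, FINISHED, ABORT = "OK", "NotProcessed", "Finished", "Abort"

def notify_all(recipients, notification):
    for receive in recipients:
        status = receive(notification)
        if status in (FINISHED, ABORT):
            return status               # definitive: stop notifying
    return OK

state = {"on_ground": True, "gear_raised": False}

def wow_guard(n):
    if n == "GearUp" and state["on_ground"]:
        return ABORT                    # request cannot be fulfilled
    return NOT_PROCESSED

def gear_actuator(n):
    if n == "GearUp":
        state["gear_raised"] = True
        return OK
    return NOT_PROCESSED

recipients = [wow_guard, gear_actuator]  # guard registered first
```

While on the ground the actuator never runs; the condition is checked in exactly one place instead of in every recipient.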

Send a notification

To send a Notification, simply construct a new Notification (or modify the parameters of an already constructed notification to avoid GC) and call GlobalTransmitter.NotifyAll(notification).

You can inspect the return value of NotifyAll to decide what action to take.

NotifyAll return value

Notify all will return a ReceiptStatus that indicates the overall completion status. This is based on the return value from all recipients.

  1. If no Recipient processes the message the status will be emesary.Transmitter.ReceiptStatus_NotProcessed
  2. If all Recipients return OK the status will be emesary.Transmitter.ReceiptStatus_OK
  3. If at least one Recipient returns Fail the status will be emesary.Transmitter.ReceiptStatus_Fail

The idea behind the return value of NotifyAll is to allow the sender to take an appropriate action: if the message wasn’t processed at all then maybe show a message; equally, if the return value was Fail then a different message could be shown, or maybe a direct action taken.

Example of handling not processed return

    var landing_gear_pos = switch.getValue();
    var lg_switch ="LandingGearSwitch", landing_gear_pos); # notification type name is illustrative
    if (emesary.GlobalTransmitter.NotifyAll(lg_switch) == emesary.Transmitter.ReceiptStatus_NotProcessed)
    {
        # Not processed so do it ourselves
        setprop("/controls/gear/gear-down", lg_switch.Value);
    }

Example of handling fail return

    var landing_gear_pos = switch.getValue();
    var lg_switch ="LandingGearSwitch", landing_gear_pos); # notification type name is illustrative
    if (emesary.Transmitter.IsFail(emesary.GlobalTransmitter.NotifyAll(lg_switch)))
    {
        displayMessage("Landing gear cannot be moved");
    }

Difference between Emesary and Listeners

Listeners can be used to achieve the same effect as Emesary, except that Emesary gives you more control over how the message is handled and is less resource intensive. If you have multiple listeners on a property, for example, it may be necessary to duplicate similar logic in each listener.

The other benefit is decoupling. You can have a standard module that performs certain processing when it receives a notification, but often the property that fires this differs between aircraft models. So in this case you can set a listener that will send a notification via the GlobalTransmitter and the standard module will perform the actions without having to know the property that initiates the action.

Using Notifications to return information

It is possible and often very useful to add extra elements into a Notification that can be filled in by the Recipient. An example of this could be getting the current list of available radar returns for TCAS.

    var RadarReturn = {
        new: func(_callsign, _coord, _hasradar) {
            var new_class = { parents: [RadarReturn] };
            new_class.Callsign = _callsign;
            new_class.Position = _coord;
            new_class.HasRadar = _hasradar;
            return new_class;
        },
    };

    var RadarReturnsNotification = {
        new: func(_value) {
            var new_class = emesary.Notification.new("RadarReturnsNotification", _value);
            new_class.Returns = [];
            return new_class;
        },
        AddReturn: func(radar_return) {
            append(me.Returns, radar_return);
        },
    };

and in the Radar processing code

    new_class.Receive = func(notification) {
        if (notification.Type == "RadarReturnsNotification") {
            foreach (rr; returns_list) {
                notification.AddReturn(rr);
                # or possibly construct the return here:
                # notification.AddReturn(, rr.get_position(), rr.get_has_radar()));
            }
            # Do some work
            return emesary.Transmitter.ReceiptStatus_Finished;
        }
        return emesary.Transmitter.ReceiptStatus_NotProcessed;
    };

Emesary as a scheduler

Bear in mind that Nasal is not suited to anything related to flight dynamics, and what I’m presenting here as a scheduler is not intended to be used to implement complex dynamics, or at least not until HLA makes it possible to run Nasal code at the same rate as the flight dynamics.

Usually you will have at least one timer inside the Nasal for any given aircraft to perform frequent processing. Sometimes more than one timer is necessary for different rates, and sometimes this will introduce extra dependencies.

Using Emesary you can set a timer that will send out a Notification that will allow all of the registered recipients to perform their required processing. You can obviously send out a less frequent message (for example in only 1 frame out of 4, or every second) for less important processing. The important thing here is that the sending of the notification and therefore the update frequency is only in one place.
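The rate-division idea can be sketched in Python (illustrative only; the module names and the 1-in-4 split are examples, not part of Emesary):

```python
# One loop sends a frame notification; each module picks the frames it cares about.
class FrameNotification:
    def __init__(self):
        self.type = "FrameNotification"
        self.frame_count = 0

def make_every_nth_recipient(n, log, name):
    # Run only on one frame out of every n -- the rate division lives in one place.
    def receive(notification):
        if notification.type == "FrameNotification" and notification.frame_count % n == 0:
            log.append(name)
    return receive

log = []
recipients = [
    make_every_nth_recipient(1, log, "fcs"),      # every frame
    make_every_nth_recipient(4, log, "engines"),  # 1 frame in 4
]

frame = FrameNotification()
for i in range(8):            # simulate 8 timer ticks
    frame.frame_count = i
    for r in recipients:
        r(frame)

assert log.count("fcs") == 8
assert log.count("engines") == 2   # frames 0 and 4
```

Because only the sender owns the timer, changing the overall update rate never touches the individual modules.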

So the updateFCS method that usually calls lots of dependent modules can be replaced by the following code.

    var FrameNotification = {
        new: func(_rate) {
            var new_class = emesary.Notification.new("FrameNotification", _rate);
            new_class.Rate = _rate;
            new_class.FrameRate = 60;
            new_class.FrameCount = 0;
            new_class.ElapsedSeconds = 0;
            return new_class;
        },
    };
    var frameNotification =;
    var rtExec_loop = func {
        var frame_rate = getprop("/sim/frame-rate");
        var elapsed_seconds = getprop("/sim/time/elapsed-sec");
        # you can put commonly accessed properties inside the message to improve performance.
        frameNotification.FrameRate = frame_rate;
        frameNotification.ElapsedSeconds = elapsed_seconds;
        frameNotification.CurrentIAS = getprop("velocities/airspeed-kt");
        frameNotification.CurrentMach = getprop("velocities/mach");
        frameNotification.CurrentAlt = getprop("position/altitude-ft");
        frameNotification.wow = getprop("gear/gear[1]/wow") or getprop("gear/gear[2]/wow");
        frameNotification.Alpha = getprop("orientation/alpha-indicated-deg");
        frameNotification.Throttle = getprop("controls/engines/engine/throttle");
        frameNotification.e_trim = getprop("controls/flight/elevator-trim");
        frameNotification.deltaT = getprop("sim/time/delta-sec");
        frameNotification.current_aileron = getprop("surface-positions/left-aileron-pos-norm");
        frameNotification.currentG = getprop("accelerations/pilot-gdamped");
        if (frameNotification.FrameCount >= 4)
            frameNotification.FrameCount = 0;
        # send the notification to all registered recipients
        emesary.GlobalTransmitter.NotifyAll(frameNotification);
        frameNotification.FrameCount = frameNotification.FrameCount + 1;
        settimer(rtExec_loop, 0);
    };
    settimer(rtExec_loop, 1);

Then inside each module have a recipient that will receive the appropriate notification and perform the required processing.

For example inside our engines.nas we could do this

    var enginesRecipient = emesary.Recipient.new("Engines");
    enginesRecipient.Receive = func(notification) {
        if (notification.Type == "FrameNotification" and notification.FrameCount == 2) {
            #print("recv: ", notification.Type, " ", notification.ElapsedSeconds);
            if (APCengaged.getBoolValue()) {
                if (notification.wow or !getprop("engines/engine[0]/running") or !getprop("engines/engine[1]/running")) {
                    # (autopilot disengage logic elided in the original)
                }
            }
            return emesary.Transmitter.ReceiptStatus_OK;
        }
        return emesary.Transmitter.ReceiptStatus_NotProcessed;
    };


The most common errors are:

  1. Not registering your recipient with the transmitter that is sending the notifications (probably the most common mistake).
  2. Misspelling the notification type.
  3. Not returning the appropriate receipt status.
  4. Not sending the right message.
  5. Another recipient returning Finished or Aborted when it shouldn't.


Debugging is easy; just add a recipient that will print out the contents of the message.

    var debugRecipient = emesary.Recipient.new("Debug");
    debugRecipient.Receive = func(notification) {
        if (notification.Type != "FrameNotification")
            print("recv: ", notification.Type, " ", notification.Value);
        return emesary.Transmitter.ReceiptStatus_NotProcessed; # we're not processing it, just looking
    };

Other Notes

  • an_spn_46.nas (12.91 KB)
  • emesary.nas (6.28 KB)
  • emesary-tests.nas (5.39 KB)
  • eisenhower.xml (59.33 KB)
  • nimitz.xml (74.33 KB)
  • F-14-ARA-63-implementation.nas (7.25 KB)

Emesary: Multiplayer bridge for FlightGear


The multiplayer bridge allows notifications to be routed over MP. The model creates an incoming bridge specifying the notifications that are to be received, and the bridge will then deliver matching messages from multiplayer models.

The elegance of the bridge is that neither the sender nor the receiver need to know about each other; all notifications just appear in the recipient method where they can be handled. Each aircraft would have one (or more) recipients and just handle the incoming messages.


Normally an aircraft model will create both an incoming and outgoing bridge, although some aircraft may only wish to listen or transmit.

Creating an outgoing bridge

All you need is a list of notifications to route. This is a list of preconstructed notifications, which are necessary to access the encode/decode methods.

var routedNotifications = [];

then use the following call. The parameters are as follows:

  1. ident, used to identify the bridge
  2. list of notifications to route. Often, but not always, the same as the incoming notifications to route

By default the outgoing bridge will attach itself to emesary.GlobalTransmitter, but in more advanced usage it is possible to attach to any transmitter.

   var outgoingBridge ="F-15mp", routedNotifications);

Creating an incoming bridge

Again a list of notifications to route; you can reuse the same list for both incoming and outgoing.

var routedNotifications = [];

and the following call will cause an incoming bridge to be created for each MP aircraft when it connects. The design of the bridge requires each MP model to have its own bridge (to manage the message indices).

var incomingBridge = emesary_mp_bridge.IncomingMPBridge.startMPBridge(routedNotifications);

Outgoing routing

There are two basic types of notifications that can be routed over the bridge;

  • Distinct messages – these are messages where only the last one is important, e.g. position updates.
  • Standard messages – messages that relate to an action or event that is unique, such as jettisoning tanks. All of these messages will be routed.
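The two routing classes can be sketched in Python. This is a hedged illustration: the queue shape and the is_distinct flag are assumptions, not the bridge's actual internals.

```python
# "Distinct" notifications are coalesced so only the latest pending one survives;
# standard notifications are all queued and all routed.
def queue_outgoing(queue, notification):
    kind, payload, is_distinct = notification
    if is_distinct:
        # Replace any earlier pending notification of the same kind.
        queue[:] = [n for n in queue if n[0] != kind]
    queue.append(notification)

queue = []
queue_outgoing(queue, ("position", (1.0, 2.0), True))
queue_outgoing(queue, ("position", (1.1, 2.1), True))    # supersedes the first
queue_outgoing(queue, ("jettison-tank", "left", False))
queue_outgoing(queue, ("jettison-tank", "right", False)) # both events kept

assert [n[1] for n in queue if n[0] == "position"] == [(1.1, 2.1)]
assert len([n for n in queue if n[0] == "jettison-tank"]) == 2
```

Coalescing distinct messages keeps the MP packet small without losing any unique events.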

A model that wishes to route notifications will need to create an outgoing bridge using the method shown above.

Incoming routing

Each model specifies a list of messages that it is interested in and capable of handling to the method emesary_mp_bridge.IncomingMPBridge.startMPBridge


The nasal module that declares notifications to be routed over a bridge needs to be included in both the outgoing and the incoming model.

A notification needs to specify methods to encode and decode the parts of the notification that need to be sent over the bridge. The bridge will itself take care of calling these methods at the right time.

The properties will be packed up and sent over by the bridge using the specified multiplayer generic string. There is a limit on the size of multiplayer packets that can be transmitted, so it is important to send only the absolute minimum: firstly by good design, and secondly by choosing the right encoding (bytes are best).
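To make the "bytes are best" point concrete, here is a Python sketch of packing a notification into a compact string. The field layout is invented for illustration; the real bridge defines its own encode/decode methods per notification.

```python
import struct, base64

def encode(kind: int, value: float) -> str:
    # One byte for the kind plus a 4-byte float keeps the packet small (5 bytes).
    return base64.b64encode(struct.pack("<Bf", kind, value)).decode("ascii")

def decode(payload: str):
    kind, value = struct.unpack("<Bf", base64.b64decode(payload))
    return kind, value

kind, value = decode(encode(7, 1.5))
assert kind == 7
assert abs(value - 1.5) < 1e-6
```

Sending the same data as formatted text would cost several times as many bytes of the MP string budget.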

Standard GeoEventNotification

There is a new standard notification that is part of the distribution to notify of something that happens at a specific geographic location. This could be many things (there is a list inside Nasal\notification.nas). An example could be a cargo drop using the 1000kg classification. Each event sent will be one of the following, Created, Moved, Deleted. Upon receipt the model can perform the appropriate action.
I’ve got a working example of (what I’ve called) a GeoEventNotification. This is basically something that happened at a position.

So to notify of droptanks

   # the first (position) argument is assumed here; the original elided it
   var m =, "DT-610", 1, 48 + pylon_index);

This will usually be received by the player’s aircraft over MP. The magic numbers in the above are documented inside notification.nas

Handling bridged messages

Each bridged message is received in the normal way, usually via the GlobalTransmitter; the only difference is that bridged messages are marked as such, so that an outgoing bridge knows not to route them out again.
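The re-routing guard can be shown in a few lines of Python (the attribute name is_bridged is an assumption for illustration):

```python
# The incoming bridge marks a notification as bridged; the outgoing bridge
# skips anything so marked, preventing an MP echo loop.
class Notification:
    def __init__(self, kind):
        self.kind = kind
        self.is_bridged = False

outgoing_log = []

def outgoing_bridge(notification):
    if notification.is_bridged:
        return                      # came in over MP; don't send it out again
    outgoing_log.append(notification.kind)

local = Notification("jettison")
outgoing_bridge(local)              # locally generated: routed out

remote = Notification("jettison")
remote.is_bridged = True            # delivered by the incoming bridge
outgoing_bridge(remote)             # not re-routed

assert outgoing_log == ["jettison"]
```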

Example recipient

var EmesaryRecipient = {
    new: func(_ident) {
        var new_class = emesary.Recipient.new(_ident);
        new_class.ansn46_expiry = 0;
        new_class.Receive = func(notification) {
            if (notification.Type == "GeoEventNotification") {
                print("received GeoNotification from ", notification.Callsign);
                print("  pos=",, ",", notification.Position.lon(), ",", notification.Position.alt());
                print("  kind=", notification.Kind, " skind=", notification.SecondaryKind);
                if (notification.Kind == 1) { # created
                    if (notification.SecondaryKind >= 48 and notification.SecondaryKind <= 63) {
                        # TBD: animate drop tanks
                    }
                }
                return emesary.Transmitter.ReceiptStatus_OK;
            }
            return emesary.Transmitter.ReceiptStatus_NotProcessed;
        };
        # response notification type assumed from an_spn_46.nas
        new_class.Response ="ARA-63");
        return new_class;
    },
};
# Instantiate receiver.
var recipient ="F-15-recipient");


In summary, an aircraft sends a notification and all other MP aircraft can receive it (if they want to). This makes inter-aircraft communication possible, with a nicely structured API. It's also worth noting that, because Emesary is doing all of the work, if there is a future improvement your code will not need to change to take advantage of it.

This will allow a set of standard modules to animate items such as droptanks falling, pilots ejecting, etc. Once there is a standard module for something it can simply be included into normal module load.

Emesary: PHP example of class based inter-object communication

To take a high level view, any system is built from a collection of parts - some of these parts need to perform a required job (the primary purpose) and the parts will need to interface to each other.

Taking a traditional three tier system as an example:

The database tier needs a defined API, after all accessing the database is the primary purpose.
This is where a set of classes with methods and properties is required.
During the operation of the database tier it may cause exceptions or failures to be raised. Also the database may need to communicate externally to any number of services that are not necessarily known at build time.

Building on a scheme for communicating simply between objects, I've been asked to provide an example to clarify things for those of us who read code better than we read prose.

Classifying the messages

Firstly what we will need to do is to setup a small class that really is a list of constants - this is purely to aid with marshalling messages. By grouping messages it is easy to quickly filter based on what the message recipients declare as being their interests.

class SystemIdent
{
    const Database         = 0x1000;
    const Audit            = 0x1001;
    const Exec             = 0x1002;
    const Authentication   = 0x1003;
}

Defining the message base class

By design all messages sent via the event bus need to be derived from the Message class. This allows a reasonable amount of sanity checking, and forces a structure onto the use of this system. Experiments have been made in using more anonymous messages, however the lack of embedded message types and system idents preclude the use of any intelligent routing or handling within the transmitter class.

class Message
{
    private $message_type;
    private $system_ident; // from SystemIdent class

    static $regid = 2;
    static $reg_desc = array();

    function __construct($message_type, $system_ident)
    {
        $this->system_ident = $system_ident;
        $this->message_type = $message_type;
    }

    public function get_system_ident()
    {
        return $this->system_ident;
    }

    public function get_message_type()
    {
        return $this->message_type;
    }

    static function register_message_type($id)
    {
        Message::$reg_desc[Message::$regid] = $id;
        return Message::$regid++;
    }
}

Whilst we can communicate with this message class it is more normal to derive from it to add new methods and properties that are relevant to the message being sent. Normally this is good - the only time that we need to be a little careful is to avoid embedding objects into the class - as these are hard to use with the Inter-Process bridge that I will describe later.

Receiver interface

This is the heart of the system and where the interface that all classes wishing to receive messages should implement.

interface Receiver
{
    const OK = 0;            // info
    const Fail = 1;          // if any item fails then send message may return fail
    const Abort = 2;         // stop processing this event and fail
    const Finished = 3;      // stop processing this event and return success
    const NotProcessed = 4;  // recipient didn't recognise this event

    // Returns a bitmask for the message notification types which are required. This allows a fairly coarse
    // control by the transmitters of which messages to send to which recipients. A return value of zero
    // means all types are required.
    function get_RequiredNotificationTypes();

    // main message receiver
    function receive($message);
}


The Transmitter class transmits Message-derived objects. Each instance of this class provides an event databus to which any number of receivers can attach.

Messages may be inherited and customised between individual systems.

class Transmitter
{
    private $receiver_list = array(); // contains objects implementing Receiver

    function __construct()
    {
    }

    // Registers a class to receive messages from this transmitter. A class can be registered with any number of transmitters.
    function register($receiver)
    {
        if (!in_array($receiver, $this->receiver_list))
            $this->receiver_list[] = $receiver;
    }

    // remove a receiver
    function de_register($receiver)
    {
        if (in_array($receiver, $this->receiver_list))
            unset($this->receiver_list[array_search($receiver, $this->receiver_list)]);
        return 0;
    }

    // Notifies all registered classes of the message.
    function notify_all($message)
    {
        foreach ($this->receiver_list as $key => $value)
        {
            $rv = $value->receive($message);
            switch ($rv)
            {
            case Receiver::Fail:
            case Receiver::NotProcessed:
            case Receiver::OK:
                break;

            case Receiver::Abort:
                return Receiver::Fail;

            case Receiver::Finished:
                return Receiver::OK;
            }
        }
        return Receiver::OK;
    }
}
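The get_RequiredNotificationTypes() bitmask described in the interface can be applied by the transmitter before delivery. Here is a hedged Python sketch of that coarse filter (the mask constants are illustrative; zero means "send me everything"):

```python
# Coarse pre-delivery filtering: a receiver declares a bitmask of interests,
# and the transmitter only delivers messages whose system bit matches.
DATABASE, AUDIT, EXEC = 0x01, 0x02, 0x04

class Receiver:
    def __init__(self, mask):
        self.mask = mask
        self.seen = []

    def required_notification_types(self):
        return self.mask

    def receive(self, system_bit, message):
        self.seen.append(message)

def notify_all(receivers, system_bit, message):
    for r in receivers:
        mask = r.required_notification_types()
        if mask == 0 or (mask & system_bit):
            r.receive(system_bit, message)

everything = Receiver(0)          # zero: all types required
audit_only = Receiver(AUDIT)

notify_all([everything, audit_only], EXEC, "exec-msg")
notify_all([everything, audit_only], AUDIT, "audit-msg")

assert everything.seen == ["exec-msg", "audit-msg"]
assert audit_only.seen == ["audit-msg"]
```

This keeps uninterested receivers out of the dispatch loop without any per-message filtering code in the receivers themselves.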


Example of usage

This is a simple example - but based on a real world problem, and one that it solves nicely.

// this is just a stub to demonstrate the concept; it is an empty shell that would
// normally be connected to a database table.
class CbfDbEntity
{
    var $fields = array();

    function __construct($table, $primary_key)
    {
    }

    function write_record()
    {
        foreach ($this->fields as $k => $v)
            echo (" $k => $v,");
        echo "\n";
    }

    function read_record()
    {
    }

    function set_field($f, $v)
    {
    }
}

// class to demonstrate a Receiver. The design of the event bus is such that a class will implement the receiver
// interface, which nicely allows the class to receive messages without disrupting its real job.
// By implementing the interface and registering with the global event bus this class will receive all messages sent
// on the bus. It is up to the class to determine the required filtering and which messages to act upon. This presents
// a nice design that is easily extended.
class Mrt extends CbfDbEntity implements Receiver
{
    var $id;

    function __construct($id)
    {
        $this->id = $id;
    }

    function get_RequiredNotificationTypes() { return 0; }

    function receive($message)
    {
        print "rcv: {$this->id} : ty:{$message->get_message_type()} sys:{$message->get_system_ident()}\n";
        if ($message->get_message_type() == AuditLogMessage::$id)
            print "AuditLog Message\n";

        return Receiver::OK;
    }
}


// A specific message - based on the Message object - but with extra fields.
class AuditLogMessage extends Message
{
    var $type, $action, $additional, $user;
    static $id = -22;

    function __construct($type, $action, $additional, $user = "")
    {
        parent::__construct(AuditLogMessage::$id, SystemIdent::Audit);

        $this->type = $type;
        $this->action = $action;
        $this->additional = $additional;
        $this->user = $user;
    }
}
AuditLogMessage::$id = Message::register_message_type("AuditLog"); // register and get a unique ID.

$eventBus = new Transmitter();

$mrt = new Mrt("id1");
$ddd = new Mrt("ddd");
// register both receivers with the event bus
$eventBus->register($mrt);
$eventBus->register($ddd);

echo "-- Test notification with two recipients\n";

$eventBus->notify_all(new Message(0, SystemIdent::Exec));

echo "\n-- Test notification with one recipient\n";
$eventBus->de_register($ddd); // leave a single receiver attached
$eventBus->notify_all(new Message(1, SystemIdent::Exec));

echo "\n-- Test notification of AuditLogMessage with one recipient\n";
$msg = new AuditLogMessage("Intrusion", "Logon", "Additional info");
$eventBus->notify_all($msg);

Event intercommunication source file download

Download source code for Event intercommunication example

Flash always ontop of page content.

I had this problem today with Flash (SWF) content appearing on top of a JTip popup. I tried changing the z-index within the CSS, but it didn't work. It took me a long time to discover what needed to be done to fix it, but basically it was quite easy:

1. Add the following parameter to the OBJECT tag:

<param name="wmode" value="transparent">

2. Add the following attribute to the EMBED tag:

wmode="transparent"

FlightGear: Using canvas in a 3d instrument


Currently the usual way of making an instrument in FlightGear is to draw a 3D model, texture it appropriately and animate the individual elements. For needles this usually means a rotation, and for drums a texture-map translation. This workflow is well understood and works really well, and what I'm presenting here isn't intended to replace it.

Canvas allows us to render a 2D element onto a 3D quad. So what we can do is to draw the instrument in SVG (using Inkscape) and then just animate the elements.

Supporting classes

I’ve written canvas_instrument.nas which provides the basic building blocks for a single instrument, together with canvas_altimeter.nas as an example. All files are attached to this post.

Implementation Steps

Step 1 – prepare the 3d Model

Create an instrument that is just the box, face and controls. It could look like this:

By convention the 2d quad for canvas must be named CanvasInstrumentFace, so the altimeter would be CanvasAltimeterFace.

The XML file that goes along with this will only have the usual lighting animations.

Step 2 – Create an SVG file

Create an SVG file. Ensure that the elements that are to be animated are in identifiable SVG elements (groups, layers) as these will need to be located within the SVG file once loaded.

This could look something like this:

Step 3 – add the Nasal classes

Modify the aircraft's -set.xml file to include canvas_instrument.nas and, if you're using this example, also canvas_altimeter.nas.

I put mine in the aircraft object like this:


Step 4 – implement your instrument animations

The canvas_altimeter.nas demonstrates how to create an instrument. Basically most of the work is performed in the base class (CanvasInstrument), and all that is needed is a few lines to determine the specifics.

CanvasInstrument constructor has the following parameters:

  1. SVG filepath
  2. Instrument Name
  3. x translation
  4. y translation.
# Subclass the canvas instrument to create an altimeter. This is how all instruments should be done.
# The update_items property is required to handle the update of items that are animated, using
# the PropertyUpdateManager class to manage this (to optimise performance)
var CanvasAltimeter = {
    new: func {
        var obj ="Models/Cockpit/Instruments/ALT.svg", "Altimeter", 0, 20);
        obj.hundreds = obj.get_element("100");
        obj.thousands = obj.get_element("1000");
        obj.tenthousands = obj.get_element("10000");
#       obj.canvas.setColorBackground(0.36, 1, 0.3, 0.00);
        obj.update_items = [
  "instrumentation/altimeter/indicated-altitude-ft", 1.0, func(alt_feet) {
                # needle/drum animation (body elided in the original)
            }),
        ];
        return obj;
    },
};
aircraft.alt =;

NOTE: we are using the PropertyUpdateManager in this class to ensure best performance.

Each element that needs animation should have its own PropertyUpdateManager added to the update_items.


This is a simple class that will call the code that it is given only when the property changes by the defined amount.

It takes the following parameters:

  1. Property name (string)
  2. amount the property must change by to cause an update
  3. func(v) that will do the update. The current value is passed into the func.
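The PropertyUpdateManager behaviour described above can be sketched in Python. The class name mirrors the article; the internals here are assumptions for illustration.

```python
# Calls the given callback only when the value has moved by at least `delta`
# since the last accepted update -- this is what keeps canvas updates cheap.
class PropertyUpdateManager:
    def __init__(self, delta, on_change):
        self.delta = delta
        self.on_change = on_change
        self.last = None

    def update(self, value):
        if self.last is None or abs(value - self.last) >= self.delta:
            self.last = value
            self.on_change(value)

calls = []
mgr = PropertyUpdateManager(1.0, calls.append)   # altitude: 1 ft threshold
for altitude in [1000.0, 1000.4, 1000.9, 1001.2, 1001.3]:
    mgr.update(altitude)

# Only the first value and the move past 1 ft trigger the callback.
assert calls == [1000.0, 1001.2]
```

Small jitter in the property never touches the canvas, which is exactly the optimisation the note above is after.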

So to add the barometric pressure drum animation we would add a second element to the update_items, for example:

       obj.update_items = [
  "instrumentation/altimeter/indicated-altitude-ft", 1.0, func(alt_feet) {
               # altitude animation (as above)
           }),
  "instrumentation/backup-altimeter/setting-inhg", 0.01, func(setting_inhg) {
               # barometric pressure drum animation
           }),
       ];
  • canvas_instrument.nas (4.4 KB)
  • canvas_altimeter.nas (1.28 KB)
  • ALT.svg (32.59 KB)
  • altimeter.xml (3.99 KB)
  • altimeter.ac (21.92 KB)

Flightgear Rembrandt deferred rendering performance


The deferred configurable pipeline has great possibilities; but poor performance.

ALS works exceptionally well and ThorstenR has done amazing stuff with it, without burning frame rate at all. Rembrandt shadows are too flickery, and a shadow map as Thorsten has added to ALS is arguably [2] a much better way than the way Rembrandt does it. What I personally like about Rembrandt is the ability to have light sources.

Long term I don’t know what’s right; and it’s not really up to me anyway; so I’m just presenting my findings.

Personally there are two reasons that I'm investigating this; the first is to figure out the ability to have post-processing effects that aren't possible with forward rendering. The second reason is the performance; it's long been recognized that a deferred pipeline is slow; I would expect a performance hit – but it seems to be disproportionate. I have a reasonable system and over the last few months I've changed graphics cards (GTX460 to R9 290); and what struck me as strange was that changing graphics cards didn't really seem to make much difference to the frame rate. Looking at the card from a GPU monitor (MSI Afterburner) I see GPU activity of around 1% – so something is obviously wrong.

The background to this work required a lot of studying of the code, configuration and notes [1].

I’m using ALS (with FG 3.5(git)) as a comparison.

To gain a reasonable baseline I used the same in-air initial position and then removed something from the deferred pipeline and measured it. All measurements were taken with the same osg debug elements on screen.

Step 1; modify default-pipeline.xml

I started by removing bloom and ambient occlusion: no difference. I then fiddled about with the texture buffers (just in case something was stalling the GPU because the buffers weren't in the right format). I'm not a glsl/gpu expert but I do understand enough of how graphics is done to know that this is often the cause. In this case it wasn't.

Next more of the stages had to go; I also removed all of the unnecessary items from the pipeline (conditions, 16bit buffers etc) just in case these were affecting the OSG processing. This made no difference.

So I’ve continued up until the point where there was almost nothing left. Referring to figure(1) below you can see that I’ve reduced the number of stages in the deferred pipeline to just 3; and yet each stage is taking much longer than the equivalent in forward rendering.

At this point I was confused; something was clearly odd and usually you can gain performance improvements by hacking out half of it but this wasn’t producing any noticeable differences.


Step 2 OSG Multithreading

I tried each of the possible options and although things improved slightly the change was consistent between both the forward and deferred pipeline. Effectively no magic solution was found here (I wasn’t expecting one after some study of the OSG documentation and lists).

Step 3 Adjust the C++

So I’ve got a cut down three stage deferred pipeline; now I need to look at the code.

As it's looking like it's not the number of stages, and the rendering pipeline is now similar to the forward one, I should be getting the same rate (I'm not). So it can't simply be the GPU – or rather it could be, but before coming back to that I first needed to remove anything that looked possibly unnecessary, such as a massive inefficiency or a thread-locking wait state.

So I removed everything that wasn't essential: conditions, accesses to the property tree, even the odd cull visitor; but nothing made a difference to the performance.

Step 4; Shaders

As the deferred pipeline uses a different set of shaders I went through each one and checked it for anything that was wrong. I wasn't expecting to find anything, partly because I don't really properly understand shaders, but it all looked ok. So I then removed the shaders completely from the pipeline (i.e. I didn't understand them so let's get rid of them totally and see what happens).

What happened was odd; I got a black screen but the performance was about the same.

Step 5; Running out of ideas.

At this point I've got a pipeline that has the bare minimum, nothing really being done in the shaders, and the C++ cut down to the bare minimum. With nothing but a black screen and the OSG stats I get that awful feeling that I'm missing something obvious and looking in the wrong place. But what else was there?

By this point I’d found sim/rendering/draw-mask so I turned everything off; which left the skydome; and there at the top of the screen was a (pretty much constant) 30fps. Refer to Figure(2) below (although the FPS is lower because of the on screen OSG statistics).

How could it be that drawing effectively nothing is giving me a 30hz frame rate? It should be 60hz because that's my monitor refresh rate. Then it hit me like a hungry tv presenter – this has to be related to vsync. Somehow it has to be.

Figure(2) Drawing nothing is taking a long time

Step 6; vsync investigations;

What I’m looking for now is something that is causing each camera to wait for vsync; as I’m 99.9% certain that this is why it’s slow.

So the first logical thing to do was to figure out if vsync was turned on; for that I had to find out what it was called in OSG and how to set it. Once I'd done this I studiously went through the code and figured out where to add it to init() – and in a "we already thought of that" way, it was already there.

So I changed the command line to have --prop:/sim/rendering/vsync-enable=false --prop:/sim/frame-rate-throttle-hz=60

The results in figure(3) show that indeed there is a wait for vsync per camera in OSG; now I’m not sure this is right and will probably ask the question in the OSG lists; but this is progress.
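The arithmetic behind the theory is simple enough to check. This sketch just states the numbers; the "cameras waiting" count is the hypothesis above, not a measured OSG quantity.

```python
# If each camera blocks on a 60 Hz vsync, frame time is a multiple of 16.67 ms
# and the effective frame rate divides 60.
VSYNC_HZ = 60.0

def effective_fps(cameras_waiting):
    frame_time = cameras_waiting * (1.0 / VSYNC_HZ)   # seconds per frame
    return 1.0 / frame_time

assert round(effective_fps(1)) == 60
assert round(effective_fps(2)) == 30   # matches the observed ~30 fps skydome
```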

Figure(3) Drawing nothing is fast


With vsync turned off I'm getting what I think is a reasonable performance. My GPU is showing way more activity and the cooling fan comes on now (before these changes GPU activity was pretty much 1%).

Rather like a poorly written murder mystery, the clues were all there: the frame time on the cameras in figures (1) and (2) is very close to 16.67ms (the vsync period). I should have realised this earlier; but I didn't.

Now I'm not totally sure if this is the right solution, or even if this will apply to other graphics cards; but for me, with shadows turned off, I'm getting 42fps (whereas before I was getting around 15fps). With shadows I get around 30fps. This is good, as 30fps is the minimum I can cope with (even though I usually have the shadows turned off except for screenshots).

Next steps (for me) with deferred rendering.

Now I’m trying to figure out how to adapt the deferred rendering pipeline so I can do post processing;


[1] Project Rembrandt (

[2] There are many arguments about forward vs. deferred rendering and deeply held views. I take the view that we should let the user choose what they like.

  • flightgear-deferred-rendering-1.jpg (162.52 KB)
  • flightgear-deferred-rendering-2.jpg (259.44 KB)
  • flightgear-deferred-rendering-3.jpg (86.36 KB)

Generic data modelling of products in the Entity Framework (EF)

Everything is composed of something, until you hit the fundamental elements of your domain. In a restaurant environment these would be what a chef calls ingredients, i.e. stuff that comes from a food processing plant or a farm. However, at the food processing plant these are the output rather than the input. I've tried to draw this below; it's not complicated once you understand that it is a model of how things are.

So model this, using a table ProductBase

Ingredients, recipes, menus, etc. are all the same. They are items that are made up from other items, until you get to items that are generally agreed to be fundamental elements, e.g. lemons, salt, pepper, beef steak, etc.

So if you have a recipe A, that contains ingredients A1, A2 etc. then these ingredients A1,A2 could be a recipe (i.e. a list of parts that are processed to produce something), or something pre-prepared. Let the data model reflect this.

From the data model's point of view you don't need to distinguish between what we call recipes, ingredients, etc.; I store all of these in a single products table, with a ParentProductID.

The parent product is used to provide containers – a list is defined by its contents.

The linked product is used to define the container element reference, i.e. a recipe has a list of ingredients, so there is an entry of type INGREDIENT which has a LinkedProductID of the actual product used.

CREATE TABLE `productbase` (
    `productid`       CHAR NULL,
    `productname`     VARCHAR NULL,
    `parentproductid` CHAR NULL,
    `linkedproductid` CHAR NULL,
    `categoryid`      CHAR NULL,
    `supplierid`      CHAR NULL,
    `type`            CHAR NULL,
    `subtype`         INT NULL DEFAULT NULL,
    `cost`            DECIMAL NULL DEFAULT NULL,
    `mcu`             CHAR NULL,
    `mcuperpack`      DECIMAL NULL DEFAULT NULL,
    `quantity`        DECIMAL NULL DEFAULT NULL
)
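The self-referencing ParentProductID/LinkedProductID structure can be walked recursively. This Python sketch uses invented example rows and an assumed cost rule (a product's cost is its own cost plus the cost of its linked children):

```python
# In-memory stand-in for productbase rows:
# (productid, parentproductid, linkedproductid, type, cost)
rows = [
    ("recipe-a", None,       None,    "RECIPE",     0.0),
    ("link-1",   "recipe-a", "lemon", "INGREDIENT", 0.0),
    ("link-2",   "recipe-a", "salt",  "INGREDIENT", 0.0),
    ("lemon",    None,       None,    "PRODUCT",    0.30),
    ("salt",     None,       None,    "PRODUCT",    0.05),
]

by_id = {r[0]: r for r in rows}

def product_cost(productid):
    row = by_id[productid]
    total = row[4]
    for child in rows:
        if child[1] == productid:             # contained in this product
            target = child[2] or child[0]     # follow the linked product if any
            total += product_cost(target)
    return total

assert abs(product_cost("recipe-a") - 0.35) < 1e-9
```

The same walk works whether a child turns out to be a fundamental product or another recipe, which is exactly the point of keeping everything in one table.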

Within the EF the type field is used to map to individual data entities thus:

Complete data model;

Taking the following Relational Database Schema

We map this to the Entity Framework using inheritance by specialization based on field contents.

How to use alpha blending on websites to create cool looking effects

Alpha blending is now possible thanks to improved browser support for PNG images with an alpha channel.

In essence the alpha channel controls the visibility and I use this together with layering images atop each other to provide texture and colour to style page elements.

About Alpha Blending

Alpha-blending is a technique that allows images to be laid on top of each other and allow parts
of other images to show through. It is rather like using an image on transparent film -
the backgrounds can show through and the colour or brightness is adjusted according to the
degree of transparency. The alpha channel is really a form of mask,
dictating what amount of information should be allowed to show through from lower-lying graphics.
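
The blending itself is just a per-channel weighted average; a minimal sketch of the arithmetic a browser performs:

```python
def blend(fg, bg, alpha):
    """Alpha-blend one colour channel: alpha=1.0 means a fully opaque
    foreground, alpha=0.0 lets the background show through completely."""
    return round(alpha * fg + (1.0 - alpha) * bg)

# A half-transparent black pixel laid over a mid-blue (#008) background
# darkens it rather than hiding it:
print([blend(0, c, 0.5) for c in (0, 0, 136)])  # [0, 0, 68]
```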

To use Alpha Blending on a web page

It is common to have transparent images where a specific colour is defined as transparent (e.g. GIF), and we've all seen and used this to allow non-rectangular images on pages; alpha blending is more subtle, as you can use it to get gradual blending.

The technique that I have developed to get the effects that you see is to use the background-image in the CSS file with alpha blended images - and to use these to style elements. For example rather than having to colour each image you can use the transparency to create styled glossy backgrounds that will take the CSS background-color of the underlying element.

Building up layers of images

Think of the site as being built up using layers of transparent images to obtain the colouring effects.

We will start with a fairly simple background that has an alpha channel - the following are all the same image; the colour styling comes from setting background-color.

Color #000 Color #008 Color #080 Color #400 Color #040 Color #004

Page background

The first item on the page is the background - as below. This is a lot lighter for most of the area than
it will appear on the finished page; the layers that sit on top of it will darken it.

The menu bar

The menubar is, of course, an unordered list - using a 120x30 alpha channel png to provide coloration. The
selected item in the list has a different background - one that isn't transparent.

Combining the alpha-blended elements to provide a heading bar

The entire area is laid out by contained `<div>` elements. Firstly we need a div with the appropriate background
colour (black in this example). This background colour is used throughout all the div layers and provides the
basic colouring.

Next we need to add the background for the page - this provides the graded colouration that you can see

On top of this we add the heading background - in this case the picture of the Château. Notice how this takes
both the background colour and the colours from the page background

Colouring the content area

To darken the area that we need to put text onto - to make it more readable - we use a fairly dark,
semi-transparent black image repeated over the whole area.

Object Model Design for a Reference Monitor

A reference monitor is an approach to implement a secure system based on access control. Any system can be depicted in terms of subjects, objects, an authorization database, an audit trail, and a reference monitor, as shown in Figure 1. The reference monitor is the control center that authenticates subjects and implements and enforces the security policy for every access to an object by a subject.

Figure 1: Reference Monitor

This is the basic design for the Reference monitor. It’s the bit in the middle that does all of the work. In an OS this is built deep inside, but it will work anywhere in any system.

Description of elements of the reference monitor

Subjects – Active entities, such as user processes, that gain access to information on behalf of people
Objects – Passive repositories of information to be protected, such as files
Authorization database – Repository for the security attributes of subjects and objects. From these attributes, the reference monitor determines what kind of access (if any) is authorized
Audit trail – Record of all security-relevant events, such as access attempts, successful or not

How the Reference Monitor Enforces Security Rules

The reference monitor enforces the security policy by authorizing the creation of subjects, by granting subjects access to objects based on the information in a dynamic authorization database, and by recording events, as necessary, in the audit trail. In an ideal system, the reference monitor must meet three requirements: it must be tamperproof, it must always be invoked for every access, and it must be small enough to be subject to analysis and verification.
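
A minimal sketch of that control flow in Python (class and method names are invented, not from any real implementation):

```python
class ReferenceMonitor:
    """Minimal sketch: subjects and objects are plain strings, the
    authorization database is a set of (subject, object, mode) triples,
    and the audit trail records every attempt - granted or not."""

    def __init__(self):
        self.authdb = set()       # authorization database
        self.audit_trail = []     # record of all security-relevant events

    def authorize(self, subject, obj, mode):
        self.authdb.add((subject, obj, mode))

    def access(self, subject, obj, mode):
        granted = (subject, obj, mode) in self.authdb
        self.audit_trail.append((subject, obj, mode, granted))
        return granted

rm = ReferenceMonitor()
rm.authorize("alice", "payroll.db", "read")
print(rm.access("alice", "payroll.db", "read"))    # True
print(rm.access("mallory", "payroll.db", "read"))  # False
print(len(rm.audit_trail))                         # 2 - failures are logged too
```

Every access goes through one choke point, which is what makes the policy enforceable and auditable.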

Reference Monitor UML Object Model Diagram

Reference Monitor DB schema diagram

Background Reading

  1. Reference Monitor by Trent Jaeger
  2. OpenVMS Security Model, Chapter 2
SQL to create MySQL schema for reference monitor database.

Parameter Enumeration Tables

Usually when designing a database there are a few fields in various tables that define things such as type, category, group. Normally these would be sufficiently important to justify their own entity within the database – so for example you’d have a category table. This is a standard approach and makes for a flexible system.

Sometimes this approach isn’t sufficient, and the Entity-attribute-value model is a good choice when, and only when, you have lots of possible values that could be associated with a single entity record.

I’m a big fan of the first approach, it’s simple to understand, to draw and to implement.

However on a system that I inherited there were a number of tables containing lots of fields that were enums. I knew this was wrong, and started off converting to the standard approach, however it soon became clear that this was going to result in a lot of new tables as the parameters did not fall into convenient groups (such as gender, colour, status). However each entity would almost always have a full set of these, so the EAV was not the right approach.

So, I designed another approach. Upon reflection it’s close to EAV, but much more suited to my case.

Parameter Enumeration Tables (PET)

I wanted something that would allow

So after some head scratching I came up with the following.

CREATE TABLE `params`(
    `id`         int(11) NOT NULL,         /* Unique ID used to reference the field in the entity */
    `entity`     varchar(30),              /* entity (table) */
    `field`      varchar(30),              /* table field */
    `value`      varchar(30),              /* value */
    `symbol`     varchar(100) default '',  /* DEFINE for use in code - not to be changed lightly */
    `definition` varchar(100) default '0', /* Can be anything - but usually text */
    `validate`   int(2) default 1,         /* Enforce validation */
    PRIMARY KEY  (`id`)
);

Parameter Enumeration Tables sample schema

The following is a real example of how I used this concept to provide a booking and ordering system, including airport transfers.

The simple approach is often the best

It may seem a pretty obvious solution, and that’s because it is simple. Equally I’m sure it’s not revolutionary and isn’t going to win me an award for excellent design skills. However it does free up code from a lot of binding and logic that belongs elsewhere, especially with the `Params Tuples` (see below).

Example table contents

The following defines the permitted values for the table `transaction` field `type`. The last entry is the PHP define.

INSERT INTO `params` (`id`, `entity`, `field`, `value`, `symbol`) VALUES (190, 'transaction', 'type', 'Order',       'TRANSACTION_TYPE_ORDER');
INSERT INTO `params` (`id`, `entity`, `field`, `value`, `symbol`) VALUES (191, 'transaction', 'type', 'Payment',     'TRANSACTION_TYPE_PAYMENT');
INSERT INTO `params` (`id`, `entity`, `field`, `value`, `symbol`) VALUES (192, 'transaction', 'type', 'Refund',      'TRANSACTION_TYPE_REFUND');
INSERT INTO `params` (`id`, `entity`, `field`, `value`, `symbol`) VALUES (193, 'transaction', 'type', 'Exchange',    'TRANSACTION_TYPE_EXCHANGE');
INSERT INTO `params` (`id`, `entity`, `field`, `value`, `symbol`) VALUES (194, 'transaction', 'type', 'Reservation', 'TRANSACTION_TYPE_RESERVATION');
INSERT INTO `params` (`id`, `entity`, `field`, `value`, `symbol`) VALUES (195, 'transaction', 'type', 'Pending',     'TRANSACTION_TYPE_PENDING');

So in the database all that’s left is to add a constraint to ensure the referential integrity (see note).
alter table `transaction` add constraint `FK_transaction_type` FOREIGN KEY (`type`) REFERENCES `params`(`id`) ON DELETE CASCADE  ON UPDATE CASCADE;

NOTE: The referential integrity at the database level is flexible, as it only ensures that a valid param is used; within the client code, when the validate field is set, I validate the contents during the set method of a field. It’s a compromise but it works well.

Linking together entities with PET

In PHP the way I use it is with the static method CbfParams::setup which will define constants for all of the elements in the params table based on their definition record.

Once I’ve got all of the defines it becomes a breeze to pull out related elements for certain entities; e.g.

class CbfParamsAirportIterator extends CbfParamsIterator
    function __construct()
        parent::__construct("SELECT * FROM params WHERE id IN (SELECT params1 FROM paramstuple WHERE params2=".TRANSACTION_GROUP_TRANSFERS.")"); 

Params Tuples

This is something that was originally used to solve a requirement – to be able to ensure that a combination of parameters across tables was valid. Normally this would be inline code – but this way leaves the definition in the database which is where it should be.

CREATE TABLE `paramstuple` (
    `id`      int(11) NOT NULL auto_increment,
    `params1` INT NOT NULL,
    `params2` INT NOT NULL,
    `value`   VARCHAR(200) NOT NULL,
    PRIMARY KEY (`id`)
);

INSERT INTO `paramstuple` (`params1`, `params2`, `value`) VALUES
/* p1 is transaction, p2 is item group */
 ('10', '200', '1')
,('11', '200', '1')
,('12', '200', '1')
,('13', '200', '1');

OK, the table is unreadable and obviously some sort of lookup – but basically what it is doing is defining a value for the tuple (x, y).

I’ve used the params tuple to do things such as defining which parts of the system can be accessed by which user level, for pricing, etc.. The beauty is that it’s a really simple way of getting a lookup table with definitions into my code in a flexible way that can let things be setup properly within the database.
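
What the table is defining can be sketched as a plain mapping (values taken from the sample rows above; the helper name is invented):

```python
# The tuple table as a mapping: (params1, params2) -> value.
# From the sample rows: transaction ids 10-13 all map to item
# group 200 with value '1' (e.g. "permitted").
PARAMSTUPLE = {
    (10, 200): "1",
    (11, 200): "1",
    (12, 200): "1",
    (13, 200): "1",
}

def tuple_value(p1, p2, default=None):
    """Look up the value defined for the tuple (p1, p2)."""
    return PARAMSTUPLE.get((p1, p2), default)

print(tuple_value(10, 200))  # '1'
print(tuple_value(99, 200))  # None
```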

Code to use the Parameter Enumeration Tables

Inside my database class I have a method that will decode the numeric value to that which is defined in the params table. This has the advantage that if need be this could be translated into other languages – whereas the awful enum approach doesn’t allow this.

    function get_field_as_text($f)
        return params_decode_id($this->get_field($f));

Validation of the PET enumeration value

During entity operations (i.e. set) the database layer will automatically perform validation (providing that the params definition has the `validate` attribute set).

The param_validate function ensures that the table / field combination can contain the value passed. Again, this is another place where you can add flexibility, by adding conditions or referring to the params tuples if needed.

    function set_field($f, $v)
        if (CbfParams::is_param_field($this->table, $f))
            if(!param_validate($this->table,$f, $v))
                return cbf_error_value("EC0001","Entity: ".$this->table." - Invalid value [$v] for [$f]");
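
A sketch of the validation step in Python (an in-memory dict stands in for the params table; the function name follows the article, the rest is invented):

```python
# Permitted ids per (table, field), as would be loaded from params
# for rows where validate is set.
PARAMS = {
    ("transaction", "type"): {190, 191, 192, 193, 194, 195},
}

def param_validate(table, field, value):
    """True if (table, field) is unconstrained, or value is a permitted id."""
    allowed = PARAMS.get((table, field))
    return allowed is None or value in allowed

print(param_validate("transaction", "type", 190))  # True
print(param_validate("transaction", "type", 42))   # False
print(param_validate("customer", "name", "Bob"))   # True - no constraint
```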

Complete code for PET

The complete PET module source code is also interlinked with the dbentity; look for ‘params’ in the db entity source code.

Quick guide to designing an Enterprise Level Application.

Is it possible to explain how to build a scalable system in less than 200 pages?

The answer is realistically no - in fact I doubt that it could be done in 200 pages. What is needed is experience, and there is no substitute for it.

Firstly we need to define what makes something enterprise level, or at least enterprise class; I define it as having the following characteristics:

Follow simple rules:

1. Find out the expected number of concurrent users, then multiply this by 100 as a baseline. Or you could assume that you need to be able to handle 1 million to begin with, to make things simple.

Then bear this in mind whilst designing the basic architecture. At this point do not put in actual technologies - don't name languages, databases or servers - it channels the thinking away.

2. Draw a basic architectural diagram; make it fit onto a sheet of A3 - ideally A4, because basic architectural diagrams should be very high level, and each box on the basic diagram is going to have another sheet and so on.

At this point identify what could limit performance. Assume nothing. Invest time in building a test harness that simulates the throughput that is required, and see how the components perform. It doesn't need to be complicated - or even too close to the final design - just a representation to gain some metrics.
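
Such a harness needn't be complicated; a minimal sketch in Python (the component under test here is a stand-in for whatever needs measuring - a DB write, a request handler, etc.):

```python
import threading
import time

def component_under_test():
    # Stand-in for the real operation being measured.
    time.sleep(0.001)

def measure_throughput(n_threads=8, seconds=1.0):
    """Hammer the component from n_threads threads for a fixed period
    and report the achieved operations per second."""
    counts = [0] * n_threads
    deadline = time.monotonic() + seconds

    def worker(i):
        while time.monotonic() < deadline:
            component_under_test()
            counts[i] += 1

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(counts) / seconds

print(f"{measure_throughput():.0f} ops/sec")
```

Even a crude figure like this, obtained early, tells you whether the design is within an order of magnitude of the requirement.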

3. Guiding principles.

  • Assume nothing
  • Prototype and prove everything.

Design your database to avoid the need for locking and to avoid transaction commit failures. Be sensible about database normalisation - for example making a table contain its parent key is OK and it can increase the performance considerably. Make ownership of tables clear in the design and make it match the real world as closely as possible without needing to have too many tables. Always spend time and effort on the keys, make them as natural as possible and never, ever, allow the database to generate a unique key because generated unique keys temporarily mask the problem of a design providing unique keys where required.

4. Good Database Design

  • Avoid locks
  • Use natural keys - never DB generated unique ids
  • Keep it simple - minimise the number of tables

Identify what needs to be stored in the database and it may even be sensible to have more than one database.

So now that you have the basic architectural design consider the implementation and how each component may be required to be provided by one or more distinct servers - figure out how this will be done early on - and validate this by writing some code to test it.

Ensure that as far as possible the dependence on specific technologies is minimal - e.g. ensure that the database is abstracted by a layer, and split components into tiers and peers.

Avoid dependence on specific technologies - use layers and generalised approaches.

In many ways designing an enterprise level solution based around web servers is a little easier, with load balancers able to assist, and even without load balancers it isn't hard to share out resource provision across different servers.

Finally, and possibly most easily missed, is the importance of not allowing flights of fancy to creep into the design. Be ruthless about what is required and stick to the simplest solution - even if it is a simple background process update to create a static copy of static data. Don't be afraid of having items that aren't updated immediately - identify the processes that don't need to be instantaneous and build a simple system to manage the workload.

Be ruthless about what is required and stick to the simplest solution.

So in summary, design well, assume nothing, and performance test everything to identify bottlenecks. Use a good unit testing framework.

Sample Enterprise Application Architecture Diagram

The following diagram is based on a real live application architecture. Any item with a drop shadow is an instance of a service; this instance could be on one of many servers.


There are many protocols and transmission methods used within this architecture.

Emesary is used both within code modules and via TCP/IP and CORBA/IIOP bridges to allow communication between servers.

Random notes about what I'd do if designing a programming language

I’ve just been reading and it occurs to me that he has valid points. I’ve never really got the whole Lisp thing; apart from my emacs init.el I’ve never written any. I guess you could say that Lisp is a dark spot in my programming language mindset, so I started thinking (which is always dangerous).

If I were to try and spec a language that I’d like to be using, I’d want something that removed implicitness (e.g. C++’s pass-by-value copying of objects).

Any language provides a set of constructs for:

  • storing data
  • controlling flow
  • defining things
  • interfacing to libraries
  • performing mathematics

What I want is a language with minimal built-ins: no pre-defined datatypes, no maths. How can we do this - how do we write the compiler? The compiler becomes part of the language; to add multiplication we would use a mapping from a method to an opcode.

Item storage


Item integer
    storage 32; // always bits
    method add shortcut + takes 1 parameter
        opcode("mov.l storage, D0; add.l parameter(1), D0; mov.l D0, storage");

Item string
    integer length;
    storage dynamic;
    method construct() length=0; storage.allocate(0);
    method set(string v) length=v.length; storage.allocate(length); storage.copy(v);

Item stream
    receive message.output m { output(m.string); }
    receive message.close m { close(); }


string s("hello world");
stream op(stdout);
messagebus b;
message m(message.output, s);

Core parts of the language: lists, algorithms, etc.

The list processing thing came to me in a flash whilst explaining calculus to my son.

Working with data is what we generally do, all the time, with code. There isn’t really much else to do, apart from jumping about between points in the code.

So, what we need in our new language is completely untyped data. Yup, I know it sounds dangerous and possibly unfashionable, but it is really the only way to proceed. Years ago, whilst designing my event driven loosely coupled system, I had to come up with a way of moving data - any data - through messages. I couldn’t really have typed data, so I wrote a set of classes that effectively sat above the types and used a base class to access it, completely undoing the type checking of the language. We have a DataItem class from which are derived such things as IntegerDataItem and StringDataItem (etc.).
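
The idea can be sketched in Python (class names mirror the article's; the methods are invented for illustration):

```python
class DataItem:
    """Base class through which message payloads are accessed,
    hiding the concrete type from the transport layer."""
    def as_string(self):
        raise NotImplementedError

class IntegerDataItem(DataItem):
    def __init__(self, value):
        self.value = int(value)
    def as_string(self):
        return str(self.value)

class StringDataItem(DataItem):
    def __init__(self, value):
        self.value = str(value)
    def as_string(self):
        return self.value

# The messaging layer only ever sees DataItem:
payload = [IntegerDataItem(42), StringDataItem("hello")]
print([item.as_string() for item in payload])  # ['42', 'hello']
```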

Working on two of the core parts (messaging).

On the other hand, maybe our classes should be typed, and type checked.

a list of data: d = (1 2 3 4 5 6 7)
a function: f(x) = (+ 1)
f(d) = (2 3 4 5 6 7 8)
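
The same transformation, for comparison, in Python:

```python
d = [1, 2, 3, 4, 5, 6, 7]
f = lambda x: x + 1          # the (+ 1) function above
print(list(map(f, d)))       # [2, 3, 4, 5, 6, 7, 8]
```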

Removal Of Control in object oriented systems

Removal Of Control is a mechanism that will improve the usability, reliability and readability of code. I use it commonly; it is similar to IoC in what it achieves but very different in approach. The defining difference for me with ROC is:

Removal Of Control allows objects to be less tangled.

Introduction to ROC

Years ago I became frustrated with the amount of wasted effort when connecting code together. Sure, if you're an electrician you've got to put those cables in, but I don't want to waste my time manually connecting everything together - and yet code needs to communicate.

This isn't new; the idea / inspiration came from working with Event Driven systems, in my case GUIs and others. Most GUI event handling is appallingly badly designed and fragmented beyond belief because of the inherent complexity, and this is something that we must seek to avoid.

ROC is a design based on a loosely coupled event driven methodology.

To give an example: I'm processing orders in the backend and a possibly recoverable error occurs. The question is how to handle this. It's Windows, so I do the following in the backend. I know it's a bit nasty, but I can't do it with a return value or an exception, and besides it'll not happen often, so this is the easiest way.

class OrderProcess
    int  AddTransaction()
        if (IDTRYAGAIN == MessageBox(NULL,
                (LPCWSTR)L"Transaction aborted\nDo you want to try again?",
                (LPCWSTR)L"Account Details",
                MB_CANCELTRYCONTINUE | MB_ICONWARNING))

My order processing can now continue and retry the database operation. Trouble is that I've just linked my backend to the GUI directly. This is a dependency that I think everyone will agree needs to be removed.

So version 2 of the above is to provide an object to my database class to handle these errors. This is much better, as I can inject it in during creation and do the following, which is better isolated; at least the direct dependency between the GUI and the backend has been removed, which is a good thing.

class OrderProcess
    ErrorHandler errorHandler;
    void SetErrorHandler(GUIClass externalHandler)
        this.errorHandler = externalHandler;

    int  AddTransaction()
        if (ErrorHandler.Retry ==
            this.errorHandler.NotifyFailure("Transaction aborted\nDo you want to try again?",
                                            "Account Details",
                                            ErrorHandler.Warning + ErrorHandler.Retry + ErrorHandler.Cancel))

This still doesn't sit right with me though; it's very rigid, and really the dependency has simply been moved - the injection has become another object's responsibility.

The solution to dependencies

The solution is much simpler, both in concept and implementation. The OrderProcessing code knows that a problem has occurred and that it needs to get some guidance from outside, so it asks for help. As it inherits from GlobalRecipient, the object has a Notify method available that will globally notify all recipients likewise inherited.

class OrderProcess : public GlobalRecipient
    int  AddTransaction()
        MessageNotificationCard card("Transaction aborted\nDo you want to try again?",
                                     "Account Details",
                                      MessageNotificationCard.Warning + MessageNotificationCard.Retry + MessageNotificationCard.Cancel);

        if (Recipient.Abort == Notify(card))

That's it - a few lines of code and the OrderProcessing module can do its job without any dependencies.

Obviously that's a ridiculously simple example and a lot of design complexity goes into the Notifications, but the principle remains the same for lots of things.
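
A minimal sketch of the mechanism in Python (class names follow the article; the registration and dispatch details are assumed):

```python
class GlobalRecipient:
    """Anything that inherits from this is registered with a global
    list of recipients and can both receive and send notifications."""
    _recipients = []

    def __init__(self):
        GlobalRecipient._recipients.append(self)

    def receive_notification(self, card):
        return None  # not handled by default

    def notify(self, card):
        # Broadcast; first non-None answer wins.
        for r in GlobalRecipient._recipients:
            result = r.receive_notification(card)
            if result is not None:
                return result
        return None

class MessageNotificationCard:
    RETRY, ABORT = "retry", "abort"
    def __init__(self, text, title):
        self.text, self.title = text, title

class OrderProcess(GlobalRecipient):
    def add_transaction(self):
        card = MessageNotificationCard(
            "Transaction aborted\nDo you want to try again?",
            "Account Details")
        # Ask for guidance - with no dependency on whoever answers.
        return self.notify(card)

class ConsoleUI(GlobalRecipient):
    def receive_notification(self, card):
        if isinstance(card, MessageNotificationCard):
            return MessageNotificationCard.RETRY  # a real UI would ask the user
        return None

ui = ConsoleUI()
print(OrderProcess().add_transaction())  # retry
```

Swap ConsoleUI for a dialog box, a web layer, or a test stub and OrderProcess never changes.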

Embedding objects within notifications

This is the core of the advancement of ROC - the ability to notify objects of requests and to embed another object that should help to service the request. Take the case of saving the open documents to disk. The objects that provide each individual view know what their data is. So when you hit the save button in the main window you don't want to be doing this:

void save(FileStream f)
    for (View v : views) v.save(f);

It's horrible because the main window has to maintain a list of all the views that have been created. Any view created outside of the main view will need to be managed by the view that created it. This can lead to a lot of manually maintained wiring to achieve what is fundamentally a simple request. The solution is to decouple it thus:

void save(FileStream f)
    Notify( SaveChangesNotificationCard(f) );

That's all there is to it. All objects that inherit from, or are registered with, the GlobalRecipient (manager) will receive the card, and those that have data to save will handle it in their receiveNotification method.
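
Sketched the same way in Python (names modelled on the article, details invented; a plain list stands in for the FileStream):

```python
class Recipient:
    registry = []
    def __init__(self):
        Recipient.registry.append(self)

class SaveChangesNotificationCard:
    def __init__(self, stream):
        self.stream = stream  # the embedded object that services the request

class View(Recipient):
    def __init__(self, data):
        super().__init__()
        self.data = data
    def receive_notification(self, card):
        if isinstance(card, SaveChangesNotificationCard):
            card.stream.append(self.data)  # each view saves its own data

def notify_all(card):
    for r in Recipient.registry:
        r.receive_notification(card)

View("chapter 1")
View("chapter 2")
out = []  # stands in for the FileStream
notify_all(SaveChangesNotificationCard(out))
print(out)  # ['chapter 1', 'chapter 2']
```

The sender never knows how many views exist, or who created them; the stream travels inside the card.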

Underlying Principles to ROC

The underlying principles can be summarised.


The biggest benefit is reusability. I know this is almost the holy grail of design, and so many things claim that this concept or idea or framework or programming language will achieve reusability.

Reusability claims are too often like washing powders that claim to wash whiter than white. They can't, because it's impossible.

Even so, and given that very little is ever truly reusable, ROC is a very large step on the way to achieving well structured, efficient code that is more likely to be reused.

Dependency Notification

Part of ROC is Dependency Notification. DN is an alternative to Dependency Injection. It needs more explanation, but it's basically intertwined with ROC and lets objects take notice of the elements that they need. The basic principle is that something important, such as the main entry point, will create something equally important, such as a database connection. All of the underlying objects need this connection to work, so the database connection is either a global, a class static, or provided via DI.

This whole approach still falls apart because there is too much wiring. So what we do is to package up a Notification that contains the database connection and send it off. Any object that needs to know about the database connection can then set itself up to use the connection.

The major benefit here is that this technique removes the need for any direct calls or links between objects. It means it's very easy to change a database connection, and only the objects that need to know about a database connection actually process the notification: one section of code to handle it, and the job is done. This isn't about abstraction and layers; the dependency being notified could be a DAO object, or a generic connection that hides an ORM beneath a nicely abstracted interface. None of this matters to the concept, and it will work with any of it.
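
A compact sketch (all names invented for illustration): the connection is broadcast once, and only interested objects pick it up:

```python
class DatabaseConnectionNotification:
    def __init__(self, connection):
        self.connection = connection

class ReportGenerator:
    def __init__(self):
        self.db = None
    def receive_notification(self, note):
        if isinstance(note, DatabaseConnectionNotification):
            self.db = note.connection  # configure ourselves - no wiring needed

class Logo:
    """Has no interest in the database; ignores the notification."""
    def receive_notification(self, note):
        pass

recipients = [ReportGenerator(), Logo()]

def notify_all(note):
    for r in recipients:
        r.receive_notification(note)

notify_all(DatabaseConnectionNotification("db://live"))
print(recipients[0].db)  # db://live
```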


ROC is simple to understand, gives real benefits, and works in pretty much any object oriented language that supports either interfaces or multiple inheritance.

Rewriting working code

I was recently asked to advise on converting a site that was written in ASP to PHP, Ruby on Rails, or ASP.NET. Sounded like a good job because the specification is the existing site and the only requirement is that the new system must run on a technology “that is supported” and has a future roadmap. So all I have to do is pick one, and get converting, writing lots of lovely code to replace the fully debugged, live, tested, operational site.

Anyone who hasn’t read Joel’s excellent (and largely spot on) “Things You Should Never Do, Part I” should read it.

To summarise:

If a site is working, supported (by a good developer), largely meeting client expectations then it needs not to be rewritten. It needs to be maintained and migrated within the technology stack that it is currently using.

Going back to the original question: I simply asked the client a version of the above questions. I feel that they had been badly advised, as it turns out they had been told that ASP was old and unsupported. ASP is old, and I never did like it much in the first place, but it does the job and, more importantly, Microsoft still support it. I’m a big fan of open source, and of things like Drupal and PHP (though not so much MySQL, as I like databases), but the simple fact is that Microsoft support is more consistent, and they have a good record of continuing support (service packs for XP, for example).

So overall my advice to the client was to evolve what they’ve got within ASP and transition to ASP.NET (MVC / Razor) as a planned process before 2018.

I was still asked to quote, so I worked on my very rough guide of 1 minute per line to convert. This equated to 445 hours. I argued that this is a good chunk of the cost of a migration within the existing framework.

At this point in my story the whole project somehow became very political. I’m not exactly sure of the details, but my usual project manager presented this to the board (small company) and got a very negative reaction. The PM was fairly convinced that the director championing the whole project had been badly advised by one of his associates/friends/advisors. In the end the PM managed to convince the board to keep their current system, using their current developer, and to plan to migrate by 2018 - simply on the basis that it would be a waste of investment to fix something that wasn’t broken, and that as a board guiding the company it is their responsibility to act in the company’s best interests. I suspect I don’t express that as well as the PM did.

Of course I didn’t get paid for any of this valuable advice, and maybe one day something will come of it; but I prefer to be honest and upfront and say when something doesn’t need doing.

Taking over someone else's code

There will always come a point in any software system where the original developers move on and someone else has to maintain what has been written. In an ideal world the system would be well documented, with pictures covering the high-level design; low-level documentation is a waste – we can all figure that out from the code.

Rarely is there any documentation, and even worse, the code is often appallingly badly written: naming conventions not followed, quick hacks left in.

Always aim to understand the minimum possible, and assume that the rest of the code is working as it should (even if it isn’t – it’s the mental process that is important).

Often I’ve had to take over someone else’s code – that’s not too bad when fitting into a team; taking over is fine with the support of the team to guide you in the early stages, discussing what should be done.

Try to understand the big stuff first – how each part of the code hangs together and what is being achieved at a high level, then work downwards.

Sooner or later a large system will become your responsibility, and it’s time to learn it. This process can take months – depending on the complexity, even a year. The goal, and the only way to approach it, is to be fully conversant by the end of the process, sometimes even understanding the way that the system is constructed better than the original designer.

Being a junior won’t protect you from this. I’m speaking from experience as my first project at my first job was to take over and maintain a system that was 6 inches high when printed and contained 200 modules.

Many systems are not designed – or at least they start off with a design, but by pressure, ignorance or ineptitude evolve, all too often, into a sprawling mass of code. We have to understand this, and in the first instance make sure that any new pieces are built sympathetically to the current system, mindful of where the code and system should end up.

After months of effort you can usually take an existing system and repair the design, the implementation and most of the code. Almost always you end up with something that is similar rather than different.

When working with production code, the first thing is to check that the current codebase builds and works – and to verify that it matches the production system.

All too often programmers fail to comprehend a given piece of code and re-write it, rather than taking the time to figure it out on the assumption that some amount of design went into it.

Rewriting existing code is dangerous, throwing away code without understanding it is also dangerous. Much better to take longer and understand what is going on, and then fix it.

After time has passed, and changes have been made, you will have adopted the orphan code – lovingly nursed parts of it back to life and genuinely had to remove the rotten parts. The most useful comments in any piece of code tend to be the revision history – that way, with a quick look, I get to know the original developers and how good they were. Sometimes just seeing who wrote a module is enough to convince you that it probably would never have worked – and should be re-written.

A large sheet of paper and a pack of 10cm^2 post-it notes always help me along. Each post-it is a component of the system, and I can move them to aid understanding.

The danger of using the words new, latest, old, updated and newer in any names

The danger of using words that relate to time and freshness in names, such as old, newer, latest and updated.

You can't do it. If you ever find yourself including the word new, old, updated, modified, etc. in a file name, stop immediately.

Use dates, revision levels, phase of the moon, etc.

To give a real world example of a set of filenames that I had to figure out on a live site:

"LatestProcedures updated in version).sql"

"LatestProcedures updated in modified old version).sql"

How are those two files different? After much time with windiff I came to the conclusion that the first one is the version after the one in the latest procedures, but before the updated base db version, and after the updated old pre-changes newer version....

Maybe someone should patent this as a new, foolproof method of version control; it's so much more explanatory than revision 10 being after revision 9 and before revision 11... At least this way we know what it really might possibly be.

Before you run off and delete your version control system: I found that the live site was contained within the following directory: "C:\DevSQPatchLBackup\"

All very scary – and it all indicates why version control is essential from day 1. Before you write any code or design, set up a new folder to hold it, and then check in at least daily.

The dangers of global variables revisited because of PHP.

In 1985 after about two years of full time programming I had an epiphany; realising that global variables are a very bad idea. Roughly 20% of the bugs that I kept finding were down to abuse or misuse of global variables, so I stopped using them, apart from the time when you really should use one (e.g. for a pointer to global shared memory that never changes).

So here I am over 20 years later, and I've just realised exactly the same thing, except this time it's in PHP not CORAL 66.

I wondered how this happened and it dawned on me that it's a result of the way that I originally learnt PHP - taking over someone else's code. They used globals, lots of examples use globals, and in a way it seems to be the way that things are done. I should have known better.

The problem is worse with PHP than before, as failing to declare a global variable using the global keyword just lets PHP invent a local one - and then the code fails, possibly silently.

Functions that provide access to the data are the solution that I adopted back in 1985, and it works just as well today. Hand in hand with this goes a slimmed-down config.php: currently my config.php contains only the db connection details; the rest of the site configuration comes from a table, usually with a default value and a specific value to allow test & live sites to be configured differently.

So, instead of having

class ShowUser
{
    function output()
    {
        global $config;
        if ($config['allow-view'] == 1)
            echo "Stuff";
    }
}

the code is shorter, possibly more legible - and easier to maintain, for all the reasons that it was when I first thought of it.

class ShowUser
{
    function output()
    {
        if (get_from_config('allow-view') == 1)
            echo "Stuff";
    }
}

Usually I don't need to have a global for the DB - PHP hides that from me, and as most of the time one only needs one database connection, there is no need to wrap this up.
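
The accessor-function pattern translates to any language. Here is a minimal sketch in Python (the table layout, the `default`/`live` site keys and the `get_from_config` name are invented for illustration) of the default-plus-specific config lookup described above:

```python
import sqlite3

# Illustrative schema: each setting has a "default" row and an optional
# site-specific override, so test and live sites can be configured differently.
# The connection is the one justified global: shared state that never changes.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE config (site TEXT, name TEXT, value TEXT)")
db.executemany("INSERT INTO config VALUES (?, ?, ?)", [
    ("default", "allow-view", "1"),
    ("default", "page-size",  "20"),
    ("live",    "page-size",  "50"),   # the live site overrides the default
])

def get_from_config(name, site="live"):
    """Return the site-specific value if present, otherwise the default."""
    for s in (site, "default"):
        row = db.execute("SELECT value FROM config WHERE site = ? AND name = ?",
                         (s, name)).fetchone()
        if row is not None:
            return row[0]
    return None
```

A misspelt setting name now fails visibly (the function returns `None`) instead of the language silently inventing a fresh local variable.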

So it's just the config and the session that need functions - a total of four functions and much nicer code. I'm happy again.

The ends squeezing the middle in object oriented design.

Often when designing anything it is all too easy to forget the simple fact that when you add 1 to a number you will eventually be doing ADDQ.L #1,D2.

I follow, but not rigidly, the following rules of thumb:

If you have a design with 2 classes you probably should be writing something procedural.

If you have a design with more than 30 classes it's probably time to revisit the design.

Now let me explain what I mean by the ends squeezing the middle. It's actually quite simple really: it's a design rule that I apply to keep the design small and compact. The weight of the design really should be at the extremities, i.e. the structure and the implementation. When I've got a myriad of objects I tend to think that I've missed the primary point and that there is a real abstraction missing.

The best example I can think of is the payments side of an e-commerce (or point of sale) application. We (the people) think of paying as being a distinctly different operation to that of buying, when in fact they are two sides of the same coin; simply treating a payment as an opposing-value purchase enables much reuse of code. Failing to realise this will leave the code messy (JT deserves credit for this flash of inspiration and insight).
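
To make the opposing-value idea concrete, here is a toy sketch in Python (not taken from any real system): a payment is recorded as a negative-amount line, so the same totalling code handles buying and paying:

```python
from dataclasses import dataclass

@dataclass
class Line:
    """One account entry; purchases carry positive amounts."""
    description: str
    amount: float

def payment(description, amount):
    # A payment is just an opposing-value purchase: negate the amount,
    # and every piece of code that understands purchases now handles it.
    return Line(description, -amount)

def balance(lines):
    # One totalling routine serves purchases, payments and refunds alike.
    return sum(line.amount for line in lines)

account = [Line("Widget", 30.0),
           Line("Gadget", 12.5),
           payment("Card payment", 40.0)]
```

The missing abstraction here is the signed line item; without it, purchases and payments each grow their own parallel code paths.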

The main point that I'm trying to make is to distill down to the absolute minimum the number of classes in any given system; remember this:

It is better to find the similarities than to highlight the differences

The junk-it and start again methodology

Less well known than many of the other methodologies in use junk-it and start again is one of the most powerful techniques available.

This only really works well at the beginning of a new project. Junk-it doesn’t work with existing tried and tested systems; applying it there is completely the wrong approach and will quite probably sink the company (as it did with Netscape).

I’ve seen, and worked on, systems that have grown out of the wrong technology, usually by productionising a prototype. This doesn’t often work out well as you tend to end up with a productionised prototype.

You will often gain more time by throwing away something than trying to make it work, especially at the start. We don’t want to end up with something like the following (a vastly simplified model of a genuinely bad system I had to work on); colours indicate separate layers in the original design, or at least they were supposed to be different layers.

Application of the wrong technology

When I was 16, coding in 6502 assembler on a BBC Model B, I built a word processor called WordStyle. It was neat, efficient and compact, and largely good enough for my purposes. However it lacked a spell checker. I only had 12k of space left (from 16k) and having discovered checksums in someone else’s code I figured that this would allow me to get a comprehensive dictionary into 10k bytes. So I built it. It worked, or appeared to. I then discovered that it didn’t really work, because certain garbage words would pass the check, and then realised that checksums were simply not good enough. Charged with enthusiasm and an unshakeable self-belief I continued and made a few refinements, choosing really to ignore the basic problem that it was simply a bad idea. It was only when a trusted friend pointed out that a spell checker that couldn’t reliably check spelling was in effect worse than useless that I had to admit defeat.

Use version control.

Any version control, except possibly SourceSafe, is your friend when making any large changes. I don’t mean branching specifically; I mean the ability to start from a checked-in version, make masses of changes, get it horribly wrong and then dump the changes as though they were contagious – which quite often bad code can be.

You’ve got to really junk-it.

If it needs junking then do it properly: use Delete Permanently (or rm -rf). If you’re not quite that brave then put it on a USB stick first, but it has to go completely, to help you resist the temptation to refer back to it.

Then redo it from scratch and memory. You will feel, or at least I do, that you are simply doing the same thing again; but when you’ve finished the second version will usually be much better than the first and you’ll thank yourself for it.


Don’t blame me if it all goes horribly wrong. Do take a backup. Do keep it safe. Do not expect power to remain on constantly and instead save changes regularly to a persistent medium. Note that printing does not count as a persistent medium in this case, and unnecessary printing may destroy rain forests, cause ozone poisoning and occasionally cause the printer to catch fire. Do not work in badly ventilated areas. Do not attempt to balance wheely chairs on their back two casters; it will end in tears. Do not attempt to store DAT tapes above where full coffee mugs may be present; DAT tapes fit nicely into mugs. Do not insert damp or wet DAT tapes into a drive as it may require replacement of the tape and the drive.

Using VSPAero to generate an aerodynamic model for JSBSim in FlightGear

Research into alternative methods of building aerodynamic models.

One of the challenges, probably the biggest single challenge, presented to an aircraft modeler is to make a representative FDM from any data that can be collected from publicly available sources. When there isn’t much data it has been a point of much discussion whether to use YASim or to use a JSBSim model created with Aeromatic, Aeromatic++, DATCOM, or DATCOM+. If you can find wind tunnel data then always start with that; it’s going to give you a much better model than any of the computational prediction methods (including CFD that takes CPU years to complete).

So the challenge is to find a new method to generate a complete aerodynamic model. I started looking at this back in mid 2015 really as a way to create a BAe Hawk aero model.

There are a number of programs that I considered to do this;

  1. VSAERO – expensive
  2. APAME – I didn’t manage to get any results out due to program instability
  3. AVL – found it hard to figure out what to do
  4. PANAIR – Boeing code, reported as being excellent, supporting subsonic/supersonic, but hard to get the right input files.
  5. PANUKL – no source code so not on my list
  6. TORNADO – seems good but requires MATLAB
  7. OpenVSP

Notice that my list stops at OpenVSP, as the more I used it the more obvious it became that it was the best of the above list (for me). So I decided to go with OpenVSP for the next part of the process: to model an F-15 and compare the results directly against my existing aerodynamic model, which came from wind tunnel data, thereby producing a set of results that can be compared directly on the plots by overlaying one set atop the other.

After about 6 months of work I managed to produce an F-15 model that flies in what I’d consider to be a close enough manner to the wind tunnel model.

Introduction to VSPAERO

VSPAERO is part of the OpenVSP vehicle modelling suite. It is based on linear potential flow theory and can use either the vortex lattice method or the panel method. Other applications using the vortex lattice method are Vorlax, AVL, Tornado, HASC, etc. VSPAero does not represent thickness via panels on the surface; rather it represents the mean camber surface.

The Beagle Pup experiment

For a good few years I’ve been following the work of Simon “bomber” Morley with interest. He has been building complex models where no data is available by separating the airframe into parts and modelling each component part individually, using XFoil (or other programs) to get the lift/drag curves and computing the rest. This approach requires a lot of skilled work due to the inherent complexity, but should produce representative results. The one area where the amount of manual work becomes prohibitively complex is the interaction of the flow with the surfaces and the interaction of the wakes – I believe that this is probably too complex to model manually due to the amount of calculations that are required.

So with this in mind I decided that it was time to build a model using VSPAero to compare Simon’s work against and decide on the advantages and disadvantages of each approach.

Simon kindly provided all of his data as a starting point, and I’ve managed to find other sources to supplement this data.

Beagle Pup Geometry

The first step of the process is to build the geometry using OpenVSP that will be used by VSPAero. OpenVSP takes a geometric modelling approach which is much better suited to computational aerodynamics than a polygon or mesh based model. Although the 3d models that are built in a 3d modelling package such as Blender can look exactly like an aircraft, most of these models do not have the resolution needed for any sort of computational aerodynamics.

So we’re modelling with numbers rather than drawings. The following are the basic parameters for the Beagle Pup that I am using. All dimensions are



Horizontal Stabiliser :

Vertical tail:

My model follows Rob McDonald’s advice of “(for VSP) with geometry less is more”; so the models are simple, using ellipses for the fuselage, and a low tessellation value. This is based on experimentation with low, medium and high complexity models. Something that looks right doesn’t necessarily generate more realistic data from VSPAero; and it is the geometry that is important rather than lining up with photographs. I’ve used a 3-view photograph to estimate basic positions where measurements aren’t available, and then had to tune the exact positions based on the resulting coefficients.

With beta 5 I had to resort to some photogrammetry – simply because of a lack of decent data.

This is what I came up with – and I used this to tweak and align the geometry of the beta 5 version of the model.

Building the aerodynamic data using VSPAero

VSPAero 3.9.1 does have the ability to do “sweep” runs, which is useful for inspecting a range of values; however, building a full set of aerodynamic data requires around 5,300 invocations to collect all of the required aerohex (CL, CD, CY, Cl, Cm, Cn) datapoints and aero derivatives.
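
To see why the invocation count climbs into the thousands, here is a hypothetical sweep driver in Python; the parameter ranges below are invented for illustration (they are not the actual Beagle Pup ranges), but the combinatorial product is what drives the total:

```python
from itertools import product

# Hypothetical sweep ranges, invented for illustration -- the real run
# matrix for the Beagle Pup will differ.
alphas   = range(-8, 21, 2)               # angle of attack, degrees
betas    = (-10, -5, 0, 5, 10)            # sideslip, degrees
machs    = (0.05, 0.10, 0.15, 0.20)
controls = ("none", "aileron", "elevator", "rudder", "flap")

# Each combination would become one VSPAero invocation contributing one
# set of coefficient datapoints.
runs = list(product(alphas, betas, machs, controls))
```

Even these modest illustrative ranges multiply out to 15 × 5 × 4 × 5 = 1,500 runs; add more deflection angles and derivative cases and a few thousand invocations is quickly reached.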

From the basic model we build a second set of degenerate geometry for processing by the aero processing tool; the input files are

VSPAero 3.9.1 introduced movable control surfaces, for the Beagle Pup these are used for the Ailerons, Elevators, Rudder and Flaps.

Example result

Each VSPAero run generates an .ADB file which can be viewed with VSPViewer

Modelling damage.

One of the benefits of Simon’s multi-component approach is that it is possible to model the aerodynamics of damage (by removing components or adjusting individual parameters that relate to the components). Using VSPAero we can reasonably easily generate a set of linear damage coefficients; the model includes:

Propeller effects

VSPAero includes rotor simulation, and this is used to generate a complete set of coefficient modifiers based on the propeller induced velocity.

Results and tuning.

I started to build the geometry on the 8th October, and the first flyable model took 4 days (mostly computational time), as there is geometric tuning that must be performed to balance the model.

A further week was required to tune the model, add the propeller and damage effects and to perform a few complete runs.

Sample plots

The following are compared against the F-15, and whilst the two are very different the F-15 still provides a sound basis for a sanity check of the parameters.

A full set of plots is available in my document Beagle Pup Aerodynamic Model

VSPAero parameters

VSPAero has the ability to control the number of wake iterations performed; for normal testing this is set to 1, however this doesn’t generate much in the way of freestream interactions, so a value of 3 or 4 is used for a full run to get more realistic results from iterating the wakes.

Using VSPAERO mass properties to calculate the moments of inertia

Without wanting to get into a detailed description of what these are: they basically affect the ease with which the aircraft starts to rotate around an axis, and the difficulty it has in stopping rotating. Smaller values make it easier, larger values harder. Big aircraft have very large numbers.

Calculating moments of inertia (Ixx, Iyy, Izz, Ixz) is quite complex, requiring maths and lots of data (e.g. a triple integral), and this is where the Mass Properties part of VSPAERO comes in very handy to help us with Ixx, Iyy, Izz and Izx (the other cross products can be ignored as they will always be very small values for a rigid body).

OpenVSP is basically dimensionless: you mentally have to add “feet” or “metres” when looking at the numbers, and as long as you’re consistent with the usage all is fine. When generating the mass properties VSPAERO uses the density that you set. Initially I had a hard time understanding how to figure out the density and came up with some very wrong data, until I realised that I was stupidly thinking in lbs instead of slugs. So a quick conversion of the empty weight of the aircraft into slugs, adjustment of the densities to come up with a nearly correct mass (where known) for each part (i.e. engine, and the rest), and I ended up with the calculated mass being the same as the actual mass. This then means that the inertia properties will also be right, as these are expressed in units of mass times length squared (slug·ft² here).
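
The conversion itself is one line. A quick check in Python (the 1,600 lb empty weight below is an invented round figure, not the Beagle Pup’s actual weight):

```python
G = 32.174  # standard gravity in ft/s^2

def lb_to_slugs(weight_lb):
    # Weight in lbf divided by g gives mass in slugs. Skipping this step
    # (feeding pounds in as mass) inflates every inertia value by ~32x --
    # exactly the kind of very wrong data described above.
    return weight_lb / G

empty_weight_lb = 1600.0          # hypothetical example value
mass_slugs = lb_to_slugs(empty_weight_lb)
```

With the mass right, densities can then be scaled per part until the calculated total matches the known aircraft mass.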

VSPAERO Mass properties, component results

Name Mass cgX cgY cgZ Ixx Iyy Izz Ixy Ixz Iyz Volume
GearMain 0.07 8.33 4.05 -3.87 0.01 0.01 0.01 0.00 0.00 0.00 0.36
GearMain 0.07 8.33 -4.05 -3.87 0.01 0.01 0.01 0.00 0.00 0.00 0.36
GearMainStrut 0.00 8.32 2.56 -2.83 0.00 0.00 0.00 0.00 0.00 0.00 0.00
GearMainStrut 0.00 8.32 -2.56 -2.83 0.00 0.00 0.00 0.00 0.00 0.00 0.00
VTail 1.37 19.93 0.00 0.33 1.90 4.59 2.74 0.00 1.11 0.00 7.13
GearNose 0.07 3.63 0.00 -3.83 0.01 0.01 0.01 0.00 0.00 0.00 0.35
GearNoseStrut 0.00 3.89 0.00 -2.66 0.00 0.00 0.00 0.00 0.00 0.00 0.00
FuselageGeom 16.54 9.31 0.00 -0.14 26.18 243.71 248.00 0.00 2.87 0.04 119.02
Wings 3.76 7.72 7.30 -0.79 55.17 3.62 57.43 -0.57 -0.10 5.60 19.60
Wings 3.76 7.72 -7.30 -0.79 55.10 3.61 57.37 0.57 -0.10 -5.56 19.60
Engine 5.25 2.68 0.00 -0.22 2.48 3.50 4.40 0.00 0.00 0.00 6.73

VSPAERO Mass properties, results

Name Mass cgX cgY cgZ Ixx Iyy Izz Ixy Ixz Iyz Volume
Totals 30.90 8.25 0.00 -0.31 549.91 636.08 1145.26 0.00 17.29 0.08 173.16

Center of Gravity

8.25 0.00 -0.31

Moments of inertia

Ixx Iyy Izz Ixy Ixz Iyz
549.91 636.08 1145.26 0.00 17.29 0.08

Deciding on the number of slices to use for Mass Properties in OpenVSP

Refer to the results on the following illustrations (rather than looking at the picture).

The default value is 21, which is reasonable.

65 Slices is better.

200 Slices is more accurate; but in the end 65 is probably close enough.

Halloween update.

Tuning of pitch handling;

This provides a much more stable pitch response.

Aero for prop effects is now by default turned off but can be added in from the preferences dialog. Aero for gear removed as it is causing instability.

Remembrance day update.

Fixed (or improved) flight handling.

To take off – throttle up whilst controlling the gyroscopic prop effects with the rudder. With no flaps smoothly rotate at 65kts keeping the yaw and roll under control. If you lose control of the yaw then you’ll get excessive sideslip which in turn destroys the lift, generates roll and results in an undesired ground interaction. With 38% (one notch) of flaps you should be able to rotate slightly earlier, but the climb out will be slower and you’ll need to be careful with the power and not to stall.

Landing I tend to come in at 50-60 kts with one notch of flaps, possibly two notches. Depending on weight you’ll stall at around 45 kts.

With this model you do need to keep a watch on the yaw. Rudder will be required to control this. I think this is like the aircraft based on comments that I’ve read – and this is coming out of the aerodynamics – it’s not something that I’ve added.

At this stage let’s keep the testing to take off and circuits – to provide comparative results use Fair weather.

St Catherine’s update

It appears that most of the problems with the twitchy response were caused by a combination of factors, but primarily that the stability derivatives Side force due to yaw (CFYR) , Yaw moment due to roll (CMNP), and pitch moment due to pitch (CMMQ) had the wrong sign along with Yaw due to Aileron (CMNDAD).

The moments of inertia were way too large, but after understanding how OpenVSP does this the new set is much better.

The flaps may still be quite wrong; more investigation is required here.

Beagle Pup with VSPAero Beta 5

St Andrew’s day update

This is beta 5 – the pitch moment has again been reworked based on attaining consistent pitching moment across a range of alpha values for the wing. This is 7.257 ft.

The moments of inertia have been tuned to be twice the value that OpenVSP mass properties calculated. I suspect these figures are possibly correct and that given the right mass distribution OpenVSP would certainly be able to calculate the right values, but I don’t currently have this sort of data so for this beta I’m going with a set of tuned values.


Detailed Beagle Pup aerodynamic data plots

Beagle Pup model – drop in additions “FGAddon Beagle Pup” to add this FDM:

Simon Morley’s Engines – extract these into the FGAddon Beagle Pup. These are required.

Licencing information

beagle-pup-splash-small.jpg (104.41 KB)
Beagle-Pup-Bomber-Engine_Propeller.7z (3.47 KB)
Beagle-Pup-FDM-RJH-2016-11-13.7z (394.98 KB)
Beagle-Pup-FDM-RJH-2016-11-15.7z (260.49 KB)
PitchTest(2) - Beta4 FDM XML - replace this for 1/3 pitch authority from elevator (110.04 KB)
Beagle-Pup-Simon.zip (43.46 KB)
PitchTest(3) - Beta4 FDM XML - new MOI and elevator / rudder control surface moments (18.36 KB)
Beagle-Pup-FDM-RJH-2016-11-25.7z (347.67 KB)
Beagle-Pup-FDM-RJH-2016-11-30.7z (267.91 KB)
beagle-pup-rjh-geometry-openvsp.jpg (222.08 KB)

WPF Entity Framework Listbox / Datagrid filtering using CollectionViewSource / CollectionView

Filtering a listbox, datagrid, or any list control within WPF for a given entity from an entity framework collection should be easy; and it is once you've figured out how it should work.

The great thing about WPF and bindings and the Entity Framework is that it actually does so very much for you. So much that when something isn't just there it seems odd, which leads to a new rule of thumb that, after 6 months or so of working with WPF, I've finally realised:

If you're trying to do something with WPF that seems hard or requires lots of code then almost certainly you're doing it the wrong way.

So how do we filter? For some collections there is a filter method that's nice and accessible, but it isn't immediately available here. So the steps are:

1. Create a new CollectionViewSource that links your entity framework entity to the target control. In the Resources section of your xaml:

    <CollectionViewSource
        Source="{Binding ElementName=supplierList, Path=SelectedValue.Products}"
        x:Key="cvs" Filter="Product_Filter"
        CollectionViewType="{x:Type dat:ListCollectionView}" />

The important part of the above is the CollectionViewType - without this it won't work. As this refers to dat:, the namespace needs to be defined at the top:

    xmlns:dat="clr-namespace:System.Windows.Data;assembly=PresentationFramework"


2. Bind your chosen control to the new CollectionViewSource

<w8:DataGrid ItemsSource="{Binding Source={StaticResource cvs}}">

3. Create a textbox to use as the filter text - with a TextChanged event to apply the filtering as each character is typed.

    <TextBox Name="productFilter" TextChanged="productFilter_TextChanged" />

4. In your code module you need to provide two things, firstly the filtering for the CollectionViewSource

void Product_Filter(object sender, FilterEventArgs e)
{
    if (e.Item is Product)
        e.Accepted = (e.Item as Product).Name.ToUpper().Contains(productFilter.Text.ToUpper());
    else
        e.Accepted = true;
}

5. The textbox OnChanged needs to ask the CollectionViewSource to refresh itself;

private void RefreshList()
{
    if (ProductList.Items is CollectionView)
    {
        CollectionViewSource csv = (CollectionViewSource)FindResource("cvs");
        if (csv != null)
            csv.View.Refresh();
    }
}

private void productFilter_TextChanged(object sender, TextChangedEventArgs e)
{
    RefreshList();
}


That's all of the complexity there is - although this took me nearly a day to figure out, and many wrong tracks. The beauty of WPF and the Entity Framework is that there is so much power and flexibility that it is possible to do something many ways. You know you've got the right way when it's about 10 lines of code.

Web Application Architecture

Frequently I see the same question in many forms; “How do I design a web application architecture”. The simple answer is that all application architectures are different, depending on the actual needs. Instinctively most programmers know what’s required, or at least we think that we do.

The true skill is to realise that we are often wrong and that any architecture design needs to be validated.

Pragmatic approach to application architecture

Pragmatism is one of those often misapplied terms, used interchangeably with “in my experience”. If you trace pragmatism back to its philosophical base in the wisdom of William James, he defined it as “the truth of an idea needing to be tested to prove its validity”. This exactly equates to the way that we should build anything complex: by using the practical over the theoretical.

There are all too often basic premises upon which systems are designed that are simply wrong and all too often these premises can overlap with a previous approach or even fall foul of the dreaded second system effect.

Theoretical approaches can work well. New theories need to be tested and it is by doing so that we can translate theory into practice.

Web Application Architecture Diagram

The following is the result of a design that took around 6 months of testing and tweaking. It is a second generation partial redesign of a real live web application architecture that is in production, or was last time I checked.

Web Application Architecture requirements analysis.

Step 1 of any design process is to analyse the requirements and produce a set of goals that the architecture must fulfil. Sometimes, and unfortunately, technology is mandated by business or operational requirements, so this has to be included. Rarely is it possible, by proving that a technology is simply unsuitable, to replace it with something better. Avoid the trap of thinking something is better simply because it is familiar. When working in a team try to avoid comfort zones. I once worked with someone who was an absolute wizard with Finite State Engines and could produce miraculous results. The only snag was that the problem was often moulded to the solution, when in fact something other than an FSE would be a better (and more maintainable) solution.

Analyse and write down your requirements. If reliability isn’t the primary goal then rewrite your requirements until it is. If maintenance and extensibility aren’t in the top five then review again.

Why we should not use enums in databases.

The problems with enums in databases are manifold. They are tempting because it seems like they make your database more readable; however, I avoid using them, and as of today I’ve never found a good use for them which doesn’t fall foul of one of the many problems that designing a database containing enums leaves you open to.

Enums fall into that area of programming that seems to make things easier, but comes with a whole load of problems that will hit you later.

Making the database more readable is a non sequitur: databases do not need to be readable, but data models need to be consistent and well ordered.

Why I don’t use ENUMs: first, they’re not standard SQL (at least to my knowledge), and historically haven’t been consistent between database products. More importantly, as data they’re terrible to maintain easily, as they live within the table structure.

The problems with enums in databases

Firstly they look like text, which is a bad thing: we understand text and databases don’t really, so we can read them (providing that they are in our native language) and easily see what they mean. However this is only important during development – and besides, you get to know what the value 212 means after a while.

The big problems with enums explained.

The big problems with enums in my list above are

Taking each one of the above, firstly let’s look at the way that an enum locks the database into a version of the world as it was when the database was designed.

I know it seems that there are a lot of things, such as gender, that will never change, but they could, and that is why we need a database model that can handle anything.

For example I’ve seen gender defined as combinations of the following: Boy, Girl, Male, Female, Born Female, Born Male, Not Specified.

A big problem is this: if you misspell a constant or variable you will find out really soon, because the code will not compile. If you misspell an enum, or if the enum text is changed, then the code needs to be changed. This is because we are mixing up the identity and the description. Two distinct values within the database may have a different identity but for display purposes they may be the same.

Decoding and display of data is something that needs to be performed in the view. The model should only model the data. Enums break this.

Changing sort order with an enum field isn’t possible; you get the order in which they were created, so you have to enter them in the order you want them displayed.

You can’t have enums sorted in different ways by the database – you’ll have to do this in the view. Aren’t enums supposed to make things easier?

My solution to avoid lots of tiny tables to store enums

My solution, Parameter Enumeration Tables (PET), adds a little complexity, but the compromise is worth it.

With a PET you get the ability to store and maintain enumerations, and to have referential integrity, without having lots of tables.
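
To illustrate the idea (the table and column names below are my guess at a minimal PET, not a schema from this article), here is a sketch using Python’s sqlite3:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    -- One shared table holds every enumeration, keyed by domain + id,
    -- keeping identity (id) separate from description (label).
    CREATE TABLE pet (
        domain     TEXT    NOT NULL,
        id         INTEGER NOT NULL,
        label      TEXT    NOT NULL,
        sort_order INTEGER NOT NULL,
        PRIMARY KEY (domain, id)
    );
    CREATE TABLE person (
        name   TEXT,
        gender INTEGER    -- refers to pet rows where domain = 'gender'
    );
""")
db.executemany("INSERT INTO pet VALUES (?, ?, ?, ?)", [
    ("gender", 1, "Female",        1),
    ("gender", 2, "Male",          2),
    ("gender", 3, "Not Specified", 3),
])
db.execute("INSERT INTO person VALUES ('Sam', 3)")

# Decoding happens in the view via a join; relabelling or re-sorting is
# an UPDATE on pet rows, never a schema change.
row = db.execute("""
    SELECT p.name, e.label
    FROM person p
    JOIN pet e ON e.domain = 'gender' AND e.id = p.gender
""").fetchone()
```

On engines with composite foreign keys, a constant domain column on the referring table lets the database enforce the reference too.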


ZXAF Development notes

ZXAF is really the culmination of a lot of individual techniques that I've used in many different developments.

So, I decided to pull all of these together into a project, release it as open source, and continue developing it with a view to doing everything the right way. It is a development without a timescale or a budget, and it's nice to be able to spend time on pure development.

Basic Class Structure

ZXAF is built from a few key components (classes/objects) and largely follows convention over configuration and DRY.

The system core classes all reside in the exec directory. The modules are as follows:

auditlog.php – provides audit log messages together with a recipient that will log all of these to the database
config.php – responsible for loading the correct site-based config
dbentity.php – core of the database access; provides DbEntity and DbIterator
emesary.php – the inter-object messaging classes; understanding this fully is crucial to working with ZXAF
refmon.php – security and access control following the
sysident.php – system identifications for use by the messaging system
system.php – system glue classes; provides most of the basic objects and functions, connects to the database, provides session management etc.
user.php – user entity; provides a layer between the database table and the higher-level CBF user object.

In terms of the classes, the first to understand are those related to the database, namely DbEntity and DbIterator.


The DbEntity usually maps an object to a database table, providing INSERT, UPDATE, DELETE and SELECT functions. Within the constructor the keyfield must be defined, after which point the load(id) method can be used to get a record from the table. If a table has multiple keys then the method of loading is subtly different, in that you must set the fields using the set methods and then call load_from_fields(), at which point, if successful, true will be returned and the entity will contain the contents of the first record that matches the fields specified. NOTE: this really works best when the fields are part of a unique key.
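
The load()/load_from_fields() behaviour described above can be mimicked in a short Python sketch; the class below is an illustration of the pattern, not ZXAF's actual implementation:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE transaction_tbl (id INTEGER PRIMARY KEY, user TEXT, item TEXT)")
db.execute("INSERT INTO transaction_tbl VALUES (1, 'alice', 'widget')")

class Entity:
    """Toy DbEntity: maps an object onto one table via a keyfield."""
    def __init__(self, table, keyfield):
        self.table, self.keyfield, self.fields = table, keyfield, {}

    def load(self, key):
        # Single-key load: delegate to the multi-field variant.
        return self.load_from_fields(**{self.keyfield: key})

    def load_from_fields(self, **fields):
        # Load the first record matching all supplied fields; True on success.
        # As noted above, this works best when the fields form a unique key.
        where = " AND ".join(f"{name} = ?" for name in fields)
        cur = db.execute(f"SELECT * FROM {self.table} WHERE {where}",
                         tuple(fields.values()))
        row = cur.fetchone()
        if row is None:
            return False
        self.fields = dict(zip([c[0] for c in cur.description], row))
        return True

    def get_field(self, name):
        return self.fields.get(name)

t = Entity("transaction_tbl", "id")
```

The single-key load is just the degenerate case of the multi-field load, which keeps the two code paths from diverging.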

Example DbEntity

The following is a complete implementation of a database table.

In the constructor, it is necessary to call the parent constructor to define
the table name and the keyfield. If the table_autoid is set
then the keyfield will be assumed to be an auto_increment.

If required, set_auto_date_fields can be used to make the two defined fields automatically contain the creation and last-update date/time for each record.

In the sample below the constructor takes an optional key parameter, which if present will request that the record is loaded. If this fails then an error will be raised.

class DbTransaction extends DbEntity
    function __construct($key=null)
        $this->table_autoid = true;


        if (!$this->init_done)
        if ($key !== null)
                cbf_error("Failed to load Transaction id: $key ");

    function create()
            return false;

        return true;

    function get_id()             { return $this->get_field('id'); } 
    function set_id($v)           { return $this->set_field('id',$v); }
    function get_created()        { return $this->get_field('created'); } 
    function set_created($v)      { return $this->set_field('created',$v); }
    function get_modified()       { return $this->get_field('modified'); } 
    function set_modified($v)     { return $this->set_field('modified',$v); }
    function get_user()           { return $this->get_field('user'); } 
    function set_user($v)         { return $this->set_field('user',$v); }
    function get_item()           { return $this->get_field('item'); } 
    function set_item($v)         { return $this->set_field('item',$v); }
    function get_quantity()       { return $this->get_field('quantity'); } 
    function set_quantity($v)     { return $this->set_field('quantity',$v); }
    function get_amount()         { return $this->get_field('amount'); } 
    function set_amount($v)       { return $this->set_field('amount',$v); }
    function get_type()           { return $this->get_field('type'); } 
    function set_type($v)         { return $this->set_field('type',$v); }
    function get_paid_by()        { return $this->get_field('paid_by'); } 
    function set_paid_by($v)      { return $this->set_field('paid_by',$v); }
    function get_reference()      { return $this->get_field('reference'); } 
    function set_reference($v)    { return $this->set_field('reference',$v); }
    function get_reference_email() { return $this->get_field('reference_email'); } 
    function set_reference_email($v) { return $this->set_field('reference_email',$v); }

    static function begin($where = "")
    {
        return new DbTransactionIterator("SELECT * from `transaction` $where");
    }
}; // class DbTransaction

CBF modules

Typically the DB modules shown above are
autogenerated by a tool; by convention, customisations to the basic database
entities that provide the model are made in the cbf/
directory.


class CbfTransaction extends DbTransaction
{
    function write()
    {
        return parent::write();
    }

    function initialise_validation()
    {
        $this->set_validate_field('amount', new ValidateEntityPositiveInteger());
    }

    function make_load_query($record_id)
    {
        $q = "select *,transaction.quantity*transaction.amount as total from ".$this->table." where ".$this->id_field."='$record_id'";
        $q .= $this->extra_sql_load;
        return $q;
    }
}

The DbIterator is used to locate one or more matching
fields. Generally it is constructed with a SELECT statement.

An example of usage is below. The $dbi variable is the
iterator, which is a container that allows moving forwards through a result set
(any other movement is not supported). If the begin method
returns true the container is valid and contains elements.

To access the DbEntity that is contained within the iterator,
use the current() method, which will return
an object of the expected type (as defined in the constructor of the iterator).

$dbi = new DbTransactionIterator("SELECT * FROM transaction");
if ($dbi->begin()) {
    do {
        $re = $dbi->current();
        $output .= $re->get_item() . " - " . $re->get_amount();
    } while ($dbi->next());
}
Summary so far...

With the two classes DbEntity and DbIterator we have the core of the database layer: an entity maps a single record to an object, and an iterator steps through a set of matching records.

Breaking rule of thumb no.4

I've just realised that effectively I'm breaking my own rule of thumb No.4 "You don't need to write an application framework", and here I am writing one.

So I need to clarify this, partly for my own sanity, and investigate why I'm doing this. To start with the rule of thumb No. 4, looking back what I meant was simply "when faced with building something that is new and confusing, it is a big mistake to start off by writing a framework because that's the only way to begin". In this sense, and fortunately, ZXAF falls outside of the rule because I'm pulling together various bits of code that have been around, in production use, for many years.

What I'm doing with ZXAF is taking all the various bits of good code and putting them together in a coherent manner to get rid of what I call the middle-aged code spread. This is something that I've observed happening often. You start off with a nice tight set of classes and a good design, and as real world pressures (i.e. delivery) creep in there simply isn't time to rework the design to make it able to handle a new requirement, so it's more expedient to add something that is similar but different to something that was already there.

This is exactly how I ended up with my ZXAF Views and Forms problem whereby the view and the form should have been one thing but actually the Form came first and then the View came along. Neither was right, the Form was good at presenting a Form (i.e. two column form with prompt and value of a single entity), and the View was excellent at displaying a row per entity based on an iterator. To further confuse things the View had started out life as a Form and then been heavily modified to fit a specific project requirement. The View introduced the concept of Items and Transformations, but didn't really support editing.

The net result was that both items had to be reworked because neither of the two classes individually were good enough. The end result is the View class that is now present in ZXAF.

The resulting design (the View) is the result of a code evolution of stable code being reworked (or refactored, for the buzzword conscious).

This single case really illustrates what ZXAF is all about; reducing and producing something that is more consistent and coherent. There are still areas of general ickiness within ZXAF as it is an evolving entity rather than being a rush to publish.

In conclusion I don't consider that I've broken my own rule, (MRDA applies).

Building a page with ZXAF.

This is a mini tutorial of how to build a page, consisting of the required entities and views to produce a fully functional page that shows the built in dynamic updating and demonstrates the ease of presenting different views.

Anything within ZXAF is built from the following core components.

The other thing that ZXAF guides you to is separation of the layers of the logic.

  • Database
  • Business Logic (CBF)
  • UI

Step 1 - creating the entity

Most things that we create need some sort of model or storage behind them. This is where we use the ZXAF DBEntity which provides both model and storage. Usually the DB entity will be extended to provide a CBF version of the entity with extra features, although this isn't required and DBEntities work just fine for most things. The reason that I usually extend the DB entity to provide a CBF version is the knowledge that probably, at some point in the future, the DBEntity will need to have extra facilities. The class division convention within ZXAF dictates that the DB Entity must be a simple representation of a DB record, mainly because DB Entities are automatically created and may be replaced at any time.

The following is the SQL that will create the table in the database. This is stored in the file db/db-create-user.sql which is where the user database schema should be stored.

CREATE TABLE `help` (
    `id` int(11)  NOT NULL auto_increment,
    `category` varchar(100) default '' ,
    `section`  varchar(100) default '' ,
    `topic` varchar(100) default '' ,
    `help` text  NOT NULL,
    PRIMARY KEY  (`id`)
);

Taking this SQL file and running it through the generator tool via awk -f exec/mkdbe.awk db/db-create-user.sql will produce db/help.php. This file provides the database entity, together with the iterators that will be used to access it.

The DB entity

The following will be created by the above script; as you can see it provides an Iterator and a DbEntity that are used to access the DB.

class DbHelpIterator extends DbIterator
{
    function __construct($_query)
    {
        parent::__construct($_query, new DbHelp());
    }
}

class CbfHelpIterator extends DbHelpIterator
{
    function __construct($_query)
    {
        DbIterator::__construct($_query, new CbfHelp());
    }
}

class DbHelp extends DbEntity
{
    function __construct($key = null)
    {
        // the parent constructor defines the table name and keyfield
        parent::__construct('help', 'id');
        $this->table_autoid = true;

        if (!$this->init_done)
            $this->create();            // body of this guard was elided; create() assumed
        if ($key !== null)
            if (!$this->load($key))     // load() as documented above
                cbf_error("Failed to load Help id: $key");
    }

    function create()
    {
        // field definitions elided from the original listing;
        // returns false on failure
        return true;
    }

    function get_id()             { return $this->get_field('id'); }
    function set_id($v)           { return $this->set_field('id',$v); }
    function get_category()       { return $this->get_field('category'); }
    function set_category($v)     { return $this->set_field('category',$v); }
    function get_section()        { return $this->get_field('section'); }
    function set_section($v)      { return $this->set_field('section',$v); }
    function get_topic()          { return $this->get_field('topic'); }
    function set_topic($v)        { return $this->set_field('topic',$v); }
    function get_help()           { return $this->get_field('help'); }
    function set_help($v)         { return $this->set_field('help',$v); }

    static function begin($where = "")
    {
        return new DbHelpIterator("SELECT * from `help` $where");
    }
}; // class DbHelp

Creating the CBF object

Create cbf/help.php containing the following. As explained earlier this is really just to get started; in the future, extra logic related to the model may be added in here. For now there is none, so a skeleton will suffice.


class CbfHelp extends DbHelp
{
}

Using the CBF object

Basically you can easily create, load and delete records:

// create a new record. ID is automatically assigned.
   $helpr = new CbfHelp();

// $helpr now contains a valid record which includes the created id, so
// load a copy into another variable.

   $help2 = new CbfHelp($helpr->get_id());

// Now use an iterator to dump the entire table.

   $helpit = new CbfHelpIterator("SELECT * FROM `help`"); // illustrative; normally SELECTs belong in the Cbf/Db classes
   // or $helpit = CbfHelp::begin();
   if ($helpit->begin()) {
       do {
           $record = $helpit->current();
           echo $record->get_id()." : CAT= ".$record->get_category();
       } while ($helpit->next());
   }

// now get rid of the record.
   $helpr->delete(); // method name assumed here, from the DELETE support DbEntity provides

Creating the View

View creation is relatively simple. As documented elsewhere a View in ZXAF is built from a ViewItemList which is merely a set of ViewItems. Each ViewItem relates to one field from the entity. Firstly though we will need to include the prerequisites:

require_once "views/view-main.php";
require_once "cbf/help.php";

$entity = new CbfHelp();

Now we can create a simple view by doing the following:

$vil = new ViewItemList();
$col = &$vil->add(new ViewItem("category","Category"));

$col = &$vil->add(new ViewItem("section","Section"));
$col = &$vil->add(new ViewItem("topic","Topic"));
$col = &$vil->add(new ViewItem("help","Help"));
$col = &$vil->add(new ViewItem("type","Type"));
$col->set_output_transform(new ViewItemTransformParams());
$col = &$vil->add(new ViewItem("visible","Visible"));
$col->set_output_transform(new ViewItemTransformCheckbox());

$col = &$vil->add(new ViewItem("id",""));
$col->set_output_transform(new ViewItemTransformFieldAction("helpsite","form1"));

That's the view created. Each ViewItem constructor takes two arguments: the first is the field name, the second the field description (for use in headers/titles/prompts etc). The Type element has an associated ViewItemTransformParams which will take care of the translation between the numeric ID and the related record in the params table. This concept of the ViewItem providing a transformation routine to manage the output and input of individual elements is documented elsewhere and is a core part of the flexibility of the framework.

The ViewItemTransformFieldAction provides the Edit and Delete buttons. Notice that it takes the ID (which is the entity ID), so that it knows which record to pull up for viewing or deletion.

The Visible element is presented via a checkbox transform. If we remove this, the raw value of 1/0 will be output and may still be edited, but this is a good indication of what the ViewItemTransform concept is capable of.

The view is responsible for providing the output, which could be an input form or a list presented in the form of a table. There are three main Views available by default:

  • TableView - readonly tabular output of all elements as provided by the DbIterator
  • TableViewEdit - allows modification. There is added flexibility here as the edit wrapping is provided by either a click to edit version (ViewElementProviderInPlace), or by an element provider that will output either static or editable fields depending on the view (ViewElementProviderModeBased). There is also ViewElementProviderOnChange that allows elements to cause updates to other views when changed.
  • FormView - presents a single entity as a two column form (prompt and value)

Now we need to create the views. All three of these views are simply different presentations of the contents defined in the ViewItemList. Built into all of the views is the automatic update (via Ajax) when any value is changed. On the example page there is a fourth representation of this entity, which is the menu on the left hand side. Notice that this is also automatically updated when any changes are made.

$tv = new TableViewEdit($entity, $vil, 'formx');
$v2 = new TableView($entity, $vil, 'form2');
$v3 = new FormView($entity, $vil, 'form3');

Creating the page

All that is required now to create a complete working page (within the template provided by the standard webpage object) is to create an instance of WebPage and request that each view outputs its content.

$page = new WebPage("Help");

Update is performed automatically, as is save. There are clever things that can be done with the View, the ViewItemTransform and the ViewContainer to get different results, such as using jQuery to add scrolling to a table, or providing postback-based table paging.

Directory structure and inclusion of third party libraries into projects

I'm now using ZXAF for a project; it's tried and tested code just repackaged slightly, so it makes a lot of sense. What doesn't make sense, though, is the directory structure that the current version has, because it really doesn't lend itself to being part of a larger system, or more importantly part of a composite system.

During the initial design it looked as though it wouldn't be a problem. I thought that I'd probably use ZXAF in its entirety for any given project, so rather than spend time trying to figure out something super scalable and elegant I went with a simple approach. The original idea was that ZXAF would be integrated by exporting from svn (or extracting from tar) into the root of the project. Sounds plausible, and actually it probably would work well. However what it doesn't allow me to do is to work on ZXAF and the project at the same time and have the whole lot controlled by different SVN repositories.

Inclusion of third party libraries into any software project or package or system has always been, for me at least, a dilemma: on the one hand I don't want the source tree polluted by a whole raft of unrelated source code, but at the same time I want to be able to check out the project from source control and be able to build and run it, reliably and consistently. Although this is really more of a problem with compiled and linked projects (because there all we need are the libs), it still applies to PHP, but I suspect that the solution is different.

What the solution will finally be is still something that I'm thinking about, but it looks likely to be a case of putting ZXAF into a sub-directory, or maybe just moving the areas that are changed for a project (the DB, CBF and VIEWS) so that there is a system version and a project version of each.

Event Messaging in ZXAF

The hardest thing to grasp completely with ZXAF is probably the way that the inter object communications can be used.

The easiest way of showing this is to refer to views/view-main.php, search for GlobalEventBus::notify_all and you will see many occurrences. Each one of these messages sent is really related to the standard stuff that you need to be in the page container. The beauty of the event notifications is that the view module does not concern itself with how the message is handled, all it knows is that someone, somewhere will receive the message and do the right thing with it.

This is important because it decouples view-main from the WebPage container, it frees view-main from specific chunks of code related to templates.
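The core of the pattern can be sketched in a few lines. The class and method names here mirror the calls shown in this section, but this is an illustrative sketch under that assumption, not the ZXAF implementation:

```php
<?php
// Minimal sketch of the GlobalEventBus pattern: senders call notify_all()
// and every registered recipient sees the message. The sender never knows
// (or cares) who handles it.
class GlobalEventBus
{
    private static $recipients = array();

    public static function register(callable $recipient): void
    {
        self::$recipients[] = $recipient;
    }

    public static function notify_all($message): void
    {
        // deliver the message to every registered recipient in turn
        foreach (self::$recipients as $recipient)
            $recipient($message);
    }
}

// e.g. the page container registers itself...
GlobalEventBus::register(function ($msg) { echo "page saw: $msg\n"; });
// ...and a view module fires an event without knowing who will handle it
GlobalEventBus::notify_all("register-javascript");
```

The decoupling falls out naturally: view-main only depends on the bus, never on WebPage.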

The other benefit is that the recipient can manage the messages as they come in, for example when using the following to request the inclusion of a javascript module:

GlobalEventBus::notify_all(new MessageRegisterJavascript(MessageRegisterJavascript::Type_Script, "javascripts/jquery.timeentry.js"));

It is the recipient that can decide to only do this once, so each object within view-main doesn't need to worry whether that script has already been included; it can simply state that it needs the module included, and be confident that it will be.
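A sketch of what such a recipient might look like on the receiving side; the class name and methods here are assumptions for illustration, not the ZXAF code:

```php
<?php
// Hypothetical recipient for MessageRegisterJavascript-style requests:
// it deduplicates, so a script is emitted once however many views ask for it.
class ScriptRegistry
{
    private $scripts = array();

    // called once per registration message received from the event bus
    public function receive(string $script): void
    {
        if (!isset($this->scripts[$script]))
            $this->scripts[$script] = true;   // first request wins; repeats are ignored
    }

    // emit one <script> tag per distinct registration
    public function output(): string
    {
        $out = '';
        foreach (array_keys($this->scripts) as $src)
            $out .= "<script src=\"$src\"></script>\n";
        return $out;
    }
}

$reg = new ScriptRegistry();
$reg->receive("javascripts/jquery.timeentry.js");
$reg->receive("javascripts/jquery.timeentry.js");  // duplicate request, ignored
echo $reg->output();
```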

Another example is the callback processing during an Ajax update. The objects in view-main don't need to be concerned with how the updated HTML will be fed back to the browser; all they need to do is create the HTML together with the identifying key (id) and send a message as follows:

GlobalEventBus::notify_all(new MessageAjaxUpdate($key, $html));


Upon receipt the WebPage can store this, and later on during the processing it will convert this into JSON, output it and exit, because the WebPage knows that this is the right action, whereas the objects in view-main do not need to have this logic wired in.
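The receiving side of that exchange can be sketched as follows; the collector class is a hypothetical stand-in for what WebPage does internally:

```php
<?php
// Sketch of the WebPage side of the Ajax update: collect ($key, $html)
// pairs from MessageAjaxUpdate-style messages, then emit one JSON reply.
class AjaxUpdateCollector
{
    private $updates = array();

    public function receive(string $key, string $html): void
    {
        // a later update to the same DOM key replaces the earlier one
        $this->updates[$key] = $html;
    }

    public function to_json(): string
    {
        return json_encode($this->updates);
    }
}

$c = new AjaxUpdateCollector();
$c->receive('form1_category', '<td>Admin</td>');
echo $c->to_json();
```

The browser-side script then walks the JSON keys and replaces the inner HTML of each matching DOM element.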

As said at the beginning this is a hard concept to really grasp, and these worked examples only start to demonstrate the power that can be harnessed from this technique.

PHP Implementation of a reference monitor to provide record level access

If you refer to the classic design of a reference monitor, it mediates every access by a subject to an object, consulting an authorisation database to decide whether the access is permitted.

To add record level protection to certain tables firstly we’re going to use the “interface record segmentation” method, which allows us to provide the authorisation database within the protected subject – it is the subject that becomes responsible for providing ownership, group and protection information. This works well with tables in a database implemented at a low level within the active record implementation, however it doesn’t protect the database or entities against direct access via SQL (because that’s not what we’re trying to achieve, and to do that would require a traditional reference monitor wired in at the driver level).

With this approach, protected objects need no extra code to support:

  • prevention of unauthorised access to an object by ID
  • filtering of lists (via DbIterator) to contain only accessible objects
  • protection against modification of records
  • protection against deletion of records
  • granting of access to a record to a user, group or everyone

Description of elements of the ZXAF reference monitor

Element      Description
Subjects     SystemUser derived entities (users)
Objects      DbEntity derived items (tables/records)
Audit trail  Not yet implemented, but will eventually provide a record of all security-relevant events, such as access attempts, successful or not

Implementation overview

Two interfaces were added: IControlledObject for a DbEntity and ISubject for a User.

The data that is required (stored on each protected record) is:

  • IControlledObject.Protection INT
  • IControlledObject.Owner INT
  • IControlledObject.Group INT
  • ISubject.Group INT
  • ISubject.Privileges varchar(255) – comma-separated list of permission names

Reference implementation

Record interface segmentation - Object Mapping

To add object protection to ZXAF the Reference monitor requires protectable objects to provide interfaces, namely IControlledObject for a DBentity.

The data that is required (stored on each protected record) is:

  • Protection INT
  • Owner INT
  • Group INT

To implement this we will add fields to each record corresponding to the above; when these fields are present (detected by DbEntity) this effectively provides low-level object protection via the reference monitor.

This I’m calling record interface segmentation – where a record in the DB has extra fields that relate to the interface that the implementing object implements, and the presence of these fields means that the interface is implemented.

The ISubject requires a UserID, GroupID and list of permissions, which is implemented by adding the following fields:

  • Group INT
  • Privileges varchar(255) – comma-separated list of permission names

The net result was a quick and easy way to add interfaces in an extensible way to an existing object – and the existence of the fields can be used to determine if the interface is applicable, which in this case means whether a DB table (record) is protected and needs to have permission checking performed via the reference monitor.
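The kind of check the reference monitor makes with the Protection/Owner/Group fields can be sketched as follows. The rwx-style octal protection mask is an assumption borrowed from Unix file modes; the actual ZXAF encoding may differ:

```php
<?php
// Illustrative permission check over the IControlledObject fields.
// Octal mask layout assumed: owner / group / world, write bits as in Unix.
const PROT_OWNER_WRITE = 0200;
const PROT_GROUP_WRITE = 0020;
const PROT_WORLD_WRITE = 0002;

function can_write(int $user_id, int $user_group,
                   int $owner, int $group, int $protection): bool
{
    if ($user_id === $owner)              // subject owns the record
        return (bool)($protection & PROT_OWNER_WRITE);
    if ($user_group === $group)           // subject is in the record's group
        return (bool)($protection & PROT_GROUP_WRITE);
    return (bool)($protection & PROT_WORLD_WRITE);  // everyone else
}

var_dump(can_write(7, 1, 7, 1, 0640));  // owner with owner-write set: true
var_dump(can_write(9, 1, 7, 1, 0640));  // group member, no group write: false
```

DbIterator filtering is the same test applied per row while stepping through a result set.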


View update complexities

On a page where there are lots of views but all from the same entity there is an interesting challenge when an individual item is updated within a view. Normally this is catered for during a full postback simply because the entire page is recreated dynamically, so it is enough that the post is processed during the construct phase.

However this isn't true when we are using an ajax update to modify an item.

So what we need to happen is that when any item within a view is updated, all views that reference the item should also be updated. There was an immediate and seemingly elegant solution using one of my current favourite techniques, which is to label elements within the DOM by adding a class to them. Then all that had to be done was to return a simple text value representing the new value of the item.

This all worked fine, nice and easy with a single javascript call


Except that it completely broke most of the rules of good development, and whilst it worked for a simple case it didn't work for anything more complex.

So, what is needed is something that will allow all entity items to have their inner html updated using the same objects that originally generated them.

This is further complicated because, of course, there isn't a coherent list of the views that are currently used, and no cross reference between them, and furthermore I really don't want to add this as each view should be as independent as possible. Within the PHP I can send out a message via the global post office which could be received by the views, except that doesn't necessarily help to get the data onto the web page, and global messages aren't really the right solution in this case (because of the way that the objects are constructed and updates are processed in the constructor).

I don't want to change the way that the objects are constructed; there's still a large post-it on top of the monitor that reads ZXAF - NO COMPROMISES.

There are two distinct parts to the solution, the first being obvious in that the ajax should return JSON containing all of the identified DOM elements that need to have the HTML replaced by that provided.

The second part of the solution was very difficult to get right, and I hope it's right. Initial impressions look good, but it's new and hasn't been onto the battlefield yet (whereas most of the original code came complete with years of battle scars).

The solution to the PHP update problem was a large rework of the way that DOM elements were named to make them consistent and identifiable. Then to change the naming of the form elements to be based on their entity item name. This was the difficult bit to figure out, and I think it's right. The logic is as follows:

Where a form references an entity, any updates to any part of that entity are relevant to all forms that reference the same entity, so any form that uses the same entity is equally capable of updating it. The update logic in entities is smart, such that only changes are written to the backing store.

What that means is basically that it doesn't matter which view prompted the update, any view can process it, and will process it, because by processing it we can get the HTML for the update that is required, and build this into a JSON update message.

The actual building of the JSON update is performed by the WebPage object, via messages.

So, changes made, and with a few small faults that were easily fixed, it all works fine.

I can have an entity on a view, update it from that view, and all the items on the web page that reference it will be updated.

At the same time I also implemented the ability to cause this update mechanism to support linked entity items, such as an href, where the text and the target are usually two fields. It looks promising that the design is good because this change only took about 5 lines of code, and I was expecting a lot more.

ZXAF Views and Forms problems [resolved]

This has now been resolved with the new views that arrived in r11; however I'm keeping the page because it is a good indicator of middle-aged code spread.

Currently we have the view-main module; however we have developed into a situation where there are two different methods for presenting form and or table view (for display and/or edit).

So, we have ViewItems and FormFields, and they are needlessly different as their work and function is very similar; the major difference being that FormFields are currently much more closely tied to the DB entities, and as such can directly update values; whereas ViewItems merely have an ID and expect not to be able to modify their data (in this sense a View is originally Readonly, whereas a Form was always intended to allow modification of elements).
This stems from the way that the view module has been built by refactoring existing working code and we really need to fix this sensibly.

A form is created as follows:

$fields = array();
$validator = new ValidateFieldRequired($edit_buttons);

$ffp = new FormFieldProvider($edit_buttons);
$fields = array();
$form = new Form("form1", $fields,"Details","HelpSite Details","admin/helpsite");

$form->add_field ( new FormField($si, "Category", "category", $validator, $ffp));
$form->add_field ( new FormField($si, "Section", "section", $validator, $ffp));
$form->add_field ( new FormField($si, "Topic", "topic", $validator, $ffp));
$form->add_field ( new FormField($si, "Help", "help", $validator, new HtmlEditorFormFieldProvider ()));

It is processed as follows:

$entity = new CbfEntity();
$form = make_form($entity);

if ($form->ajax_process($entity))

This is fairly clean and quick; although the Form constructor needs rationalising to reduce the number of parameters, it is tolerable.
Each form field must be constructed with a FormFieldProvider derived object. This object is used to define the way that the form field operates.

For displaying lots of data (from an iterator) we use a TableView, which is more developed than the Form, however it's not as good at editing: typical use of a TableView is as follows:

$tv = new TableView('id');
$col = &$tv->add_column("user","UserID");
$col = &$tv->add_column("type","Type");
$col = &$tv->add_column("date","date");
$col = &$tv->add_column("table","Item");
$col = &$tv->add_column("actioncode","Action");
$col->set_output_transform(new ViewItemTransformHtmlSpecialChars());
$col = &$tv->add_column("ipaddress","IP");
$col = &$tv->add_column("additional","Details");
$col->set_output_transform(new ViewItemTransformDetails());
echo $tv->get_output(new CbfDbIterator());

Internally the view works closely with the iterator and the db_entity. By default the value of the field specified in add column (param1) is merely output, however we have the concept of an output transformation, which is very useful as it is an object (derived from ViewItemTransform) that will be responsible for preparing the value for output (the output is still done by the class).
The output transformation can be used in a simple case to format decimals, however it can also lookup values from other tables, and substitute names for ids etc.
Also output transformation objects may be chained by passing an output transformation object to the constructor.
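The chaining idea can be sketched as follows. The class names here are illustrative, modelled on ViewItemTransform rather than copied from it: each transform prepares the value and then hands the result to the next transform in the chain, if one was passed to its constructor.

```php
<?php
// Sketch of chained output transformations: format first, then escape.
abstract class Transform
{
    protected $next;

    public function __construct(?Transform $next = null)
    {
        $this->next = $next;   // optional next link in the chain
    }

    // apply this transform, then delegate to the rest of the chain
    public function apply($value)
    {
        $value = $this->prepare($value);
        return $this->next ? $this->next->apply($value) : $value;
    }

    abstract protected function prepare($value);
}

class DecimalTransform extends Transform
{
    protected function prepare($value)
    {
        return number_format((float)$value, 2);   // e.g. 1234.5 -> "1,234.50"
    }
}

class HtmlSpecialCharsTransform extends Transform
{
    protected function prepare($value)
    {
        return htmlspecialchars($value);   // make the value safe for HTML output
    }
}

// format as a decimal, then escape the result for output
$chain = new DecimalTransform(new HtmlSpecialCharsTransform());
echo $chain->apply("1234.5");
```

Because each link only knows about the next, a lookup transform (id to name) can be slotted in front of an escaping transform without either knowing the other exists.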