Zero to A.I.

Artificial Intelligence has long been the preserve of science fiction.  However, towards the end of last year I was asked to give my usual trends lecture on the upcoming 12 months.  It struck me during this preparation that, for the first time, I could see the possible emergence of A.I. capability in the very near future.  This is not because of recent advances in personal digital assistants such as Apple’s Siri, or the fact that Facebook has started using innovative algorithms to extract learning from its social network community, or even the number of acquisitions we are seeing amongst Silicon Valley’s elite.  To understand why I have arrived at such a conclusion we need to step back and answer a few fundamentals.  Moreover, we need to examine the evolution of computing and how it can be leveraged to bring intelligence to its current, mere data processing role.

 

Let’s begin this journey.

 

216 years ago Michael Faraday began to investigate the behaviour of electricity and, in particular, the effects of electromagnetism. In 1897 Sir Joseph John Thomson first proved that the electron is a particle: probably the defining moment that enabled the birth of modern electronics and computing.  Without the properties of the electron, and without understanding how electromagnetism is conveyed by these fundamentals of quantum mechanics, we could not even begin to develop computers able to manipulate electrons in such a way as to run a stored programme.  Of course Charles Babbage proved our ability to compute mechanically through his Difference Engine of 1822, however without J.J. Thomson’s discovery Babbage would have been left fashioning new gears for his machine at a faster rate than the Saturn V burns rocket fuel if it were to stand any chance of computing the simplest of spreadsheet formulas.  Clearly the electron was the only way to go. It was then over to the likes of Alan Turing, John Bardeen, Walter Brattain and William Shockley, to name but a few, to continue the journey of electron and electromagnetic manipulation that evolved into today’s computer.

 

This got us this far, and I will pick the story up later in this article.  However, if we are to even begin to understand how we could turn today’s modern computer into anything near intelligent, we must first answer a few basic questions and ponder a few basic thoughts.

So first and foremost how do we define intelligence?

There may, indeed, be many ways to answer this simple question, including one’s capacity for logic, abstract thought, understanding, self-awareness, communication, learning, emotional knowledge, memory, planning, creativity and problem solving. Taken together this seems like an impossible list for today’s computing to grapple with.  But focus on the key words of logic, communication, memory, planning and problem solving, and all are traits of today’s modern computer. So we may have something to work with after all.  Another view is that intelligence is the ability to perceive information and retain it as knowledge to be applied towards adaptive behaviour within an environment. In essence, the ability for the machine to learn from its environment, thereby gaining knowledge and perception.

Let’s pause on this for a second. If we can perceive information, learn from an environment, extract knowledge, solve problems, communicate and process logic, then there are untold applications for today’s digital world. Consider big data alone and what this means to our understanding when processing large volumes of visits and/or transactions on the digital estate of the average blue chip. This is big! This is very big for our world.

 

But how do computers compete with humans?

Take our brains, which are twice as large as those of our nearest intelligent competitor, the bottlenose dolphin, and three times as large as a chimpanzee’s. Our brain is arguably the most complex single object in the known universe: 85 billion neurones plus roughly as many cells of other types. Each neurone is an electrically excitable cell with between 10,000 and 100,000 connections, creating networks that shape the flow of electrical signals to encode our memory and intelligence. In essence, the training and building of these networks forms our software.

 

8,500,000,000,000,000 neurone connections

vs

400,000,000,000 stars in our galaxy

Pretty complex to say the least.

 

You could argue Nature 1 vs Electronics 0.

 

However, consider today’s most powerful microprocessors. On average we are looking at:

100,000,000,000 transistors (switches)

vs

8,500,000,000,000,000 human neurone connections.

 

Obviously a switch cannot compete with a neurone and its connections, each of which is comparable to a software programme in its own right. However, factor in the timescales:

Nature roughly 4 billion years of evolution

vs

Electronics 216 years

You could argue it’s more than achievable and is just a case of when rather than if.

After all each modern computer is just a bunch of circuits operating at high speed.

Back to Basics

Let’s look at the fundamentals of how these circuits give us our computing capability and how we can roll this forward to create intelligence.


Figure 1.0 – basic electronic circuit.

Everyone surely remembers the elementary science experiment of building a basic circuit with a battery, a light and a switch. Using this concept I want to illustrate how we build up today’s computer, and how this logic develops into something equivalent to those neurones and connections we outlined in our battle with nature.


Figure 1.1 – How the circuit can be configured into a simple logic gate for an OR

Developing the circuit by organising switches and wiring enables us to build basic logic gates.  Logic gates form the basic building blocks of our computer programmes and computer memory.  For those of you not familiar with electronics or computer science, it’s worth spending a few minutes tracing out what the electricity is doing here and how the configuration of the circuit can enable some rudimentary logic.


Figure 1.2 – How the circuit can be configured into a simple logic gate for an AND

 


Figure 1.3 – How the circuit can be configured to make a NOT logic gate (note I’ve moved from standard circuit notation to logic symbols to make it easier to read; the wires quite quickly get messy).

 

Whilst there are many types of gate, these three basic ones, OR, AND and NOT, give us the capability to create computer memory that can handle one bit of data, i.e. we can store a value of 0 or 1 in our circuit for as long as power is applied to it.  If you have time it’s worth working through the logic here. It does work! If you don’t trust me, this circuit will hold a value of 0 or 1 indefinitely as long as it receives power.
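To make the logic concrete, here is a minimal sketch in Python that models the three gates as truth functions and then wires them into a gated latch that keeps hold of a bit once it has been written. It illustrates the idea rather than simulating a real circuit:

```python
# Minimal truth-function model of the three basic gates.
def OR(a, b):   return a | b
def AND(a, b):  return a & b
def NOT(a):     return 1 - a

# A gated latch sketched from the gates above: while 'enable' is 1
# the stored bit follows 'data'; while 'enable' is 0 the bit is held.
def latch(stored, data, enable):
    set_line   = AND(data, enable)          # asks to write a 1
    reset_line = AND(NOT(data), enable)     # asks to write a 0
    return OR(set_line, AND(stored, NOT(reset_line)))

bit = 0
bit = latch(bit, data=1, enable=1)   # store a 1
bit = latch(bit, data=0, enable=0)   # enable low: the value is held
print(bit)                           # -> 1
```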


Figure 1.4 – 1 bit memory circuit configured from NOT, AND and OR gates

Once we can handle one bit of memory we can scale the electronics up from 1 bit to 32 or 64 bits, and then scale out further, collecting bits into bytes of memory and adding many bytes together.  Roughly speaking, one byte can hold one character through its ASCII representation; ASCII is a basic binary encoding method for characters. The ASCII code for the lowercase letter ‘a’ in binary is 01100001, which takes 8 bits, or the equivalent of 8 of the memory circuits outlined in figure 1.4.
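A quick, purely illustrative Python snippet makes the point about bytes and ASCII:

```python
ch = 'a'
bits = format(ord(ch), '08b')   # ASCII value 97 as an 8-bit pattern
print(ch, ord(ch), bits)        # -> a 97 01100001
```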

So we can now store this information. However, imagine how many circuits would need to be wired to implement even one byte of memory; you can appreciate the wires and switches needed to make this work. Hence the need to scale the circuit down to a miniature format.  The first stop on this journey is the transistor, which for me is the most remarkable achievement of the modern electronics age.  The manipulation of quantum electrodynamics (QED) enables us to miniaturise this simple switch through the use of semiconductor material.  This was then further miniaturised into the microprocessor.


Figure 1.5 – The transistor and the beginning of miniaturisation


Figure 1.6 – Microprocessor magnified showing numerous transistors

But how do we build our computer?  To start, we need to focus back on our memory.  The thing about memory is that it allows us to remember: it can remember a value, it can remember an instruction and it can remember the address of a memory location.  Each bit, i.e. 0 or 1, can be wired back into our computer so that each value can be fetched and transferred to the central processing unit for processing.


Figure 1.7 – shows our memory broken down into bits, i.e. the 0s and 1s, which in turn are made into bytes such as 01100001.  The memory in this instance consists of 256 bytes, with each memory address location shown in grey and its associated value shown in white.  Each bit is physically wired into the CPU via the bus.  There are two types of bus, the data bus and the address bus, plus a simple 2-bit control bus for “set and enable” (set stores a value, enable retrieves a value).  The way the memory works is to set a value at a particular address location.  For example, to store the value 00100001 at address 01100001 (the top grey location), we apply current to the address bus wires in the pattern 01100001 and send 00100001 down the data bus wires; when the control bus’s set line is raised to 1, the 00100001 is stored in the circuit wiring for that particular byte of memory at that particular address location.

In short, we have a mechanism to set and enable data values linked to an address in memory.  Each memory circuit is made up of the logic gates outlined in figure 1.4, and we can fetch and set data in those logic gates.  This means we can store information for calculation and store information for instructions: two of the most important components of a computer programme.
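Here is a hedged sketch of that set/enable mechanism in Python; the class and method names are my own, purely for illustration:

```python
class Memory:
    """256 bytes addressed by an 8-bit address bus, with a data bus
    and a simple set/enable control bus, as described above."""
    def __init__(self):
        self.cells = ['00000000'] * 256   # each cell is one byte

    def set(self, address_bus, data_bus):
        # control bus 'set': store the value on the data bus
        self.cells[int(address_bus, 2)] = data_bus

    def enable(self, address_bus):
        # control bus 'enable': put the stored value back on the data bus
        return self.cells[int(address_bus, 2)]

ram = Memory()
ram.set('01100001', '00100001')    # write to address 01100001
print(ram.enable('01100001'))      # -> 00100001
```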

We can now deal with fetching data and storing data in main memory.  Now we need to lift the lid on the CPU to see what’s going on underneath.


Figure 1.8 outlines my basic CPU design.  You can see from this diagram our wiring from the address bus, data bus and control bus.  These link out to the right of the diagram back to our main memory outlined in figure 1.7.  Inside the CPU you can see several components:

  1. CPU Clock
  2. Control Unit
  3. Arithmetic Logic Unit (ALU)
  4. A bunch of registers

Arguably the most important component of this architecture is the CPU clock; this is what identifies a processor’s speed, for example 2 GHz (2 billion cycles per second).  Like the memory circuit, the clock is made up of logic gates configured to send a pulse of activity around all the circuits.  When the clock says go, the circuits switch and move on to process the next instruction. Ultimately all the CPU needs to do is fetch and execute an instruction; this, by no coincidence, is called the fetch-execute cycle. Figure 1.9 outlines how memory is just made up of instructions, each of which belongs to our programming language. If we can work through this list we can run a programme.

 

memoryprogramme.png

Figure 1.9

The next component of our CPU is the control unit, again a bunch of logic gates.  The control unit works with the clock to coordinate activity within the CPU.  It is intrinsically linked to a bank of registers; these registers store data in exactly the same way as memory, but much closer to the CPU.  They are there to store things like the address of the next instruction, the value of the instruction just processed, the address of the memory location to store the result, and the values of the two numbers to be added together.  The control unit works out what the instruction is, i.e. load, add, store.  It then uses the registers to keep track of where the data is to be fetched from, stored to, and so on.

 

The final component of our CPU is the arithmetic logic unit.  This is what does the computation.  In its simplest form it is a giant calculator made up of logic gates.

 

So in order for this to work, our CPU needs a programme.  This could be as follows:

Address Location = Data value in binary

01100011 = LOAD (10101101)

01100101 = Address(10011011)

10011011 = 9(00001001)

10011010 = ADD (10111011)

01100011 = LOAD (10101101)

10010111 = Address(10011111)

10011111 = 6 (00000110)

The CPU will fetch the first line, which holds the instruction LOAD.  LOAD means nothing other than the binary value 10101101; it is an arbitrary code telling the control unit to load the contents of the next memory location into a register.  That next location happens to hold an address, i.e. 01100101 = Address(10011011), which in turn points to memory location 10011011 = 9(00001001), from which the value 9 is fetched.  The CPU then fetches the ADD operation, which in turn goes on to fetch the value 6.  The programme is basically asking us to add 9 to 6.
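Here is a toy fetch-execute loop in Python that runs a programme of this shape and adds 9 to 6. The memory layout and the HALT opcode are simplifications of my own; only the LOAD and ADD codes are taken from the listing above:

```python
# Toy fetch-execute cycle: LOAD pulls a value from an address into the
# accumulator, ADD adds a value from an address, HALT stops the clock.
LOAD, ADD, HALT = 0b10101101, 0b10111011, 0b11111111

memory = {
    0: LOAD, 1: 0b10011011,        # LOAD the value held at address 10011011
    2: ADD,  3: 0b10011111,        # ADD the value held at address 10011111
    4: HALT,
    0b10011011: 9,                 # the data: 9 ...
    0b10011111: 6,                 # ... and 6
}

pc, accumulator = 0, 0             # programme counter and register
while True:                        # each pass is one tick of the clock
    instruction = memory[pc]       # fetch
    if instruction == HALT:
        break
    operand_address = memory[pc + 1]
    if instruction == LOAD:        # execute
        accumulator = memory[operand_address]
    elif instruction == ADD:
        accumulator += memory[operand_address]
    pc += 2

print(accumulator)                 # -> 15
```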

You get the picture: the control unit is manipulating the registers, memory and ALU to process the programme held in memory, using the address and data buses to transfer data and the control bus to set or enable values in memory and registers, all synchronised and controlled by the CPU clock.  The concept is simple, but the speed is what gives us the ability to process billions of instructions per second, hence the many wonderful things you can do with today’s computers, from spreadsheets to augmented reality.  As a side note, the binary numbers are our machine code. To make life easier for programmers, instructions are given mnemonic names, giving us an assembly language, i.e. load, address, add and so on; each microprocessor has its own mapping from assembly to machine code instructions.  Assembly language in turn can be abstracted further into higher-level programming languages such as C++, Java and others, where compilers interpret our language and map it to the processor specifics.  Compilers allow us to write one set of code that can then be translated for many different CPU architecture types.

 


Figure 2.0 – MOS 6502 CPU up close and personal. This is the microprocessor that powered the famous Apple II computer.  The top right of the image shows the CPU clock, the big block at the bottom shows the control unit and registers, and the area in the middle towards the top shows the ALU.  The buses are the neat grid lines, while the logic gates, made up of many layers of tiny transistors, appear rather messier in design.

So we’ve built a CPU. Let’s make it smart

Any computer scientist knows the basics of the above.  The challenge is to make it intelligent.  How do we do this, and what do we have to work with?

 

  1. Speed to execute and process billions of instructions per second.
  2. The ability to apply logic, i.e. to encode rules and knowledge.
  3. The ability to reprogramme ourselves.
  4. The ability to process instructions in parallel; today’s modern processors have many cores, i.e. many CPUs that can run in parallel.
  5. The ability to remember on a large scale.

So in essence we have the building blocks of intelligence; we just need the programmes to make it work. It’s all about the software engineering!  But with software engineering there are many architectural approaches to implementing a programme that could deliver intelligence.  A.I. can be considered to have the following schools:

 

  1. Knowledge Representation
  2. Planning
  3. Natural Language Processing
  4. Machine Perception
  5. Motion and Manipulation
  6. Long term goals
  7. Neural Networks

Each branch of A.I. is exciting in its own right, however for me the area closest to mimicking the way our brains work is that of neural networks, which have been around for some time.  To be precise, they were initially pioneered as mathematical networks based on algorithms using threshold logic.  This was as early as 1943, and was obviously not computational in the modern sense of the word but purely a mathematical model representing inputs, calculations and outputs, which in turn feed into new inputs.  Our brains don’t understand the concept of mathematical notation when we go down to the level of how neurones process electrical signals through the tens of thousands of connections they develop.  However, each connection links to more connections, forming a network.  This is our software, but a software that is able to reprogramme itself constantly based on calculations, outputs and environmental inputs. So in order to build A.I. software we must encapsulate the following capabilities:

 

  1. Encode rules in a programme.
  2. Enable that programme to take input in the form of data and the output of previous calculations.
  3. Give the programme the ability to score the success of the result of its calculation.
  4. Give the programme the ability to rewrite itself, i.e. to encode the rules again to optimise the result.

 

In short: input, calculate, output, learn, reprogramme.  As a developer, even to think of building a universal programme that can rewrite itself is a task in its own right.  It requires logic to understand what to rewrite.  It requires logic to estimate the success of the result, i.e. is it worth rewriting?  Moreover, that doesn’t even cover the task of building the logic and rules for the particular problem you want to apply intelligence to.  It’s all very well building some software focused on a particular implementation, but what about a universal problem solver, i.e. one computing mechanism that can be configured or programmed to solve many different types of problems?  Isn’t that the strength of computers: they are universal computing machines regardless of what they are asked to compute?

 

What is needed is a framework to standardise our approach, and as with all things in the universe, when it starts to get hard we turn to mathematics to dumb down our thinking!  Special relativity is complex to say the least; how dare Einstein simplify it (E=mc²)!


Figure 2.1 – an Artificial Neural Network (ANN) dependency graph. The accompanying equation expresses this graph as a composition of weighted functions.
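For readers who want that notation, a typical form (my sketch of the standard textbook expression, with K standing for the activation function, rather than a reproduction of the original image) is:

```latex
f(x) = K\!\left(\sum_i w_i \, g_i(x)\right), \qquad
g_i(x) = K\!\left(\sum_j w_{ij} \, h_j(x)\right)
```

Each node in figure 2.1 computes one of these functions; the arrows into it are its weighted inputs.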

A typical ANN dependency graph has three types of parameters:

  1. The interconnection pattern between the different layers of neurones
  2. The learning process for updating the weights of the interconnections
  3. The activation function that converts a neurone’s weighted input into its output activation

As you can see from the diagram (figure 2.1), we can take an input and apply multiple calculations, which in turn can feed into other calculations.  However, there is one fundamental component that makes the whole framework work, and that is the cost function.
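A common choice of cost function, given here purely as an illustration, is the mean squared error over N training examples:

```latex
C(w) = \frac{1}{N} \sum_{n=1}^{N} \bigl( f_w(x_n) - y_n \bigr)^2
```

where f_w(x_n) is the network’s output for input x_n, y_n is the desired output, and w are the weights being learned.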

The cost function is an important concept in learning, as it is a measure of how far away a particular solution is from an optimal solution to the problem being solved. Learning searches through the space of possible solutions to find the one with the smallest possible cost. This is achieved through mathematical optimisation.
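To make that search concrete, here is a minimal sketch in Python of a single artificial neurone driving its cost down by gradient descent; the data, model and learning rate are invented for illustration:

```python
import random

# Training data: learn y = 2x + 1 from a handful of example pairs.
examples = [(x, 2 * x + 1) for x in range(10)]

w, b = random.random(), random.random()   # the 'programme' we will rewrite
learning_rate = 0.01

for _ in range(2000):
    # Cost: mean squared error between prediction and desired output.
    grad_w = grad_b = cost = 0.0
    for x, y in examples:
        error = (w * x + b) - y
        cost   += error ** 2 / len(examples)
        grad_w += 2 * error * x / len(examples)
        grad_b += 2 * error / len(examples)
    # 'Reprogramme' the weights a little in the direction that lowers cost.
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(round(w, 2), round(b, 2), round(cost, 4))   # close to 2, 1 and 0
```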

 

The above sounds complicated, but what it gives us is a framework we can use to architect a software platform that provides a level of artificial intelligence.  As software engineers we can use these principles to encode rules and ensure those rules learn from the success of their application.


 

Figure 2.2 – illustrates the principle of the cost function and the neural network further.

I believe the principles of this approach will allow us to achieve a convergence of algorithms, big data and parallel computing, with the ability to weight success.  This training principle, if implemented correctly, could overcome the limitations of today’s software platforms in achieving intelligence.

However, whilst this approach gives us a framework to architect a software programme, as a software developer the prospect of building such a programme is daunting to say the least. As a development community we feed off each other through technology abstraction.  We all stand on each other’s shoulders, and incrementally we use our collective knowledge to enhance our innovation. This approach, and our current point in history, makes A.I. achievable and viable for the average commercial application.  However, we do need catalysts to allow us to start using this technology.  Fortunately there are developments on the near horizon that could allow us to take an early opportunity.

 

Emerging usable A.I. Technology

 

Without losing sight of our zero-to-A.I. approach: we have discussed, so far, the basic mechanics of today’s modern computer.  We have then attempted to represent intelligence, learning capability and rules definition through neural networks and a cost function.  All of this can be represented within a software programme of the kind outlined in our earlier example; obviously the scale, size and complexity of such a programme would be substantial, however we can see how it could be achieved with time and thought.  Now, looking ahead, how can we make it easier for ourselves and certainly more commercially viable?  My view is to follow the lead of the wider software development community through open source.  This ultimately gives us the development scale, intelligence and skills to make A.I. achievable for the average company to gain commercial value from.


Figure 2.3 – TensorFlow, an open source A.I. framework

Figure 2.3 illustrates an A.I. framework developed by Google as an open source project.  TensorFlow provides APIs that allow us to build on the concepts of a neural network and a cost function to extract intelligence from data and informational input.

 

TensorFlow’s history has been controversial: it is a second-generation technology, with the initial generation coming out of the Google Brain project.  Fundamentally a very advanced implementation of neural networking, TensorFlow is one of the most advanced and developed frameworks for A.I. available in the public domain under open source.  No doubt there are equivalent technologies kept under wraps at Apple, IBM, Oracle, Microsoft, SAP and others, however TensorFlow seems to be the most powerful yet. Certainly it is something we can get a head start with.

This library of algorithms originated from Google’s need to instruct neural networks to learn and reason in a way similar to how humans do, so that new applications can be derived which are able to assume roles and functions previously reserved for capable humans. The name TensorFlow itself derives from the operations that such neural networks perform on multidimensional data arrays. These multidimensional arrays are referred to as “tensors”, although the concept is not identical to the mathematical concept of a tensor, which we will provide more insight on shortly.
The purpose is to train neural networks to detect and decipher patterns and correlations. This framework is available to us now, but what makes it even more appetising is the release last month, May 2016, of a custom-built ASIC chip for machine learning, specifically tailored for TensorFlow and neatly named the Tensor Processing Unit (TPU).  Google revealed they had been running TPUs inside their data centres for more than a year, and found them to deliver an order of magnitude better performance per watt for machine learning.  Because each chip is inherently small in design, essentially a high-volume, low-precision 8-bit arithmetic processor, it puts the processor in the realm of Internet of Things computing, meaning we can use distributed processing for intelligence and compute learning close to the source, i.e. sensors, data or sub-systems.  This is very exciting, as it provides machine learning on a chip that could be plugged in and played anywhere.
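To give a feel for what low-precision 8-bit arithmetic means in practice, here is a generic illustration of quantising 32-bit weights down to 8-bit integers; this is a simplification of the general idea, not a description of the TPU’s internals:

```python
import numpy as np

weights = np.array([-0.52, 0.13, 0.91, -0.07], dtype=np.float32)

# Map 32-bit floats onto 8-bit integers using a single scale factor.
scale = np.abs(weights).max() / 127.0
quantised = np.round(weights / scale).astype(np.int8)
restored  = quantised.astype(np.float32) * scale

print(quantised)   # e.g. [-73  18 127 -10]
print(restored)    # close to the original weights, at a quarter of the size
```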

So how does it work?

Understanding TensorFlow means first understanding not only how a neural network works, but also the concept of tensors.  So what is a tensor?


 

Tensors are geometric objects that describe linear relations between geometric vectors, scalars, and other tensors. In terms of a coordinate basis or fixed frame of reference, a tensor can be represented as an organized multidimensional array of numerical values. The order (also degree or rank) of a tensor is the dimensionality of the array needed to represent it, or equivalently, the number of indices needed to label a component of that array.

Tensors are important in physics because they provide a concise mathematical framework for formulating and solving physics problems in areas such as elasticity, fluid mechanics and general relativity. For me, this provides the mathematical framework that Google has gone on to use to represent and map learning. A tensor can sum up the total of the learning at a particular point in time.  By letting this information flow through a graph of nodes, with each node being a computation, we allow each node to compute on the tensor and then update it, improving the information and therefore the knowledge.  This can be represented and configured in a dataflow graph, which represents our neural network of data and computations.  TensorFlow also makes use of one other concept, edges.  Edges can be seen as the inputs and outputs of a node, which enable a tensor to flow into another computational node.
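To make nodes, edges and tensors concrete, here is a minimal sketch using TensorFlow’s original graph-style (1.x) API; newer releases favour eager execution, so treat this as an illustration of the dataflow-graph idea rather than current best practice. The data is invented for the example:

```python
import numpy as np
import tensorflow as tf   # written against the 1.x graph-style API

# Nodes in the dataflow graph; the edges between them carry tensors.
x = tf.placeholder(tf.float32, shape=[None, 3])      # input tensor
y = tf.placeholder(tf.float32, shape=[None, 1])      # desired output
w = tf.Variable(tf.random_normal([3, 1]))            # weights to be learned
b = tf.Variable(tf.zeros([1]))

prediction = tf.matmul(x, w) + b                      # a computation node
cost = tf.reduce_mean(tf.square(prediction - y))      # the cost function
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cost)

# Made-up data purely for illustration: y is a weighted sum of x.
data_x = np.random.rand(100, 3).astype(np.float32)
data_y = data_x @ np.array([[1.0], [2.0], [3.0]], dtype=np.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(500):
        _, c = sess.run([train_step, cost], feed_dict={x: data_x, y: data_y})
    print(c)   # the cost falls as the graph 'learns' the weights
```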

 

TensorFlow also gives us useful tools such as TensorBoard, which helps us manage the complexity of the computations TensorFlow needs to be capable of, such as training massive deep neural networks.  These do get complex, and the architecture behind them needs to be structured to represent both the machine learning process and the data that will be used, not to mention the debugging and optimising of the programmes that get developed.

 

TensorFlow is deeply numerical.  The first applications of this technology have centred on image recognition, a natural step when dealing with reams of bitmap data, however the applications are endless.  The key to understanding where it can be applied is understanding the potential of your data.  You need to get into the mindset of how a human being looks at data and draws conclusions: what does it mean to me and my business, what decisions can I derive and act on?  Once we have this view, the question becomes how we plug in technology that can take the data, make the decision, store the learning and then use the decision within the rest of our technology.  This for me is the trickiest part, because it simply is not worth the effort to use this technology for today’s simple decisions within, say, the average e-commerce website, i.e. this customer bought this, likes these and may want to see this.  That problem is simply too simple; A.I. requires a real problem that only a human could currently decide on.  The truth of the matter is that humans reprogramme themselves quickly to perform new tasks, and to match that on a routine basis you would need to rebuild your neural network, which again is not efficient.  So the real tasks are the kinds of decisions that can be built into a framework or factory of decision making, for example tasks that require humans to process a constant, similar stream of information, make informed decisions and learn from those decisions: conversation, sentiment, image recognition, people recognition, footfall, traffic flows, scheduling, testing, performance analysis.  All of these follow the same pattern of information processing (sketched after the list below):

 

  1. Data gathering
  2. Data normalisation
  3. Step by Step Data processing to arrive at a decision
  4. Compare that decision: can we improve our model?
  5. Can we look at all those micro decisions and abstract not just micro meaning but macro meaning.
  6. Use of that decision.
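Here is a skeleton of that pipeline in Python; the function names, data and thresholds are all invented, purely to make the shape of the decision factory explicit:

```python
def gather():            # 1. data gathering, e.g. pull footfall readings
    return [{"footfall": 120, "hour": 9}, {"footfall": 480, "hour": 17}]

def normalise(records):  # 2. data normalisation into a common shape
    peak = max(r["footfall"] for r in records)
    return [{**r, "footfall": r["footfall"] / peak} for r in records]

def decide(record):      # 3. step-by-step processing to arrive at a decision
    return "extra staff" if record["footfall"] > 0.8 else "normal staffing"

def review(decisions):   # 4./5. compare the micro decisions, look for macro meaning
    return decisions.count("extra staff") / len(decisions)

records   = normalise(gather())
decisions = [decide(r) for r in records]          # 6. use of the decision
print(decisions, review(decisions))
```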

The hardest thing for A.I. is finding the right application, one that makes the effort worthwhile.  But when applications can be found that are a perfect fit for the technology, the benefit to productivity could be exponential.

A.I. Architecture

We’ve hinted over the last few paragraphs at the importance of architecture in A.I.  It’s worth exploring this in a little more detail by introducing the concept of agents.  Agents in their own right could just be the nodes within our environment, used for pure computational purposes; however, they could also be whole dataflow graphs in their own right, each making and learning from its own decisions.  These in turn could come together at a macro level to make even bigger decisions.  The efficiency of this approach allows us to develop intelligence at every level within our architecture.  I think this approach deserves merit, and it is something every software architect should start to think about with every component of their enterprise architecture.  Can we make each component smart, even though we may not have the ultimate application for A.I. at this point in time?  For example, take an average ERP platform: which bits could be enhanced to be smart?  Inventory could have some intelligence around stocking and buying patterns, an e-commerce website can have intelligence around demand, and finance systems could be intelligent around wider patterns of information.  External intelligence could be used to feed demand, understand sentiment, gauge the emotion of consumers, or spot a fashion emerging.  This approach assumes the role of agents that collectively work together to feed the machine.  The machine then has bigger and bigger options to suggest decisions or even generate ideas.  As we write lines of code we ask ourselves: am I being smart in this approach, is it clean, is it efficient?  We need to start asking: can I make this little bit smarter?  The key is that it’s all about the sum of all the parts.

 

One particular application we see for A.I. in the digital agency world is that of resource planning.  Resource planning of valuable people with valuable skills is an art, and to me presents the perfect challenge for A.I.  Project x needs people with skills y and z to start at time t, but people y and z are working on project w for the foreseeable future, while client f needs project x starting now and commercial team c needs project x to generate revenue by time t minus yesterday.  Everyone familiar with this problem?  Factor in wider environmental factors: skills y and z are in demand now, but skills m and j are growing rapidly, leaving y and z potentially redundant by time t + 30, whilst skills m and j are more expensive now by a factor of two.  This is exactly the challenge we are looking for.  Breaking down the steps of the decisions and normalising the intelligence required at a micro level can allow us to build agents of tensor flows that make decisions at each level based on the wider implications.  The problem is that, as humans, we tend to forget how we arrive at a decision; we use the language “it feels”.  “It feels” is, for me, wrong; logic is right.  We therefore need to break down each step in our A.I. architecture and derive micro intelligence that can be used by the sum total to arrive at a conclusion and present the rationale for the solution.  We also need to remember one fact: we need to learn.  We need to arrive at conclusions to our scenario planning, training the system that if we arrive at s then that is very bad indeed, while at least letting the machine know how it got to s and why it is bad.  Training, as with humans, isn’t instinct; it is learnt behaviour.  It doesn’t “feel like” for a reason.
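As a toy illustration of encoding that reasoning explicitly rather than relying on “it feels”, here is a sketch that scores a candidate assignment with weighted penalties; all the names, fields and weightings are invented for the example:

```python
# Score a candidate assignment of a person to a project by summing
# weighted penalties, so the rationale for the decision stays explicit.
def assignment_cost(person, project):
    cost = 0.0
    if person["available_week"] > project["needed_week"]:
        cost += 5.0 * (person["available_week"] - project["needed_week"])
    if project["skill"] not in person["skills"]:
        cost += 20.0                       # retraining or hiring penalty
    cost += person["day_rate"] / 100.0     # expensive skills cost more
    return cost

person_y  = {"skills": {"java"}, "available_week": 6, "day_rate": 450}
project_x = {"skill": "java", "needed_week": 2}
print(assignment_cost(person_y, project_x))   # 5*4 + 4.5 = 24.5
```

A learning layer could then adjust those weightings against the outcomes of past plans, which is exactly the train-and-score loop described earlier.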

 

So we are arriving at the end of our journey from zero to A.I.  We should now have the confidence that we have the technological capability to build intelligent machines, but appreciate that the problem for A.I. is the problem itself, i.e. the type of application we can apply A.I. to.  We should also appreciate that micro steps are the answer, and that applying intelligence in small steps will add up to a sum-of-all-parts approach to A.I.  I see the next few years as being about seeding A.I.  Organisations need to be aware that the technology is possible and start sowing some seeds, so that when the time comes we can reap the rewards of what will at some point be a rapidly growing industry.  The key, as always, is both imagination and innovation.

 

As part of this series of posts, I’ve prepared some presentations that distil this thinking further and allow time for each area of this document to be questioned. I’m hoping to present these in person shortly and also to video the outcome.  The fun part of this process is that we are never too far away from the basics of computing and electronics, and working through the journey from zero to A.I. reminds us how remarkable computing and electronics are.  To quote Steve Jobs, we are truly creating a “bicycle for the mind”.

 

Next up: the beginnings of quantum computing, how we can continue the evolution of the transistor and the quantum leap it can provide to our current technology capability.

 

 

 

Amazing stories: The Drum catches up with digital agency Amaze as part of 20/2000 visionaries series

Originally spun-out from a university research unit, creative and technical agency Amaze has been pushing the boundaries of digital communication since before the rise of the internet. In the latest of The Drum’s 20/2000 Visionaries series of features, where we mark the 25th anniversary of London digital agency Precedent by celebrating 20 top digital shops founded before 2000, The Drum finds out more about the pivotal moments in Amaze’s history. Read more….

We hate Enterprise too but…..

For 13 years I’ve been dealing with large platforms across multiple countries and territories.  At the start of each project we ask ourselves how we can do it differently, how we can be more lean, more agile, more flexible, more fluid.  I’m a coder at heart; I’ve recently fallen back in love with the open source world and love hacking away in whatever spare time I have, building apps in an agile, free-flowing manner.  It keeps me up to speed.  I can still code an application (not in the most elegant way, mind) with the best of the geeks in some hackathon in Soho or some more exotic climate if I’m lucky.  This is my first instinct: just get shit done.  Don’t faff over architecture or spending money.  I hate the layers of bureaucracy that develop in projects.  Why can’t we just sit down and code out functionality in a permanent beta rollout?  That’s surely the future? It’s surely something we all aim to achieve? So yes, I hate enterprise and everything that gets in the way and slows things down.

But, and there is a massive but, I have learnt over the years that the odd software project can come off the rails.  Why? Geeks maybe? Bad management maybe?  The reality lies in the complexity of solutions as they gather momentum, whether that’s momentum in terms of the reach of the application, such as a global platform, increased functionality, increased numbers of developers churning out code, or increased numbers of product stakeholders, or even the application itself becoming business critical with many moving parts. All of this needs to be tamed. The agile, constant-beta development approach is great in startup mode, but like a new puppy it eventually needs taming, otherwise developments have a tendency to become unplanned, resulting in unmanageable code and an unsustainable solution.  We need to organise the chaos into a controlled ecosystem, and as the complexity increases and the modules and lines of code grow, the management burden increases, which in turn makes the whole approach to getting releases out more rigid as we fight to tame the beast we have created.

Getting the architecture right at the beginning helps with this process, but as with everything, progress and ideas keep flowing, all of which challenge that architecture immediately.  We adapt to keep up, but this adapting changes the nature of our original intentions and starts to introduce the odd little bit of chaos into the solution.  This chaos is a good thing: it challenges us and helps progress our thinking.  However, it needs to be managed.

The real reason enterprise ends up coming into play is that software development, running into thousands of lines of code, is by its very nature complex.  But this is something of a myth: the code itself is actually quite simple.  The thousands of lines of code have all been written to keep up with our fast-paced market and our insatiable demands for innovation, ideas and pace.  It’s our marketplace that is creating the requirement for enterprise: the more ideas we have, the more structure we need in place to tame the beast we could end up creating.

And at the top of the human food chain, with ever-increasing ideas, are our digital marketeers, who always want us to stay ahead of the game.  These beasts could reinvent our efforts every week. The problem is that, without enterprise thinking, controls naturally have to fall into place to protect the investment we have made so far.  On smaller solutions that do not require such heavy lifting at the back end, we can simply throw it away and start again if it gets complicated.  This leads to consumable software solutions, which have their benefits and their place.  However, these are strategic decisions and must be made at the beginning.  Trying to turn a large-scale, business-critical solution carrying a strategic corporate investment into a consumable solution will lead to some very expensive software development cheques.

So what’s the answer?  I think the problem is not about whether enterprise is a bad or a good thing.  It’s about how we design the conveyor belt, how we load the hopper of our product roadmap and how we ensure there is a common understanding of the order of releases.  Sometimes this requires patience; sometimes things have to wait for those larger releases that will benefit someone else in your organisation but not necessarily the new, sexy, urgent idea we may be trying to push out. Remember, we are trying to bring some order to something that could quickly become so complex and out of control that the only thing we can do is throw it away and start again.  This is the real risk: every I.T. project is only a few scary ideas away from the scrap heap, and that costs a lot more money in the long run.

The challenge for us is how we maintain our competitive advantage. A solution to this is a well-thought-out roadmap: thinking ahead about the market, using real metrics about performance and your user base, and planning in functionality to keep them satisfied.  Enterprise is a game of chess; it’s hard to master and takes time.  Each move requires us to think well ahead and plan, even if that means planning in your agility. It’s not a short-term game and it requires a lot of patience.

Lists, Lists and Lists

OK, a quick blog post.  I have to say I’m a little frustrated at how good discipline is often thrown out of the window when it comes to running technology projects.  Maybe it’s because I work in the digital world, which often has its fair share of people with good ideas but without the ability to deliver those ideas or see the course through.  A little harsh, as most groups of people will have different levels of talent, and the best team needs a spread of those talents.  However, good, uncluttered, regimented discipline is required to close down a project.  Today we are blessed with more tools than ever to manage projects and project closure, Atlassian’s being the best of all for planning, rescheduling on the fly and, above all, keeping a clear view of what we still have to do to close something down.  Above all, there are some really simple rules that we should always follow:

  • Manage lists of tasks
  • Schedule those tasks
  • Keep scheduling those tasks
  • Make sure everything is ready and organised before that task is scheduled to start
  • Make sure people know what they are doing before they start
  • Track progress
  • Keep agile and replan to ensure we are efficiently getting around those tasks with the hours we have remaining
  • Aim to close don’t leave areas open
  • Keep information to the highest of quality. Remember shit in means shit out.
  • And above all, stay on top of it.  We are managers: you need to be on top of it all. Don’t step away; stay close to the data and stay close to the rhythm and flow of the project.

Atlassian’s GreenHopper planning boards, with good burndown charts, enable us to do this on a daily basis, but we must work with them and make them work for us.  This involves conducting your team, removing barriers and creating workarounds.  It’s all about timing and ensuring we can get from A to B in that window of time.

Anyway back to the lists.

More than half of companies providing e-commerce will look to re-platform in the next 24 months

2013 will see significant investment in e-commerce technology and platforms as demand for e-commerce increases and as companies innovate their multi-channel retail offerings. The real winners will be platform providers that provide true commerce suites.   The concept of a commerce suite has evolved out of a driving requirement to combine what used to be very different technology, retail and marketing angles.  Each of these angles meant that you would end up with a mash-up of technologies: content management for your front-end promotional and marketing platform, catalogue management for your product information, and then finally your e-commerce engine to take the order, take payment, check for fraud and pass off to the appropriate fulfilment partner, combined with all the subsequent technologies that need to be deployed for functionality such as analytics.  This mismatch of technologies has made it difficult for companies to evolve and innovate their e-commerce platforms to take into account different e-trading ecosystems, such as being able to work with partners like Amazon, Apple (apps), eBay and Google, or even different approaches to commerce as demonstrated in China. It has also prevented them from innovating for different devices and channels, and from providing a true omni-channel experience between the online and in-store worlds.  In addition, the world of fulfilment has been changing: no longer are companies able to rely on one fulfilment model.

 

Organisations are faced with multiple fulfilment partners and multiple types of fulfilment, from traditional ship-to-consumer models to ship-to-store and in-store pickup models.  Each requires different levels of complexity and even different integration requirements. Today, commerce suite platforms provide a one-stop platform that brings all of this together under one architecture and one technology. Commerce platforms have evolved to combine content management, product information management, order management, customer services, analytics, fulfilment management, promotions and campaign capability under one umbrella ecosystem, therefore allowing companies to be more agile and more cost-effective when operating their e-commerce infrastructure.  The evolution of these suites, of which we rate Hybris as the best, is causing companies to think about and take on the re-platforming of their current e-commerce estates.

 

Amaze are seeing a number of our customers and partners now undertaking these projects, however it requires a unique capability:

 

1) The right commerce suite platform, capable of providing one platform for all commerce requirements: marketing, content management, campaigns, brand, product, promotions, orders and finally fulfilment management.

 

2) The right commerce architecture that works with the suite, the fulfilment partners, your e-store design and your multi-channel retail strategy.

 

3) The right model that takes into account your consumers, your markets and regions.  But also takes into account economies of scale between your stores.

 

4) The right integration approach that is open and encompasses common standards.  Integration needs to be plug-and-play, allowing for different fulfilment models and partners.

5) The right retail control and optimisation approach that works with the site, the commerce suite and the data produced by your consumers to optimise trading, provide feedback and ultimately help provide the intelligence to improve sales.

Amaze have developed a unique approach of consortium building, technology capability and the strengths of a traditional agency to provide the necessary one-stop approach to re-platforming your e-commerce functions. This is traditionally the realm of systems integrators, but we have found that, as with the evolution of commerce suites, it requires a new type of business to deal with the challenges of today’s e-commerce requirements.

So moving into 2013 we are going to see a 12% increase in online retail growth in the US and Europe, but a far bigger percentage increase in e-commerce technology investment.  We will also see agencies really come to grips with the commerce world, starting to undertake the challenge of building significant commerce practices.

2013 Predictions

To download the podcast to go with this article click here.

The skies are getting greyer and the nights are drawing in; the year starts to draw to a close and it’s time again to stick my neck out and put some technology predictions together for 2013.

So I’m firing these out with no particular research other than gut feel and what I’m seeing in our sector.  Here goes…

Mobile will continue (again) to play a part in 2013

In the UK and much of the developed world, 4G will be upon us this year.  This means quicker speeds and more capability from our smartphones.  Mobile browsing will become even more of a requirement for most of the websites out there.  However, mobile apps need to be thought through before investing.  We are getting to the tipping point in the app stores where your app simply won’t get found.  Brochureware apps are no longer a novelty, so we really need to think about real application and utility. Saying all this, if you get the utility right or the entertainment factor right then consumers will make the download.  When setting out on your app journey in 2013, think about two things: how close are you to your customers, and would they value a utility or an entertaining app from you on their precious phone real estate? If you get this wrong then you are going to be throwing the investment away.

On the other hand, the mobile web and mobile web browsing will be major.  If your website is not mobile-ready for smartphones then you are going to be singled out and branded slow to adopt.  We know most smartphones can zoom-browse and therefore full-screen browsing works, however consumers expect a fat-finger touch experience as the first point of entry; they will switch to your classic site after that.

Also for 2013, here in London, the tube is starting to get wifi-ed up; again, increased connectivity such as wifi on planes, the underground and trains will create a pull for mobile users to consume content, and that content will often be video. From a developer’s perspective we need to keep embracing HTML5 and progressive enhancement.  The industry will need to keep pushing the boundaries.  2012 has seen the launch of more HTML5-compliant websites with richer assets, all of which are putting pressure on current website performance, however we need to persevere because bandwidth will keep up.  It won’t be quite like Moore’s law, but it will improve, so keep innovating.

Tablet Revolution will continue

This is such an obvious one but it will continue into 2013, and I will explain later how Windows 8 will help buoy it.  Apple launched the iPad mini yesterday, Amazon will follow with the Fire, and I’m left explaining to my colleagues why it’s all a good idea.  Surely two different sizes of tablet can’t be a benefit, how will they sell, and surely, in Apple’s case, they are cannibalising their other markets? The answer is handbags: all different sizes, but they are all needed and we (well, not me) will have more than one. I think Apple have got it right and I’m determined to have all three sizes.

“Connected Big Data”

I’m going to claim this term first, before the rest of the tech world gets it.  We have seen big data growing in significance; like cloud computing in 2011, it’s a buzzword that means something to tech consultancy firms, but what does it really mean to our customers?  My view is that we are really starting to see the emergence of connected big data, i.e. a lot of our customers are starting to join the dots between silos of data and systems with the single purpose of unifying around their customer or consumer at the web layer.  The silos of data may be getting joined up globally, or simply with the sole purpose of providing unified information.  We are seeing unification of product data, consumer data, CRM data, analytical data and general content, all of which requires connected specialists with the ability to consult across all levels of an organisation. We typically see four core data hubs occurring within most global companies:

1) Product data and enriched product data

Here we are seeing platforms like SAP or Oracle working in collaboration with enrichment tools such as digital asset management platforms and content management platforms.

2) Customer and CRM data

Here we are seeing a unified customer view, where we connect the customer with common data sources and join the dots between data silos, starting with a single identity for the customer, i.e. how do we create an identity passport and then map the user’s CRM footprint to that passport as they interact with your brand, whether via your website or in the social and mobile worlds.

3) Transaction Data and Analytics

We are seeing even more sophisticated data mining techniques with business intelligence technology fed back to the web layer to optimise user experience and customer engagement.

4) Centralised and localised content

Traditionally the home of the enterprise content management platforms.  However we are seeing architecture and approaches challenging the dominance of these platforms.

So connected big data, compared with just big data, sees the joining of the dots at a global level between systems holding big data silos, such as SAP, to surface that data to the web layer and allow web layer users to contribute to and participate in that data, as opposed to just surfacing it for analytical purposes.

Social

Will the growth in social continue?

Yes, most definitely, but are we seeing increased demand to play in this area?  I believe we are seeing a plateau in innovation, which is starting to slow social network innovation down, meaning there is less opportunity for growth.  I think it will still be a major part of any digital agency’s portfolio, but a level of maturity is beginning to emerge.  Saying this, I think we are ahead of the curve and there is still significant motion in this sector, with lots of organisations now getting it and starting to put money into social for customer engagement, marketing and application.  So it will still be a big part of 2013, and certainly a time for agencies to capitalise on the hype.  However, innovation is required to lead the pack.  Facebook will continue to grow, but more slowly.  Verticalised social networks like Pinterest will also see growth, and common sharing standards and approaches will start to emerge, such as sign-on protocols like Facebook Connect.

The biggest advance in innovation will be context-sensitive social.  It will be used to drive likes, connect friends and customise information.  Organisations that develop good delivery platforms for this, from both a technology and a campaign perspective, will be able to take most advantage of this market as it starts to mature. Finally, watch out for revenue-driven innovation from both Twitter and Facebook; they have to do something in 2013.

Will Windows 8 make an impact?

Is Microsoft really in decline? Is Apple’s position now dominant for the next 10 years? As we eagerly await the launch of Windows 8, are we really expecting a fundamental relaunch of good old Microsoft?  Well, Microsoft are expecting to spend big to push this one out.  I think it will start to make an impact on businesses with I.T. departments still hanging onto a level of control and still trying to hold off the day when they can no longer resist their staff enjoying a dose of Apple. However, it certainly won’t be a revolution as we saw in ’95. We will see interest regenerated by some new, sexy devices, even some nicely designed tablet hybrids that will catch on.  But don’t expect too much.

What it will do is start to standardise the tablet and touch.  I know Apple, Samsung and practically every other vendor support gesture and touch capability and have been doing so for some time, but Microsoft coming online means we will start to see the standard fully adopted. Navigation of apps and web browsing will need to ensure touch is at the core of their design from now on. This will mean innovation in design, HTML5 and our favourite, JavaScript.

Open Source at the Server

Open source is continuing to grow and it’s now proving itself with organisations and in big projects.  One winner in this is Drupal, which is rapidly becoming enterprise-capable.  This will continue, and Drupal could start to rival some big content management vendors. Watch this one closely for developments. We certainly are.

Content Management Platforms

This leads me on to platforms, and CMS platforms in particular.  Our favourite has been Tridion for many years, and we feel it will take a lot for Tridion to be knocked from its well-earned position as enterprise content management leader. However, the ones to watch and not ignore in 2013 are:

  • Sitecore
  • Drupal
  • Adobe CQ
  • EpiServer

Amaze are developing strategies and approaches to ensure our customers can pick wisely.

E-Commerce Platforms

E-commerce continues to grow and grow.  Choice of platform remains critical, and it’s becoming an increasing requirement as businesses start to look at second-generation commerce capability.  Obviously your size will influence your choice of platform, but if you are in the enterprise bracket, i.e. you are operating a global platform or have substantial business being run through e-commerce, then we would still recommend Hybris as number one.  Why? Because it is more complete than any of the other big vendors, i.e. you get more to start with; for example a product information platform, e-commerce accelerators, customer support, content management and mobile.  This leads to less complex software integration programmes and less risk.  It also means you can get to market quicker. We feel it’s still number one and one to embrace in 2013.

Content Delivery Networks

Content delivery is a big theme for infrastructure going into 2013.  There are still a lot of companies that have not even looked into this, let alone deployed a CDN.  With content increasing and globalisation becoming an ever bigger factor, content delivery networks are becoming a necessity.  There are only a few key players that do this well:

  • Akamai
  • MaxCDN
  • Edgecast
  • Amazon
  • Rackspace

So what happened to…?

Finally, I just wanted to wrap up by revisiting last year’s buzzwords and trends that I have not mentioned here.

HTML5 – adoption continues.  It is still not a ratified standard, but the industry is ploughing ahead. It must continue to be embraced.

Cloud – interest and adoption are growing more slowly than the hype, however it’s here to stay and will continue to be a big factor in any big infrastructure project.  Microsoft Azure has been slower to succeed whilst Amazon is trailblazing.  What we are seeing is growth in vendors starting to provide on-demand services alongside their traditional software licence models.  This is good news for the industry.

Making our developments more efficient

Podcast available for download with this article click here.

We’ve been developing software and web solutions since ’96. Our development processes have changed throughout the last decade, however our fundamental development approach has always been waterfall, and the tools and techniques we’ve been using have been driven by traditional software and project management methodologies.

In the last five years Agile has been a big catalyst for change in our industry. It promised speed, agility and flexibility, and above all it may hold the golden ticket to increased productivity. However, the struggle remains that Agile only works efficiently if your customer trusts you implicitly and has bought into the concept of time and materials. This is always a challenge, as project budgets remain finite, and it will continue to be a leap of faith for any customer buying a web solution or software. But we persist: we love Agile and we believe it can give us the necessary productivity and efficiency gains. We just need to wrap it within a traditional fixed-price process.

So this is what we are doing

All our development still remains waterfall, i.e. you have a start, you have fixed phases and then you have a delivery. Equally, you have capped and managed budgets and delivery dates. Our customers know what they are getting and when they are getting it. However, when we dig deeper into the phases, we use a mixture of true Agile for development and a collaboration-driven approach to get the project into development. I’ll start with the latter to explain how we begin the journey:

Starting the project is key: starting any project well sets the foundations for its success. There are a number of things that we must get right here…

1) Project control and governance

The plan and budget are the fundamental controls in any project. The hardest thing to do is set out a framework plan for the project before you have begun your requirements capture or discovery, yet this is what we do first. As soon as we begin discussions with our customer we use our project planning process to itemise the solution. What we mean by this is that we use the plan to capture all the discussions we need to have about how the solution evolves: we capture what needs to be discussed, what needs to be designed and who needs to participate. We then set targets for when these tasks need to be completed, not how long they will take. We set a target based on experience and then we go for it. This forms the basis of our design phase, made up of technical design (TD) activities and creative/UX (UX) design activities. We then document and run workshops and design sessions to hit the targets we set. The plan goes on to form the governance of the project, ensuring we don’t lose anything and that we have the appropriate targets and budget controls. As we continue our design and specification process we are able to evolve our build estimates, team size and run order. Governance goes on to cover more aspects of the project, which I will discuss later, but starting it right is our biggest governance step.

2) What are we delivering, what is our specification?

As I discussed above, we use the plan to itemise the project. It is worth explaining what we actually mean by itemising a project. Itemisation means capturing all the discussions about parts of the system, design decisions, and the questions and answers that need to be dealt with to start to specify the solution. By itemising the project we are starting to create a framework for our requirements to come together. Each discussion can be grouped into functional and non-functional requirements under our TD or UX items. We then start to structure our specification, and this is where things have changed more recently. Our specification is a collaborative knowledge base that our partners, customers and suppliers can all jointly work on to design and specify the itemised elements of the solution. The knowledge base we use for all our projects is Atlassian’s Confluence; it gives us the necessary collaboration capability to build requirements, document them and store them in one place. At the end of the phase we still sign off before we begin the build, but the result is a collaborative knowledge base that supports the project and gives the customer and the team a clear view of what we are building.
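As a rough illustration of what an itemised specification can look like once it leaves the plan and lands in the knowledge base, the sketch below models TD and UX items with their functional and non-functional requirements grouped underneath them. The field names and the sample item are illustrative assumptions, not a description of our actual Confluence templates.

```typescript
// Illustrative sketch of an itemised specification: each item is either a
// technical design (TD) or creative/UX (UX) discussion, with its requirements
// grouped as functional or non-functional. Names and content are hypothetical.
type ItemKind = 'TD' | 'UX';
type RequirementKind = 'functional' | 'non-functional';

interface Requirement {
  kind: RequirementKind;
  description: string;
  signedOff: boolean;
}

interface SpecificationItem {
  id: string;             // e.g. "TD-07"
  kind: ItemKind;
  title: string;
  participants: string[]; // who needs to be in the workshop
  targetDate: string;     // the target we set, not an estimate of effort
  requirements: Requirement[];
}

// A single hypothetical item as it might appear in the knowledge base.
const checkoutItem: SpecificationItem = {
  id: 'TD-07',
  kind: 'TD',
  title: 'Checkout payment integration',
  participants: ['solution architect', 'customer product owner'],
  targetDate: '2013-03-15',
  requirements: [
    { kind: 'functional', description: 'Support card and PayPal payments', signedOff: false },
    { kind: 'non-functional', description: 'Checkout responds within two seconds', signedOff: false },
  ],
};
```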

Next up is the build

This is where we embrace Agile. We know what we are building and we know how long we have, so we are able to define how many sprint windows we have. The first thing to set is your sprint window length. We recommend two-week sprint windows: this gives ample time for the dev team to deliver something, but not so much time that we can’t monitor progress against objectives.
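Because the build window is fixed up front, working out how many sprints we have is simple arithmetic; the figures below are purely illustrative.

```typescript
// Illustrative arithmetic: a fixed build window divided into two-week sprints.
const buildWeeks = 12;        // hypothetical build phase length agreed in the plan
const sprintLengthWeeks = 2;  // our recommended sprint window

const sprintCount = Math.floor(buildWeeks / sprintLengthWeeks);
console.log(`A ${buildWeeks}-week build gives us ${sprintCount} sprint cycles.`); // 6 cycles
```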

Once the sprint cycle length is defined we can then start our planning. To do this we revert to our specification (the knowledge base) and look to break the user stories out. The team then plans around the user stories. This is a team approach using the Agile methodology, but essentially all the team are doing is loading a hopper of work for each sprint cycle to then execute, based on skill, performance and ability. Again we turn to Atlassian’s GreenHopper product to plan the sprints, load the hoppers and monitor performance against those hoppers. GreenHopper gives us the necessary repository of historical performance, burn-down tools and planning boards to do this collectively and efficiently within the team. Here the Kanban approach provides the necessary collective organisation of the team and their status reporting against tasks.
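GreenHopper draws the burn-down for us, but the underlying calculation is straightforward: compare the story points remaining each day against an ideal straight line from the committed total down to zero. The sketch below is a simplified stand-alone version with made-up figures, not GreenHopper's own implementation.

```typescript
// Simplified burn-down calculation: remaining story points per day versus the
// ideal line from the committed total down to zero. All figures are hypothetical.
const committedPoints = 40;                                    // points loaded into the sprint hopper
const remainingByDay = [40, 38, 33, 30, 24, 20, 15, 11, 6, 2]; // points left at each stand-up

remainingByDay.forEach((remaining, day) => {
  const ideal = committedPoints - (committedPoints / (remainingByDay.length - 1)) * day;
  const status = remaining <= ideal ? 'on track' : 'behind';
  console.log(`Day ${day + 1}: ${remaining} points remaining (ideal ${ideal.toFixed(1)}) - ${status}`);
});
```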

So during the sprint cycles we can back off the plan a little and just report progress against each sprint. We don’t need too much detail in the project plan: just key milestones, the sprint cycles and performance against those sprint cycles; the software does the rest.

During the project we use continuous integration on our builds, so we manage the build of the software and source control. We are also starting to look at automated functional testing; here we are looking at platforms such as TeamCity and Selenium. We need to do more work to standardise our approach across all our technologies, however automation is key to improving quality and keeping the code base together.
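To give a flavour of the kind of automated functional test we have in mind, the sketch below uses the selenium-webdriver package to check that a homepage loads and that its main navigation renders. The URL and CSS selector are placeholder assumptions rather than a real project's.

```typescript
// Minimal automated functional test sketch using selenium-webdriver.
// The URL and CSS selector are placeholder assumptions for illustration.
import { Builder, By, until } from 'selenium-webdriver';

async function smokeTestHomepage(): Promise<void> {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://www.example.com');

    // Wait for a page title to appear, then check the main navigation renders.
    await driver.wait(until.titleMatches(/./), 10000);
    const navLinks = await driver.findElements(By.css('nav a'));

    if (navLinks.length === 0) {
      throw new Error('Homepage loaded but no navigation links were found');
    }
    console.log(`Homepage smoke test passed with ${navLinks.length} nav links.`);
  } finally {
    await driver.quit();
  }
}

smokeTestHomepage().catch((err) => {
  console.error('Functional test failed:', err);
  process.exit(1);
});
```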

Another area we are addressing is the concept of review boards for peer reviewing of checked-in code on projects. This sounds bureaucratic, however it is essential both to sanity-check code and to provide knowledge sharing across the whole solution. We haven’t quite got a full solution rolled out, but it’s something we are definitely looking to apply to our next round of projects.

Testing

Often overlooked and left to the last minute to plan for. Amaze’s approach is to wrap testing into all phases of the project. The test plan starts at the specification phase of the project. We focus on acceptance criteria and unit testing, with the final wave of alpha and beta testing being performed by independent teams either within Amaze or externally via our partners. Again we turn to Atlassian and the Jira platform to manage issues as they are raised and to ensure the correct fix, deploy and re-test cycle is followed. It is also key that all our projects are fully and independently tested for security, cross-platform/device compatibility, performance and load, and any compliance requirements such as PCI.
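To show how an acceptance criterion can translate directly into a unit test, the sketch below checks a hypothetical basket delivery rule using Node's built-in assert module; the function and the criterion are invented purely for illustration.

```typescript
// Illustrative unit test: a hypothetical acceptance criterion ("orders of £50
// or more qualify for free delivery") expressed as a small, runnable check.
import { strict as assert } from 'assert';

function deliveryCharge(basketTotal: number): number {
  return basketTotal >= 50 ? 0 : 4.95; // hypothetical business rule
}

// Acceptance criterion: a £50 basket ships free, a £49.99 basket does not.
assert.equal(deliveryCharge(50), 0);
assert.equal(deliveryCharge(49.99), 4.95);
console.log('Delivery charge acceptance criterion passed.');
```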

Project Governance

Finally we turn back to overall project governance and programme management. We mentioned it at the beginning of this article, however it is the most critical function of any project. Failure to build the right set of principles into the project governance leads to projects losing control and often overrunning. So these are our principles:

1) The team owns the solution and the project, not the project manager. The project manager must ensure joint, joined-up ownership is maintained throughout the project.

2) Start each week as if it’s the last. Keep the pace up on the project and ensure we are delivering every step of the way. There’s no time for the last-chance saloon on any project; this is not university.

3) Budgets and hours burned are key. Make sure we are not wasting time and that we look at productivity throughout. This includes quick decision-making, collective ownership and delegation. Don’t be afraid to off-load heads from your project if they are slowing down decision-making or complicating things. Smaller teams can deliver much quicker.

4) The plan must be target-driven, not reflective. We need to drive ownership and weekly deliverables using the plan.

5) Use the tools to report status, removing the need for reports and spreadsheets that are often not read.

6) Ensure smooth and open communication across the whole team and with your customer. There should never be surprises.

7) Deal with problems; never put your head in the sand. If something is difficult it is most likely a problem that is not going to go away; the quicker it’s dealt with, the smaller the impact.

8) See the big picture. It’s not just next week’s onsite discovery you need to worry about; it’s the end game, the live project. Don’t lose sight of what we are here to do.

So to conclude

Agile is not everything and will not fix the majority of projects out there, particularly the fixed-price ones. It’s not a magic wand; however, it has brought about some good tools and collaborative approaches that encourage shared ownership of the project deliverables. Each project should be looked at from an ownership perspective: we should look at production values and how we streamline delivery, using efficient approaches both to share information and to manage how we assign tasks to the development team. The teams themselves need to be exposed to this thinking and able to contribute to the direction of the project. Above all we need to stay results-driven, very efficient, and share ownership of the end game.

Cloud Computing

Cloud computing has certainly gained momentum over the last 12 months. It has no doubt struck a chord with cash-strapped businesses. But our view is that cloud computing sits too low down the software stack and is predominantly concerned with virtualising platforms. As more and more businesses compete in this space we see the value of cloud computing moving up the stack and unleashing its service-orientated flexibility on the domain of traditional software-as-a-service vendors. Confusing? They both operate in the cloud, but for pure software-as-a-service vendors to add even more benefit to businesses they need to reach into organisations. Clouds will grow tentacles into businesses as the membrane between traditional I.T. systems and the cloud gets thinner. What does this mean in real terms?

Data will become the platform, but it will extend with its applications into the cloud. As with the web, the data will reside in the cloud, along with an app-store approach to enterprise applications. Internal private clouds and infrastructure will become agents of the cloud. The cloud concept will move up the value chain, to the point where cost savings are only half the reason why people make the move.

The connected visitor

We are working on some interesting connections between user visits to a website and the development of that user’s digital profile. What I mean is that a visitor to your website generally registers as nothing more than a hit or a visit. But most users are now starting to connect with your website via some form of authentication or single sign-on through their social network accounts. This enables us to build intelligence on that user so we can instantly respond to their likes and needs.

Behind the scenes we can use intelligence to join the dots around that user and build a digital footprint that enables us to blend content and navigation around them. We can also use this intelligence to understand the user (taking into account their consent) and segment them according to their likes and dislikes.
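As a rough sketch of how that joined-up intelligence might be represented, the snippet below models a visitor profile built from a social sign-on plus on-site behaviour and derives a simple segment from it. The fields, segment names and thresholds are illustrative assumptions only, and consent is checked before anything is used.

```typescript
// Illustrative sketch: a visitor profile assembled from social sign-on and
// on-site behaviour, with a simple segmentation rule. All fields, segment
// names and thresholds are hypothetical.
interface VisitorProfile {
  hasConsented: boolean;      // nothing is used without the visitor's consent
  socialInterests: string[];  // likes pulled from the connected social account
  viewedCategories: string[]; // categories browsed on the site itself
  visitCount: number;
}

function segmentVisitor(profile: VisitorProfile): string {
  if (!profile.hasConsented) {
    return 'anonymous';       // fall back to the plain hit/visit view
  }
  const interests = new Set([...profile.socialInterests, ...profile.viewedCategories]);
  if (profile.visitCount > 10 && interests.has('menswear')) {
    return 'returning-menswear-shopper';
  }
  return interests.size > 0 ? 'known-browser' : 'new-connected-visitor';
}

// Templates can then blend content and navigation around the returned segment.
const segment = segmentVisitor({
  hasConsented: true,
  socialInterests: ['menswear', 'running'],
  viewedCategories: ['menswear'],
  visitCount: 12,
});
console.log(segment); // "returning-menswear-shopper"
```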

The next generation website will use a combination of big data and connected intelligence to provide a more tailored browsing experience with content that is both more useful and intuitive.

This type of technology pays dividends when built into e-commerce solutions, both in the templates that display product information and in the dashboard technology that enables e-commerce managers to keep track of the performance of their retail operations. It allows them to have a digital nerve centre monitoring all aspects of the commerce site.

The technology to do this exists in the identity layer of the site. We are seeing the development of SaaS-based identity management providers that can start to provide connected sign-on, intelligence and reporting for your digital estate. We will report more on this in the next few weeks.