Unlocking the true value of IoT

My latest whitepaper, ‘Unlocking the true value of the Internet of Things (IoT)’ is now available to download. This best practice guide identifies the key opportunities and challenges presented for businesses by the rise of IoT.

The white paper examines how, if approached correctly, IoT can offer a chance to reinvent business processes and transform market offerings. However, recent Forrester research shows that only 34% of organisations agree that IoT enables new types of business models, with many focussing instead on how it can drive operational efficiency.

A combination of agile digital disruptors engaging customers in new ways, and empowered consumers demanding a change to market fundamentals, means that organisations are at a clear junction. They can either embrace the opportunities IoT offers, or risk failure.

The paper outlines the key considerations for IoT success, which are summarised as follows:

Take ownership

IoT implementation will bring up new challenges, not least in connecting business systems across business boundaries. This needs strong companywide collaboration, championed by leadership. No one person or department can plan and implement an IoT strategy alone – nor should they, as for IoT to live up to its potential, organisations need to join the dots between existing silos of information.

Whoever ‘owns’ IoT within an organisation also needs to help drive a cultural shift to encourage departments – from marketing and ecommerce to IT and HR – to innovate and work collaboratively towards realising a holistic customer view.

Innovate, innovate, innovate

The average lifespan of a company is now just 15 years according to the S&P index. The main reason for this? Failure to embrace innovation. Kodak is a classic example here. Although it is the company responsible for inventing the digital camera, ironically its failure to embrace it led to the company’s downfall.

When it comes to IoT, the majority of organisations are not yet looking beyond getting products to turn on and off remotely, partly because this is the most obvious application and partly because larger, well-established organisations need to prove the business case first and so start slowly.

However, agile start-ups can and do move very quickly to explore how products can work within a much larger connected ecosystem, and consumer demand for this is increasing. It is hugely important for organisations to look further ahead and carve out time to be innovative: to really think through exactly what they want to achieve with IoT, where they want to go with it, and how they're going to get there.

Embrace changes in product and service design

Part of the reason that taking time to innovate is so important is that the advent of IoT is having an enormous impact on both product and service design – and it is these changes that will drive the commercial success of IoT. As the connected devices that make up IoT become more prevalent, the lines between software and hardware will continue to blur. In effect, products could improve over time rather than degrade.

This turns current models on their head, with products becoming less about the physical product itself and more a vehicle for delivering services. Everything will potentially be a service, and therefore offer new revenue streams. The challenge will be to produce products that can adapt and learn from the user to continuously offer relevant and meaningful services.

Intelligent use of data to gain a single customer view

As business models shift to subscription based approaches, accessing and exploiting customer data will be absolutely key. To encourage customers to sign up to and continue using subscription based services, organisations will need to know them extremely well, in order to repeatedly anticipate their needs and maximise their experience. For this, organisations will need to understand how their customers consume their services, by tracking their behaviours across channels over time.

Customers already increasingly expect a personalised, seamless experience across all digital touch points and the increase of connected devices on IoT will only compound this expectation. Fortunately, IoT will also provide a goldmine of customer data – for those organisations that can successfully collect and analyse it at least.

Build on your current investment

The technology to enable connected devices and IoT for organisations is, generally, all in place. The majority of organisations have a digital ‘hub’ encompassing mobile, commerce, CMS, web, analytics, bespoke apps and so on. This is good news for many organisations, as it means they do not necessarily need to re-invent their digital estate. Indeed, an organisation’s CMS is often the only single source of truth amongst all digital assets and content within a business.

It is a vital component within a digital ecosystem that can handle scale and load and is the only platform that is capable of managing a business’ brand experience. In addition, it’s agile and is already gathering intelligence – why on earth would you start again? Instead, build on this investment.

It’s all about the cloud

The cloud is absolutely key in terms of enabling IoT, as it can facilitate not only the integration of the aforementioned technologies, but also the exchange of data that is so central to IoT. It is not only a very efficient hosting environment, it is also possibly the only affordable way to realise IoT.

When building on their current assets, organisations need to ensure they invest in cloud-based infrastructure in order to reap the rewards IoT can enable. Think of the cloud as the brains and control system for the new IoT ecosystem – without it, there is no glue to bind all the devices and platforms together.

Matt Clarke, Chief Technology Officer at Amaze, comments: “IoT is set to be more disruptive and far reaching than many realise. If approached correctly, it can transform entire business models and lead to unprecedented competitive advantage. It can revolutionise customer experience, streamline operations, completely alter product and service design and deliver new business models and revenue streams.

“All the building blocks are in place for organisations to capitalise on IoT – technology is not the issue; strategy and culture are. That is why it is vital that organisations claim ownership of the IoT agenda and drive the change needed to join the dots between their digital functions, not just look at IoT as a way to streamline processes or cost savings.

“Put bluntly, those that do not, will fail to reap the rewards offered by IoT. Businesses are at a clear junction – they can either make changes to improve their market share going forward, or stick to existing models and risk failure.

“However, organisations do need to appreciate that success in IoT will take time, and that they may not see rewards straight away. That doesn’t mean that they won’t eventually. The Internet of Things is a brave new world for all involved, but there is no doubt its age is dawning. IoT offers opportunities that haven’t even been imagined yet, but to be able to deliver on its promise, businesses need to be brave and take that step out into the unknown.”


Zero to A.I.

Artificial Intelligence has long been the preserve of science fiction. However, towards the end of last year I was asked to give my usual trends lecture on the upcoming 12 months. It struck me during this preparation that, for the first time, I could see the possible emergence of A.I. capability in the very near future. This is not because of recent advances with personal digital assistants such as Apple’s Siri, or the fact that Facebook has started using innovative algorithms to extract learning from its social network community, or even the number of acquisitions we are seeing amongst Silicon Valley’s elite. To understand why I have arrived at such a conclusion we need to step back and answer a few fundamentals. Moreover, we need to examine the evolution of computing and how it can be leveraged to bring intelligence to its current, mere data-processing role.

 

Let’s begin this journey.

 

216 years ago Michael Faraday started to understand the properties of the electron and in particular the effects of electromagnetism. In 1897 Sir Joseph John Thomson first proved that the electron is a particle: probably the defining moment that enabled the birth of modern electronics and computing. Without the properties of the electron, and how electromagnetism is conveyed by these fundamentals of quantum mechanics, we could not even begin to develop computers able to manipulate electrons in such a way as to run a stored programme. Of course Charles Babbage proved our ability to compute mechanically through his invention of the Difference Engine in 1822 (and the stored-programme concept with his later Analytical Engine), but without J.J. Thomson’s discovery Babbage would have been left fashioning new gears for his machine at a faster rate than the Saturn V burns rocket fuel if it were to stand any chance of computing the simplest of spreadsheet formulas. Clearly the electron was the only way to go. It was then over to the likes of Alan Turing, John Bardeen, Walter Brattain and William Shockley, to name but a few, to continue the journey of electron and electromagnetic manipulation that evolved into today’s computer.

 

This allowed us to get this far, which I will pick up later in this article. However, if we are to even begin to understand how we could turn today’s modern computer into anything near intelligent, we must first answer a few basic questions and ponder a few basic thoughts.

So first and foremost how do we define intelligence?

There may, indeed, be many ways to answer this simple question, including one’s capacity for logic, abstract thought, understanding, self-awareness, communication, learning, emotional knowledge, memory, planning, creativity and problem solving. Taken together this seems like an impossible list for today’s computing to grapple with. But focus on the key words: logic, communication, memory, planning and problem solving are all great traits of today’s modern computer, so we may have something to work with after all. Another view is the ability to perceive information and retain it as knowledge to be applied towards adaptive behaviour within an environment: in essence, the ability for the machine to learn from its environment, thereby gaining knowledge and perception.

Let’s pause on this for a second. If we can perceive information, learn from an environment, extract knowledge, solve problems, communicate and process logic, then there are untold applications for today’s digital world. Consider big data alone, and what such capability would mean when processing the large volumes of visits and transactions on the average blue-chip company’s digital estate. This is big. This is very big for our world.

 

But how do computers compete with humans?

Take our brains, which are twice as large as those of our nearest intelligent competitor, the bottlenose dolphin, and three times as large as a chimpanzee’s. The human brain is arguably the most complex single object in the known universe: 85 billion neurones plus roughly as many cells of other types. Each neurone is an electrically excitable cell with between 10,000 and 100,000 connections, forming networks that train the flow of electrical signals to encode our memory and intelligence. In essence, the training and building of these networks forms our software.

 

8 500 000 000 000 000 neurone connections

vs

400 000 000 000 stars in our galaxy

Pretty complex to say the least.

 

You could argue: Nature 1 vs Electronics 0.

 

However, consider today’s most powerful microprocessors; on average we are looking at

100 000 000 000 transistors (switches)

vs

8 500 000 000 000 000 human neurone connections.

 

But obviously a switch cannot compete with a neurone and its connections, each of which is comparable to a software programme in its own right. However, consider the timescales:

Nature ~4 billion years

vs

Electronics 216 years

You could argue it’s more than achievable and is just a case of when rather than if.

After all each modern computer is just a bunch of circuits operating at high speed.

Back to Basics

Let’s see the fundamentals of how these circuits give us our computing capability, and how we can roll this forward to create intelligence.


Figure 1.0 – basic electronic circuit.

Everyone surely remembers the elementary science experiment of building a basic circuit with a battery, light and switch. Using this concept I want to illustrate how we build up today’s computer, and how this logic develops into the equivalent of those neurones and connections outlined in our battle with nature.


Figure 1.1 – How the circuit can be configured into a simple logic gate for an OR

Developing the circuit by organising switches and wiring enables us to build basic logic gates. Logic gates form the basic building blocks of our computer programmes and computer memory. For those of you not familiar with electronics or computer science, it’s worth spending a few minutes tracing what the electricity is doing here and how the configuration of the circuit enables some rudimentary logic.


Figure 1.2 – How the circuit can be configured into a simple logic gate for an AND

 


Figure 1.3 – How the circuit can be configured to make a NOT logic gate (note: I’ve moved from standard circuit notation to logic-gate symbols to make the diagrams easier to read; the wires quickly get messy)

 

Whilst there are many types of gate, these three basic ones (OR, AND and NOT) give us the capability to create computer memory that can handle 1 bit of data: we can store a value of 0 or 1 in our circuit for as long as power is applied. If you have time it’s worth working through the logic; it does work. The circuit below will hold a value of 0 or 1 indefinitely as long as it receives power.


Figure 1.4 – 1 bit memory circuit configured from NOT AND OR gates
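To make the gate logic concrete, here is a minimal Python sketch of the three gates and a 1-bit memory cell. The names and the latch construction are my own illustration; the latch shown is a gated D-type, one common way to realise the hold-a-bit behaviour of Figure 1.4.

```python
# The three basic gates, modelled as functions on 0/1 values.
def OR(a, b):
    return 1 if a or b else 0

def AND(a, b):
    return 1 if a and b else 0

def NOT(a):
    return 0 if a else 1

# A 1-bit memory cell built from these gates: while 'set_' is 1 the cell
# stores 'data'; while 'set_' is 0 it holds its previous value.
class OneBitMemory:
    def __init__(self):
        self.q = 0  # the stored bit

    def tick(self, data, set_):
        # q = (data AND set) OR (q AND NOT set)
        self.q = OR(AND(data, set_), AND(self.q, NOT(set_)))
        return self.q

cell = OneBitMemory()
cell.tick(1, 1)               # store a 1
assert cell.tick(0, 0) == 1   # set line low: the 1 is held
assert cell.tick(0, 1) == 0   # set line high: a 0 is stored
```

Tracing the `tick` expression by hand is a good way to convince yourself that the circuit in Figure 1.4 really does remember.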

Once we can handle 1 bit of memory we can scale the electronics up from 1 bit to 32 or 64 bits, and then scale out further, collecting bits into bytes and adding many bytes of memory together. Roughly speaking, a byte can hold one character through ASCII representation; ASCII is a basic binary encoding for characters. The ASCII code for the letter ‘a’ in binary is 01100001. This takes 8 bits, equivalent to eight of the 1-bit memory circuits outlined in Figure 1.4.
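The byte-per-character claim is easy to check in Python:

```python
# The letter 'a' is ASCII code point 97, i.e. 01100001 in binary:
# one byte (8 bits) of storage.
assert ord('a') == 0b01100001
assert format(ord('a'), '08b') == '01100001'
```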

So we can now store this information. However, imagine how many circuits would need to be wired to implement even one byte of memory; you can appreciate the wires and switches that would be needed to make this work. Hence the need to scale the circuit down to a miniature format. The first stop on this journey is the transistor, which for me is the most remarkable achievement of the modern electronics age. The manipulation of quantum electrodynamics (QED) enables us to miniaturise this simple switch through the use of semiconductor material, and this was then further miniaturised into microprocessors.


Figure 1.5 – The transistor and the beginning of miniaturisation


Figure 1.6 – Microprocessor magnified showing numerous transistors

But how do we build our computer? To start, we need to focus back on our memory. The thing about memory is that it allows us to remember: it can remember a value, an instruction, or the address of a memory location. Each bit, i.e. 0 or 1, can be wired back into our computer so that each value can be fetched and transferred through the central processing unit for processing.


Figure 1.7 – shows our memory broken down into bits (the 0s and 1s), which in turn are grouped into bytes, e.g. 01100001. The memory consists of 256 bytes in this instance, with each memory address location shown in grey and its associated value shown in white. Each bit is physically wired into the CPU via one cable; this is called the bus. There are two types of bus, the data bus and the address bus. We also have a control bus, a simple 2-bit ‘set and enable’ bus (set stores a value; enable retrieves one). The way the memory works is to set a value at a particular address location. For example, to store the value 00100001 at address 01100001 (the top grey location), we apply current to the address wires in the configuration 01100001 and to the data wires in the configuration 00100001; when the control bus’s set line is raised to 1, the value 00100001 is latched into the circuit wiring for that particular byte of memory at that particular address location.

In short we have a mechanism to set and enable data values linked to an address in memory.  Each memory circuit is made up of logic gates outlined in 1.4. We can then fetch and set data in those logic gates.  This means we can both store information for calculation and store information for instructions.  Two of the most important components of a computer programme.
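The set/enable mechanism can be sketched as a toy Python class. `ToyRAM` and its method names are my own illustration of the behaviour described above, not a real hardware interface:

```python
# A toy model of the 256-byte memory in Figure 1.7: an 8-bit address bus
# selects a cell, 'set' latches the data-bus value, 'enable' reads it back.
class ToyRAM:
    def __init__(self, size=256):
        self.cells = [0] * size  # one byte per address location

    def set(self, address, value):
        # 'set' control line: latch the data-bus value at this address
        self.cells[address] = value & 0xFF

    def enable(self, address):
        # 'enable' control line: drive the stored value onto the data bus
        return self.cells[address]

ram = ToyRAM()
ram.set(0b01100001, 0b00100001)   # store 00100001 at address 01100001
assert ram.enable(0b01100001) == 0b00100001
```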

We can now deal with fetching data and storing data in main memory.  Now we need to lift the lid on the CPU to see what’s going on underneath.


Figure 1.8 outlines my basic CPU design. You can see from this diagram the wiring of the address, data and control buses, which link out to the right of the diagram back to our main memory outlined in Figure 1.7. Inside the CPU you can see several components;-

  1. CPU Clock
  2. Control Unit
  3. Arithmetic Logic Unit (ALU)
  4. A bunch of registers

Arguably the most important component of this architecture is the CPU clock; this is what defines a processor’s speed, for example 3 GHz (3 billion cycles per second). Like the memory circuit, the clock is made up of logic gates, configured to send a pulse of activity around all the circuits. When the clock says go, the circuits switch and move on to process the next instruction. Ultimately all the CPU needs to do is fetch and execute an instruction; not coincidentally, this is called the fetch-execute cycle. Figure 1.9 outlines how memory is just made up of instructions, each of which belongs to our programming language. If we can process through this list, we can process a programme.

 


Figure 1.9

The next component of our CPU is the control unit, again a bunch of logic gates. The control unit works with the clock to coordinate activity within the CPU. It’s intrinsically linked to a bank of registers; these store data in exactly the same way as memory, but much closer to the CPU. They hold things like the address of the next instruction, the value of the instruction just processed, the address of the memory location in which to store the result, and the two values to be added together. The control unit decodes what each instruction is, i.e. LOAD, ADD or STORE, and then uses the registers to track the locations where data is to be fetched from or stored to.

 

The final component of our CPU is the arithmetic logic unit. This is what does the computation: in its simplest form it is a giant calculator made up of logic gates.

 

So in order to work, our CPU needs a programme. This could be as follows;-

Address Location = Data value in binary

01100011 = LOAD (10101101)

01100101 = Address(10011011)

10011011 = 9(00001001)

10011010 = ADD (10111011)

01100011 = LOAD (10101101)

10010111 = Address(10011111)

10011111 = 6 (00000110)

The CPU will fetch the first line, which holds the instruction LOAD. LOAD means nothing other than the binary value 10101101; it’s an arbitrary encoding telling the control unit to load the contents of the next memory location into a register. This next location happens to hold the address of the value, i.e. 01100101 = Address(10011011). The CPU then goes to memory location 10011011 = 9(00001001) and gets the value 9. It then fetches the ADD operation, which in turn causes it to go and get the value 6. The programme is basically asking us to add 9 and 6.

You get the picture: the control unit is manipulating the registers, memory and ALU to process the programme held in memory, using the buses to transfer data by address and the control buses to set or enable values in memory and registers, all synchronised and controlled by the CPU clock. The concept is simple, but the speed is what gives us the ability to process billions of instructions per second, hence the many wonderful things you can do with today’s computers, from spreadsheets to augmented reality. As a side note, the binary numbers are our machine code. To make life easier for programmers, instructions are given mnemonic names such as LOAD and ADD, forming an assembly language which an assembler translates into machine code; each microprocessor has its own mapping from assembly to machine code instructions. Assembly language can in turn be abstracted further into higher-level programming languages such as C++ or Java, where compilers interpret our code and map it to the processor specifics. Compilers allow us to write one set of code that can then be translated for many different CPU architectures.
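The fetch-execute cycle can be sketched as a toy interpreter. The opcode values, memory layout and accumulator design below are my own illustration in the spirit of the LOAD/ADD example above, not a real instruction set:

```python
# Arbitrary opcode encodings, in the style of the example above.
LOAD, ADD, HALT = 0b10101101, 0b10111011, 0b11111111

def run(memory):
    pc = 0    # programme counter: address of the next instruction
    acc = 0   # accumulator register
    while True:
        instruction = memory[pc]          # fetch
        if instruction == LOAD:           # execute
            address = memory[pc + 1]      # next cell holds the operand's address
            acc = memory[address]
            pc += 2
        elif instruction == ADD:
            address = memory[pc + 1]
            acc += memory[address]
            pc += 2
        elif instruction == HALT:
            return acc

# Programme: load the value at cell 6, add the value at cell 7, halt.
memory = [LOAD, 6, ADD, 7, HALT, 0, 9, 6]
assert run(memory) == 15   # 9 + 6
```

The loop is nothing more than fetch, decode, execute, repeat; real CPUs do exactly this, only in wiring rather than in software and billions of times per second.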

 


Figure 2.0 – The MOS 6502 CPU up close and personal. This is the microprocessor that powered the famous Apple II computer. The top right of the diagram shows the CPU clock; the big block at the bottom shows the control unit and registers; the section in the middle, near the top, shows the ALU. The buses are the neat grid lines, while the logic gates, made up of many layers of miniature transistors, appear rather messy in design.

So we’ve built a CPU. Let’s make it smart.

Any computer scientist knows the basics of the above. The challenge is to make this intelligent. How do we do this, and what do we have to work with;-

 

  1. Speed to execute and process billions of instructions per second.
  2. The ability to apply logic, i.e. encoding rules and knowledge.
  3. The ability to reprogramme ourselves.
  4. The ability to process instructions in parallel; today’s modern processors have many cores, i.e. many CPUs that can run in parallel.
  5. The ability to remember on a large scale.

So in essence we have the building blocks of intelligence; we just need the programmes to make it work. It’s all about the software engineering! But with software engineering there are many architectural approaches to implementing a programme that could deliver intelligence. A.I. can be considered as having the following schools;-

 

  1. Knowledge Representation
  2. Planning
  3. Natural Language Processing
  4. Machine Perception
  5. Motion and Manipulation
  6. Long term goals
  7. Neural Networks

Each branch of A.I. is exciting in its own right; however, for me the area closest to mimicking the way our brains work is that of neural networks, which have been around for some time. To be precise, they were initially pioneered as mathematical networks based on algorithms using threshold logic. This was as early as 1943, and was obviously not computational in the modern sense of the word but purely a mathematical model representing inputs, calculations and outputs which in turn feed into new inputs. Our brains don’t use mathematical notation at the level where neurones process electrical signals through the tens of thousands of connections they develop; each connection links to further connections, forming a network. This is our software, but a software that is able to reprogramme itself constantly based on calculations, outputs and environmental inputs. So in order to build A.I. software we must encapsulate the following capabilities;-

 

  1. Encode rules in a programme
  2. Enable that programme to take input in the form of data and the output of previous calculations
  3. The ability for that programme to score the success of the result of its calculation
  4. The ability for the programme to rewrite itself i.e. encode the rules again to optimise the result.

 

In short: input, calculate, output, learn, reprogramme. As a developer, even to think of building a universal programme that can rewrite itself is a task in its own right. It requires logic to understand what to rewrite. It requires logic to estimate the success of the result, i.e. whether it is worth rewriting. Moreover, that doesn’t even cover the task of building the logic and rules for the particular problem you want to exert intelligence on. It’s all very well building software focused on one particular implementation, but what about a universal problem solver, i.e. one computing mechanism that can be configured or programmed to solve many different types of problem? Isn’t that the strength of computers: that they are universal computing machines, regardless of what they are computing at hand?

 

What is needed is a framework to standardise our approach, and as with all things in the universe, when it starts to get hard we turn to mathematics to simplify our thinking! Special relativity is complex to say the least, yet Einstein dared to distil it to E=mc².


Figure 2.1

Figure 2.1 is an artificial neural network (ANN) dependency graph. Each neurone in the graph computes the standard weighted-sum form y = f(Σᵢ wᵢxᵢ), where the xᵢ are its inputs, the wᵢ are the weights on its connections, and f is the activation function.

A typical ANN dependency graph has three types of parameters;-

  1. The interconnection pattern between the different layers of neurones
  2. The learning process for updating the weights of the interconnections
  3. The activation function that converts a neurone’s weighted input to its output activation

As you can see from the diagram (Figure 2.1), we can take an input and apply multiple calculations, which in turn can feed into other calculations. However, there is one fundamental component that makes the whole framework work, and that is the cost function. This can be represented using the following formula;-

A typical choice is the mean-squared error, C(w) = (1/2n) Σₓ ‖y(x) − a(x)‖², where y(x) is the desired output, a(x) is the network’s actual output, and the sum runs over the n training inputs x.

The cost function is an important concept in learning, as it is a measure of how far a particular solution is from an optimal solution to the problem being solved. Learning algorithms search through the solution space to find a function that has the smallest possible cost; this is achieved through mathematical optimisation.

 

The above sounds complicated but what it gives us is a framework that we can use to architect a software platform to provide a level of artificial intelligence.  As a software engineer we can use these principles to encode rules and ensure those rules learn the success of their application.
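As a minimal illustration of the cost-function idea, here is a single artificial neurone, y = w·x + b, trained by gradient descent to minimise a mean-squared-error cost on a toy dataset. All values (data, learning rate, iteration count) are illustrative:

```python
# Toy dataset: the target relationship is y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w, b = 0.0, 0.0   # the neurone's parameters, to be learned
rate = 0.05       # learning rate

def cost(w, b):
    # Mean-squared error: how far the current solution is from optimal.
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

for _ in range(2000):
    # Gradients of the cost with respect to w and b...
    dw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    db = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    # ...and a step downhill: this is the 'learning'.
    w -= rate * dw
    b -= rate * db

assert cost(w, b) < 1e-3      # the cost has been driven near its minimum
assert abs(w - 2.0) < 0.1     # the neurone has learned w ≈ 2
```

Scaling this loop up from one neurone to millions, and from two parameters to billions, is essentially what modern neural-network training does.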


 

Figure 2.2 – Illustrates the principle of the Cost function and the neural network further.

I believe the principles of this approach will allow us to achieve a convergence of algorithms, big data and parallel computing, with the ability to weight success. This training principle, if implemented correctly, could overcome the limitations of today’s software platforms in achieving intelligence.

However, whilst this approach gives us a framework for architecting a software programme, as a software developer the prospect of building such a programme is daunting to say the least. As a development community we feed off each other through technology abstraction: we all stand on each other’s shoulders, and incrementally we use collective knowledge to enhance our innovation. This approach, and our current moment in history, makes A.I. achievable and viable for the average commercial application. However, we do need catalysts to allow us to start using this technology, and fortunately there are developments on the near horizon that could allow us to take an early opportunity.

 

Emerging usable A.I. Technology

 

Without losing sight of our zero-to-A.I. approach: we have discussed, so far, the basic mechanics of today’s modern computer, and we have then attempted to represent intelligence, learning capability and rule definition through neural networks and a cost function. All of this can be represented within a software programme along the lines of our earlier examples; obviously the scale, size and complexity of such a programme would be substantial, but we can see how it could be achieved with time and thought. Now, looking ahead, how can we make it easier for ourselves and certainly more commercially viable? My view is to follow the lead of the wider software development community through open source. This ultimately gives us the development scale, intelligence and skills to make A.I. achievable for the average company to gain commercially from it.


Figure 2.3 – TensorFlow, an open-source A.I. framework

Figure 2.3 illustrates an A.I. framework developed by Google as an open-source project. TensorFlow provides APIs that allow us to build on the concepts of a neural network and a cost function to exert intelligence from data and informational input.

 

TensorFlow’s history has been controversial; it is a second-generation technology, with the initial generation coming out of the Google Brain project. Fundamentally a very advanced implementation of neural networking, TensorFlow is one of the most advanced and developed frameworks for A.I. available in the public domain under open source. No doubt there are equivalent technologies kept under wraps at Apple, IBM, Oracle, Microsoft, SAP and others; however, TensorFlow seems to be the most powerful yet, and certainly it is something we can get a head start with.

This library of algorithms originated from Google’s need to instruct neural networks to learn and reason in a way similar to humans, so that new applications can be derived which are able to assume roles and functions previously reserved for capable humans. The name TensorFlow itself derives from the operations such neural networks perform on multidimensional data arrays. These multidimensional arrays are referred to as “tensors”, although the concept is not identical to the mathematical concept of tensors, on which we will provide more insight shortly.

The purpose is to train neural networks to detect and decipher patterns and correlations. This framework is available to us now, but what makes it even more appetising is the release last month, May 2016, of a custom-built ASIC chip for machine learning, specifically tailored for TensorFlow and neatly named the Tensor Processing Unit (TPU). Google revealed they had been running TPUs inside their data centres for more than a year, and have found them to deliver an order of magnitude better performance per watt for machine learning. Because each chip is inherently small in design, a high-volume, low-precision 8-bit arithmetic processor, it puts machine learning in the realm of Internet of Things computing, meaning we can use distributed processing for intelligence and compute learning close to the source, i.e. sensors, data or subsystems. This is very exciting, as it provides machine learning on a chip that could be plugged and played anywhere.

So how does it work?

Understanding TensorFlow means we need to understand not only how a neural network works, but also the concept of tensors. So what is a tensor?


 

Tensors are geometric objects that describe linear relations between geometric vectors, scalars, and other tensors. In terms of a coordinate basis or fixed frame of reference, a tensor can be represented as an organized multidimensional array of numerical values. The order (also degree or rank) of a tensor is the dimensionality of the array needed to represent it, or equivalently, the number of indices needed to label a component of that array.

Tensors are important in physics because they provide a concise mathematical framework for formulating and solving problems in areas such as elasticity, fluid mechanics and general relativity. For me, this is the rich mathematical framework Google has gone on to use to represent and map learning. A tensor can sum up the total of learning at a particular point in time. By enabling this information to flow through a graph of nodes, with each node being a computation, we allow the tensor to be computed on, and the computation can then update the tensor to improve the information and therefore the knowledge it holds. This concept can be represented and configured as a dataflow graph, which represents our neural network of data and computations. TensorFlow also makes use of one other concept: edges. Edges can be seen as the inputs and outputs of a node, enabling a tensor to flow into another computational node.
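As a rough illustration of the dataflow idea, here is a toy pure-Python sketch, not the real TensorFlow API, showing nodes, edges and deferred computation: the graph is built first, and nothing is evaluated until we ask for a result.

```python
class Node:
    """A computation node; its inputs are the edges flowing into it."""
    def __init__(self, op, *inputs):
        self.op = op
        self.inputs = inputs

    def run(self):
        # Evaluate the input nodes first, then apply this node's operation
        return self.op(*(n.run() for n in self.inputs))

class Const(Node):
    """A leaf node holding a fixed value."""
    def __init__(self, value):
        self.value = value

    def run(self):
        return self.value

# Build the graph for (a * b) + c; nothing is computed yet
a, b, c = Const(2.0), Const(3.0), Const(1.0)
mul = Node(lambda x, y: x * y, a, b)
add = Node(lambda x, y: x + y, mul, c)

print(add.run())  # 7.0
```

The deferred-evaluation structure is the point: the same separation of graph construction from execution is what lets TensorFlow optimise and distribute the computation.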

TensorFlow also gives us useful tools such as TensorBoard, which helps us manage the complexity of the computations TensorFlow needs to be capable of, such as training massive deep neural networks. These do get complex, and the architecture behind them needs to be structured to represent both the machine learning process and the data that will be utilised, not to mention the debugging and optimising of the programmes that get developed.

TensorFlow is deeply numerical. The first applications of this technology centred on image recognition, a natural step when dealing with reams of bitmap data, but the potential applications are endless. The key to understanding where it can be applied is understanding the potential of your data. You need to get into the mindset of how a human being looks at data and draws conclusions: what does it mean to me and my business, and what decisions can I derive and act on? Once we have this view, the question becomes how to plug in technology that can take the data, make the decision, store the learning and then use the decision within the rest of our technology. This for me is the trickiest part, because it is simply not worth the effort to use this technology for today’s simple decisions within, say, the average e-commerce website (this customer bought this, likes these, or may want to see this). That problem is too simple; AI requires a real problem that only a human could make a decision on. The truth of the matter is that humans reprogramme themselves quickly to perform new tasks, and to do the same on a routine basis you would need to rebuild your neural network, which again is not efficient. A real task, then, is the kind of decision that can be built into a framework or factory of decision making: the type of task that requires humans to process a constant, similar stream of information, make informed decisions and learn from those decisions. Conversation, sentiment, image recognition, people recognition, footfall, traffic flows, scheduling, testing, performance analysis. All of these share the same steps of information processing:

  1. Data gathering
  2. Data normalisation
  3. Step-by-step data processing to arrive at a decision
  4. Compare that decision: can we improve our model?
  5. Look at all those micro decisions and abstract not just micro meaning but macro meaning
  6. Use the decision
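The steps above can be sketched as a simple Python pipeline; the data, thresholds and function names here are entirely hypothetical, chosen only to show the shape of the process.

```python
def gather():
    """1. Data gathering: a hypothetical sensor feed."""
    return [22.1, 22.4, 35.9, 22.3]

def normalise(readings):
    """2. Data normalisation: scale readings into the range 0..1."""
    lo, hi = min(readings), max(readings)
    return [(r - lo) / (hi - lo) for r in readings]

def decide(normalised):
    """3. Step-by-step processing to arrive at a decision."""
    return "alert" if max(normalised) > 0.9 else "ok"

def feedback(decision, expected):
    """4. Compare the decision: can we improve the model?"""
    return decision == expected

raw = gather()
decision = decide(normalise(raw))
print(decision)  # alert
```

Steps 5 and 6, abstracting macro meaning from many micro decisions and acting on the result, are where the surrounding architecture comes in.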

The hardest thing for AI is finding the right application, one that makes the effort worthwhile. But when the right applications are found, the benefit to productivity could be exponential.

A.I. Architecture

We’ve hinted over the last few paragraphs at the importance of architecture in AI. It’s worth exploring this in a little more detail by introducing the concept of agents. Agents in their own right could simply reflect the nodes within our environment, used for pure computational purposes. However, they could equally be whole dataflow graphs in their own right, each making and learning from its own decisions. These could in turn come together at a macro level to make even bigger decisions. The efficiency of this approach allows us to develop intelligence at every level of our architecture. I think the approach has merit, and it is something every software architect should start to think about for every component of their enterprise architecture. Can we make each component smart, even though we may not have the ultimate application for AI at this point in time? For example, take an average ERP platform: which bits could be enhanced to be smart? Inventory could have some intelligence around stocking and buying patterns, an e-commerce website could have intelligence around demand, and finance systems could be intelligent about wider patterns of information. External intelligence could be used to feed demand, understand sentiment, feel the emotion of consumers or spot a fashion emerging. This approach assumes the role of agents that collectively work together to feed the machine, and the machine then has bigger and bigger options to suggest decisions or even generate ideas. As we develop lines of code we ask ourselves: am I being smart in this approach, is it clean, is it efficient? We need to start asking: can I make this little bit smarter? The key is that it’s all about the sum of all the parts.
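As a hedged sketch of the idea, here is a toy Python example of micro agents feeding a macro decision; the agents, rules and data are all hypothetical, standing in for components of an ERP-style estate.

```python
class Agent:
    """A micro decision maker for one component of the estate."""
    def __init__(self, name, decide):
        self.name = name
        self.decide = decide

    def report(self, data):
        # Each agent returns its name and its own micro decision
        return self.name, self.decide(data)

# Hypothetical micro agents: inventory and demand intelligence
inventory = Agent("inventory", lambda d: "reorder" if d["stock"] < 10 else "hold")
demand = Agent("demand", lambda d: "rising" if d["sales_trend"] > 0 else "flat")

def macro_decision(reports):
    """The 'machine' combining micro decisions into a macro one."""
    r = dict(reports)
    if r["inventory"] == "reorder" and r["demand"] == "rising":
        return "expedite purchase order"
    return "no action"

data = {"stock": 4, "sales_trend": 1.2}
print(macro_decision([a.report(data) for a in (inventory, demand)]))
# expedite purchase order
```

Each agent stays simple and inspectable; the intelligence emerges from the sum of the parts, which is exactly the architectural point.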

One particular application we see for AI in the digital agency world is resource planning. Resource planning of valuable people with valuable skills is an art, and to me it presents the perfect challenge for A.I. Project x needs people with skills y and z to start at time t, but persons y and z are working on project w for the foreseeable future, while client f needs project x starting now and commercial team c needs project x to generate revenue by time t minus yesterday. Everyone familiar with this problem? Factor in wider environmental factors: skills y and z are in demand now, but skills m and j are growing rapidly, potentially making y and z redundant by time t + 30, whilst skills m and j currently cost twice as much. This is exactly the kind of challenge we are looking for. Breaking down the steps of the decisions and normalising the intelligence required at a micro level allows us to build agents of tensor flows that make decisions at each level based on their wider implications. The problem is that, as humans, we tend to forget how we arrive at a decision; we use the language “it feels”. “It feels” is, for me, wrong; logic is right. We therefore need to break down each step in our A.I. architecture and derive micro intelligence that can be used by the sum total to arrive at a conclusion and present the rationale for the solution. We also need to remember one fact: we need to learn. We need to arrive at conclusions in our scenario planning, training the system so that if we arrive at s then that is very bad indeed, but at least the machine knows how it got to s and why it is bad. Training, as with humans, isn’t instinct; it is learnt behaviour. It doesn’t “feel like” for a reason.
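One way to start replacing “it feels” with logic is to make each factor an explicit, inspectable score. A toy Python sketch, with entirely made-up people, rates and weights, might look like this:

```python
def score(person, project):
    """Rate one candidate assignment; each term is an explicit,
    inspectable factor rather than a gut feeling."""
    s = 0.0
    s += 3.0 if project["skill"] in person["skills"] else -5.0  # skill fit
    s -= person["committed_weeks"] * 0.5                        # availability
    s -= person["day_rate"] / 1000.0                            # cost
    return s

# Hypothetical resourcing data
people = [
    {"name": "Y", "skills": {"y"}, "committed_weeks": 6, "day_rate": 600},
    {"name": "M", "skills": {"m", "y"}, "committed_weeks": 0, "day_rate": 1200},
]
project = {"name": "x", "skill": "y"}

best = max(people, key=lambda p: score(p, project))
print(best["name"], round(score(best, project), 2))  # M 1.8
```

Because every term in the score is named, the system can always present the rationale for its recommendation, which is the requirement the paragraph above sets out.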

So we are arriving at the end of our journey from zero to A.I. We should now have the confidence that we have the technological capability to build intelligent machines, but appreciate that the problem for A.I. is the problem itself, i.e. the type of application that we can apply A.I. to. We should also appreciate that micro steps are the answer, and that applying intelligence in small steps will add up to a sum-of-all-parts approach to A.I. I see the next few years as being about seeding A.I. Organisations need the awareness that the technology is possible, and we need to start sowing some seeds so that, when the time comes, we can reap the rewards of what will at some point in the future be a rapidly growing industry. The key, as always, is both imagination and innovation.

As part of this series of posts, I’ve prepared some presentations that distil this thinking further and allow time for each area of this document to be questioned. I’m hoping to present these in person shortly, and also to video the outcome. The fun part of this process is that we are never too far away from the basics of computing and electronics, and working through the journey from zero to A.I. reminds us how remarkable computing and electronics are. To quote Steve Jobs, we are truly creating a “bicycle for the mind”.

Next up: the beginnings of quantum computing, how we can continue the evolution of the transistor, and the quantum leap it can provide to our current technology capability.

Should we ban the word e-commerce?

Ok, so for the last three years this word, above any other, has been at the core of my day. However, I feel it’s probably time to make a change and not even acknowledge that the word exists. Why? Because I think it creates a kind of thinking and behaviour that is disruptive to innovation. It signals a clear vote for one of the major enterprise commerce players, or that you need to look at partners with a proven checklist of capability. Sound familiar? Big, bulky, slow, time-consuming, old I.T. world.

But surely this goes against everything we have built and tried to achieve over the last three years?  Why would I say such a thing?

I say such a thing because it’s making us look at every opportunity as a platform, rather than as an innovation in business or service offering. More and more, I am having conversations about new ideas for services or business lines that don’t involve a shopping cart. This is where the next growth opportunity is going to be. How do I transact with my customers and my partners without a shopping cart? Do I turn all my relationships into subscriptions, or do we create tools, apps or accounts with one-click intelligent transactions? Are we already getting the feeling that there are more and more services out there that take our money with such convenience that we enjoy the pain? Are we seeing better mobile apps that play to convenience around the transaction? Ringing any bells? I can count five mobile apps that take my money every week with such convenience that I think they have done me a favour, even though I’ve gone to the trouble of paying an extra 20p for the process.

What does this mean for us and the e-commerce vendors? Well, vendors, you are simply not moving quickly enough. For us, we need to concentrate on the transaction and the service. Above all, we need to think outside the box and innovate, even if it means adding bespoke technology back into the mix.

Amazing stories: The Drum catches up with digital agency Amaze as part of 20/2000 visionaries series

Originally spun-out from a university research unit, creative and technical agency Amaze has been pushing the boundaries of digital communication since before the rise of the internet. In the latest of The Drum’s 20/2000 Visionaries series of features, where we mark the 25th anniversary of London digital agency Precedent by celebrating 20 top digital shops founded before 2000, The Drum finds out more about the pivotal moments in Amaze’s history. Read more….

So we’ve implemented hybris, what’s next?

With hybris leading the pack as the most widely selected enterprise e-commerce platform, more and more organisations have moved to implement it into their business. This is often no simple undertaking, but with the promise of great rewards, both in terms of functionality and future-proof capability, it will have driven your passion and rigour to get the first instance of hybris into your business. However, once the platform is up and running, where do you turn your strategy and thinking to?

Naturally the answer should be trading, merchandising and optimisation. This in itself is an ongoing job that will continue for the next five years and will involve constant testing and evolution. However, if I’m a CIO, is there a role for me to think outside this cycle of improving sales conversion and optimising revenue? To understand this, it is essential to understand the capability of the platform and how it can play a much bigger role within your business. The following are a number of useful considerations to help you formulate your strategy:

1) Owning the product landscape.

Put a price on your product data. During your journey to implement hybris you will have taken inevitable pain in processing, restructuring, understanding, designing and enhancing your product data. You will probably have a set of clear integrations to get your data into the platform and a good set of processes to enrich that data, making it a single source of truth for all product information and its associated assets. This journey ended with you being able to display that product information in a nice new e-commerce platform for consumers to buy from.

It shouldn’t end there, however. You have created this single source of truth, and hybris is now the owner of all your enriched product information. You need to create tools to expose that data to other services, whether that’s feeds to sales apps, POS platforms, mobile apps, omnichannel environments or even print channels. The first step is to make the data openly available to your business by developing a set of simple APIs for subsidiaries, partners or even other departments to call to consume it. What this gives you is control, reuse, savings, quality and consistency: a single source of truth. Get this right and you unlock one of the biggest benefits of hybris. Get it wrong and you will see your hard work undone by data duplication in other systems as your organisation’s digital evolution continues.
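As a minimal sketch of the principle, assuming a hypothetical in-memory store standing in for the hybris product catalogue, a channel-filtered feed behind such an API might look like this in Python:

```python
import json

# Hypothetical product records; a real implementation would read
# from the hybris PCM, the actual single source of truth.
PRODUCTS = {
    "SKU-001": {"name": "Trainer", "price": 59.99, "channels": ["web", "pos"]},
    "SKU-002": {"name": "Hoodie", "price": 39.99, "channels": ["web"]},
}

def product_feed(channel):
    """Expose the single source of truth as a channel-filtered JSON feed."""
    items = {sku: p for sku, p in PRODUCTS.items() if channel in p["channels"]}
    return json.dumps(items, sort_keys=True)

print(product_feed("pos"))  # only SKU-001 is flagged for the pos channel
```

The point is that every consumer, POS, mobile, print, calls the same feed, so the enriched data is reused rather than duplicated.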

2) Development of a master consumer view

Once hybris is up and running and you are generating sales data, you will start to build a consumer data set. Each of your consumers will be asked to create an account within hybris. You may have implemented a loyalty programme alongside hybris, again collecting more information on each user’s buying habits or profile. You may be using hybris’s advanced personalisation module, collecting yet more profile information. This is an important next step in putting hybris at the centre of your single consumer view. Each hybris consumer needs a dedicated account, and because that account is where money changes hands with the consumer, it is one of the most important touch points with your customer. hybris will therefore have more authority than other consumer data collected within your organisation, and ultimately, if you are collecting consumer information elsewhere, you are more than likely to drive those consumers back to your commerce platform.

It is therefore essential that hybris should look to own the master consumer model, even though the actual data may exist elsewhere, such as in SAP or a leading CRM platform. Once this principle is established, you should look to extend the master consumer model beyond the basic e-commerce profile. This can be done by considering the following:

  • Create a global consumer passport ID using hybris authentication via an open standard such as OpenID.
  • Extend the profile data to authenticate with leading social platforms, allowing you to harvest their profile data.
  • Extend the profile through loyalty programmes.
  • Extend the profile through trading and analytics data.
  • Extend the profile through third-party services and apps.
  • Create APIs into hybris to allow other services to access key consumer information and to contribute to the existing profile.
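A toy Python sketch of the principle behind the list above, that hybris owns the master record and other sources only contribute fragments, with hypothetical profile data:

```python
def merge_profile(master, *fragments):
    """Build a single consumer view: the master record wins on any
    conflict, and fragments only fill in what is missing."""
    profile = dict(master)
    for frag in fragments:
        for key, value in frag.items():
            profile.setdefault(key, value)  # master takes precedence
    return profile

# Hypothetical sources: hybris holds the master, others contribute
master = {"id": "cust-42", "email": "a@example.com"}
loyalty = {"tier": "gold", "email": "stale@example.com"}
social = {"twitter": "@a"}

print(merge_profile(master, loyalty, social))
```

Keying everything on the hybris-owned ID is what later makes the integration back into a global CRM platform a small step rather than a large one.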

I believe that extending hybris to own first the ID and then the consumer profile makes it a much easier step to then integrate the data collected back into a global CRM platform. If you were to approach this the other way round, or via an independent sign-on technology, you would be left with a much greater integration challenge.

3) Protect your CMS

hybris is an enterprise e-commerce suite, and with it comes a comprehensive set of core products: a PCM (product content management platform), a CMS (content management system), a customer services cockpit and a search engine, as well as the e-commerce platform itself. With such an extensive enterprise toolset, you will no doubt face challenges to it as a one-platform-fits-all solution. Most of the challenge to hybris will come from the traditional content management vendors, who have themselves expanded their customer engagement capability as well as their ease of use and agility in handling campaign-based content. The two can naturally complement each other, with one focussed on customer acquisition and the other on conversion into sales. However, they do not naturally sit well together from an architecture perspective. For example, you cannot unpick a commerce platform so that it is wrapped by a content management platform, with the CMS handling the entire presentation layer. If you do this, you risk breaking the roadmap of each product and cutting out most of the advantage of the e-commerce platform for marginal gains in functionality. There is, however, a case where the two can coexist, as long as the boundaries are clearly marked and, more importantly, respected. So how is this done?

  • hybris must take the lead; it is the core platform and engine behind your commerce site. It must therefore take the lead in delivering all content and presentation for the commerce journey: product display, navigation and search. Breaking this will fundamentally undermine your architecture.
  • The content management journey must focus on engaging the consumer higher up the conversion funnel, via campaign activities.
  • The architecture of the two platforms needs to be clearly planned in advance. Over-reliance on APIs will cut out the value of both technologies. Whilst APIs do exist in both technology sets, you cannot fall into the trap of assuming they will deliver every element of functionality that the product suites deliver. They are there to expose data only, not whole suites of technology such as checkout processes and merchandising capability. Ignoring this will incur expensive and constant cost, as you initially build swathes of functionality and then maintain it to keep it in sync with both products’ roadmaps.

4) Be wise with mobile

Mobile is overtaking desktop browser environments. hybris has traditionally been fragmented here, with accelerators having a clear split between mobile and desktop versions. More up-to-date implementations of hybris have led to the partner community adapting one or more of the accelerators into more responsive templates. However, this may still leave older implementations of hybris needing to revisit their front-end templates. Our view is that this can be done efficiently without a major rebuild of the front end. At Amaze we have utilised the Jeet grid system in our templates, which works fairly effortlessly with hybris; other approaches include Bootstrap and the Foundation framework. Where to start this process is key. We do not favour redesigning existing templates; we only look to design the degrade options for smaller screen sizes, then work with the existing CSS elements to ensure they interoperate with the framework. Our recommendation around mobile is not to open a can of worms involving a complete redesign. Work with the existing accelerators and your chosen responsive framework as a starting point; do not go back to the drawing board or Photoshop, as that will lead to an expensive redesign of all templates.

If you are starting out, avoid the mobile accelerators for the time being.

5) Reporting

Reporting will be an ongoing theme as you get to grips with your data and trading activity. It will always involve a mixture of services, including hybris, your ERP and your analytics platform. You will, however, find all of these services fairly static, and as you start to digest your data you will want to slice and dice it dynamically. This is where reporting tools such as Business Objects, Tableau and Mixpanel come into the equation. When you start to consider such projects, though, you need to understand your data. We recommend grouping your data into pools and investing in the export mechanisms to get the data out of each platform. Again, architecture and approaches need to be carefully thought through here, as there is an abundance of technology available to process your data. We therefore recommend not doing this ad hoc: look to create a service to pool all your data flows, and perhaps combine that with a central data warehouse.

6) Complementary Products

Once hybris has been implemented, keep a close watch on the vendor landscape. There are lots of complementary services coming online that enhance hybris’s capability, whether hybris partner products or wider services from the e-commerce marketplace. The early winners in this space are the A/B testing toolsets that look to optimise trading content and merchandising; Optimizely is a winner in this category and can work well with hybris. But there is so much more that can benefit your implementation. My recommendation is to talk to us first; we can demonstrate some up-and-coming tools and services that may benefit you.

7) Advanced personalisation

Following on from our master consumer view, a natural progression with hybris is the advanced personalisation module. This cannot be taken on until you have a clear strategy around your master consumer model, but once you have that answer, the hybris advanced personalisation module can aid:

  • An increase in your average order size, by collecting customer information from all sources, comparing it to adaptable targeting rules and providing a personalised shopping experience.
  • The definition of meaningful customer segments, dynamically assigning customers to those segments based on online behaviour.
  • Support for behavioural targeting across multiple channels, including online, offline and mobile.
  • Monitoring of the outputs from rules to assess results and gain insight into customers and their online behaviour, in order to adjust the product mix and develop effective marketing campaigns.
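A hedged Python sketch of the kind of targeting rules involved, with made-up segments, thresholds and content mappings rather than the module’s actual rule format:

```python
def segment(customer):
    """Assign a customer to a segment based on simple behavioural rules."""
    if customer["orders"] >= 10:
        return "loyal"
    if customer["last_visit_days"] <= 7:
        return "active"
    return "lapsed"

# Hypothetical content chosen per segment
CONTENT = {
    "loyal": "early-access",
    "active": "cross-sell",
    "lapsed": "win-back",
}

c = {"orders": 2, "last_visit_days": 3}
print(segment(c), CONTENT[segment(c)])  # active cross-sell
```

Because the rules read from the master consumer profile, the quality of the personalisation is only ever as good as the master consumer model behind it.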

Again, the capability of this module further underpins the necessity of keeping hybris at the core of your digital estate, rather than diluting it via competing CMS technology, because personalisation can only truly be achieved via the platform that controls the product and pricing data, which in all cases is your commerce platform.

8) Better search

hybris utilises the Apache Solr search engine. It is fully integrated within hybris and provides a rich set of search and navigation capability. However, it can be improved through extension and customisation, or by considering other complementary technologies. Enter SDL’s Fredhopper search platform. The strength of this technology, combined with hybris, is its ease of use for merchandisers. It is a technology for consideration, but only with close respect to the overall technology architecture, and not as a replacement for the hybris presentation layer.

9) Finally, revisit the full specification capability of hybris

There are some great documents in hybris’s wiki. You will find functionality and capability that you did not even know existed. Ask Amaze to show you something new.

HOW BUSINESSES ACHIEVE GLOBAL SUCCESS.

We have released our latest whitepaper, ‘Digital commerce – how businesses achieve global success’; a best practice guide which identifies how businesses can harness the power of global eCommerce solutions.

With online sales predicted to reach $1.6 trillion by 2018, the white paper examines the growing adoption of connected devices and the current state of the changing commerce landscape and identifies the seven critical steps for global success in a competitive marketplace.

The seven steps to success are summarised as follows:

  1. Getting the financials right – Solid financial planning is key, and it is crucial to allocate enough time and budget before commencing a project of this kind. Organisations need to build in longer-than-anticipated timescales and recognise that a replatforming roll-out will be competing for resources.
  2. Choosing the right technology and architecture – The complex technical integration needed for a global digital commerce solution must be respected right from the start of a project. The architecture design needs to integrate with ‘best of breed’ technologies, and the solution needs to be agile and able to be rolled out quickly and easily across the different regions.
  3. Putting the right people in place – A digital commerce solution is only as good as the people behind it; building strong team chemistry is key, and it is the gel that will bring everything together. The strength of the team really is the difference between success and failure.
  4. Data readiness is key – Getting product data right from day one, including an understanding of how a product is categorised and searched for, needs to be the core foundation of a solution. It is also important to start planning data warehouses and intelligence dashboards to capture trading data.
  5. The importance of global governance – Every global digital commerce solution needs a visionary to head the team and push the boundaries. This champion needs to drive the momentum of the project and maintain the pace of the global roll-out so that delivery takes place as scheduled.
  6. Global solutions are the future – The concept of a single global solution may seem daunting at first, but one platform means synergies and shared costs, as well as the shared benefits of collectively improving the solution. Digital commerce has the power to transform business processes, bringing real cost benefits throughout the entire commerce lifecycle.
  7. Continuous optimisation – Employing a strategy of continuous optimisation is essential to ensure that progressive enhancements are made to the solution and to deliver added flexibility. An agile solution will allow processes to be quickly improved in line with changing business needs for the long term.

Commenting on the white paper’s insights, Matt Clarke, Chief Technology Officer at Amaze, said:

“While it is encouraging to see some organisations standing on the threshold of digital commerce, others are still unsure of how and where to begin. Organisations need to embrace this new era of digital commerce, as real growth and success will only come with an ongoing and accurate understanding of the changing needs of consumers. For those organisations that do make this leap, by following these seven critical steps, they are set to expect real rewards in the long-term.”

Download the full white paper here.