
Cloud computing? It’s become a catch-all phrase that means different things to different people.

13 October 2015

Alf Tornatore, Director of Global Sales at CloudMed, discusses how next-generation healthcare management system Clarity uses the Cloud in a wholly different way.

Cloud computing has become a bit of a cliché. Many products claim to be Cloud-enabled, but the term is used very loosely, and there are real differences in the way these products are designed, the way they perform and the way they use the Cloud. The subject is poorly understood, even within the industry.

With Clarity, we’ve spent the past four years developing a healthcare management system that uses the Cloud in a completely different way.

To understand what this means, rather than talk about abstract concepts, it helps to look at the way computing has developed over the past 40 years.

In the beginning there was the mainframe

In the early days, back in the 1970s, computers were rare, expensive things and there was a need to maximise their usage and share them as widely as possible. Mindful of the benefits of economies of scale, companies like IBM and DEC produced mainframes and minicomputers with time-sharing operating systems that allowed many people to share a single machine, reducing the cost per user.

These systems had a large central processing unit and ‘dumb terminals’ with no processing power of their own. The terminals were simply passive screens that displayed the user interface, rather like TV screens, and lots of people were able to share the one machine through them.

With the birth of the PC there came a need for networking

Then came the 1980s and, just as video killed the radio star, the rise of the personal computer (PC) brought the mainframe era to an end for general applications.

By the 1990s there was huge demand for networking PCs, and the computer industry set about looking for ways to achieve this affordably and easily. It returned to the old method of passive screens: the dumb terminal, now rebranded as the ‘thin client’. A case of history repeating itself.

Companies such as Citrix produced software that could host a number of concurrent users on a central server, essentially by fooling the application software into thinking it was running on a conventional PC.

The great advantage of this was that it was an affordable way for users to network their desktop PCs. They could extend the life of their existing applications with minimal redevelopment.

However, this approach relied on dividing up the server and turning the desktop PC into a passive screen – it received a signal and displayed it just like the ‘dumb terminals’ of the past.

But conventional PC networks have their limitations…

The significant issue with this method is that graphical user interfaces like Windows are very processor intensive, with around 70 per cent of the system’s processing power devoted merely to running the user interface. When you sit at your PC, move windows around and click on menus, it feels responsive because the system is producing an animation, something like a video stream, running at around 30 to 60 frames per second. This requires a lot of processing power.

With all of this processing handled by the central server, problems arise when the network needs to grow. Imagine the drain on bandwidth and resources when all of that data is coming off a central server with hundreds of users.

Let’s say we want to double the number of users on a network from 20 to 40. Each thin client added to the network requires its own PC to be simulated within the central server, so the server is now doing the work of 40 computers in one box, draining resources.

… and adding more PCs is a burden on the system

Each additional user is yet another burden on the system, bogging it down further each time. Responsiveness drops off until the whole system becomes laggy and unstable. This approach to networking creates massive bottlenecks and places an upper limit on scale and affordability. Basically, what you have is a rickety house of cards: sooner or later one more card is added and the whole stack falls over, not just the top card.
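To make the point concrete, here is a back-of-envelope sketch in Python of how the thin-client model scales. The figures (per-user load, server capacity) are purely illustrative assumptions, not measurements of any particular product.

```python
# Back-of-envelope model of thin-client scaling.
# All figures are illustrative assumptions, not measurements of any real product.

SERVER_CAPACITY = 100.0   # arbitrary units of work the server can sustain
LOAD_PER_USER = 2.5       # cost of simulating one user's desktop on the server

def server_utilisation(users: int) -> float:
    """Fraction of server capacity consumed when every user's
    desktop session is simulated on the central server."""
    return (users * LOAD_PER_USER) / SERVER_CAPACITY

for users in (10, 20, 30, 40):
    u = server_utilisation(users)
    status = "fine" if u < 0.8 else ("struggling" if u < 1.0 else "falls over")
    print(f"{users:>3} users -> {u:.0%} of capacity ({status})")
```

The shape of the result is the point: load grows in lock-step with head-count, everyone shares the one ceiling, and once it is crossed every user feels it at once.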

If you have a problem with your desktop PC at home, you’ll likely have to reboot, which means you may lose the last 10 or 20 minutes of your work. If the same thing happens to a server running a network of 100 desktops, a great many people lose their work at once. To pre-empt these failures, back-up mechanisms are built into the system, but there is only so much that can be done. Compromises have to be made.

Taking a Web Services¹ approach to networking Clarity…

In 1999, our team brought Australia’s first internet-enabled medical application to market: Monet (Medicine on the Net). But we recognised the limitations of the old way of networking.

So in 2011, after a great deal of thought and planning, we began to develop Clarity.

Clarity is built around Web Services. The user interface component of the application is loaded on the desktop PC. This is the ‘Clarity Client’, and it can be installed remotely and updated automatically.

Its sole job is to display the user interface and respond quickly to user input. With this component running on the desktop PCs, the burden on the central server is reduced by around 70 per cent. The central server, in turn, hosts the business logic component of the application, which accounts for the remaining 30 per cent or so of the processing.
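To illustrate the kind of exchange this split implies, here is a minimal sketch of a Web Services-style request: the desktop client draws the screen itself and only asks the server for data. The endpoint, fields and function names are hypothetical, invented for the example rather than taken from Clarity’s actual interface.

```python
# Hypothetical illustration of a Web Services-style exchange.
# The URL, query parameters and payload are invented for the example.
import json
import urllib.request

def fetch_appointments(server_url: str, clinician_id: str) -> list:
    """Ask the server's business-logic tier for data; the desktop
    client is responsible for drawing the screen itself."""
    request = urllib.request.Request(
        f"{server_url}/appointments?clinician={clinician_id}",
        headers={"Accept": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        # A few kilobytes of structured data come back, not a video stream.
        return json.loads(response.read())

# The client then renders this list locally, using its own CPU,
# rather than receiving a pre-drawn picture of the screen.
```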

… means that adding PCs delivers more processing power

Using the same scenario as above, increasing the number of users from 20 to 40 doubles the available processing power. Each desktop PC operates autonomously, so every machine added to the network adds to the pool of available processing power.

In this approach, rather than streaming a processor- and bandwidth-hungry video signal from the central server, Clarity sends relatively tiny packets of data back and forth. This places an order of magnitude less burden on the server. Of course, performance will drop off eventually, but the curve is far more gradual and the threshold much further away, compared with the old approach, where performance falls away quickly before ‘dropping off a cliff’.
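A rough comparison of the bandwidth involved, again using purely illustrative figures, shows why this matters: even a modest screen stream at 30 frames per second dwarfs the occasional small data packet.

```python
# Illustrative bandwidth comparison; all figures are assumptions, not measurements.
FRAME_SIZE_KB = 50            # one compressed screen update in a thin-client stream
FRAMES_PER_SECOND = 30
PACKET_SIZE_KB = 4            # one request/response pair in a data-only design
INTERACTIONS_PER_MINUTE = 20  # how often a user actually asks the server for something

stream_kb_per_min = FRAME_SIZE_KB * FRAMES_PER_SECOND * 60
packets_kb_per_min = PACKET_SIZE_KB * INTERACTIONS_PER_MINUTE

print(f"Screen streaming : {stream_kb_per_min} KB per user per minute")
print(f"Data packets     : {packets_kb_per_min} KB per user per minute")
print(f"Ratio            : roughly {stream_kb_per_min // packets_kb_per_min}x")
```

With these made-up figures the gap is several orders of magnitude; the exact ratio obviously depends on the workload, but the shape of the comparison is what counts.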

This is a key differentiator. Clarity can handle hundreds or even thousands of users simultaneously, which isn’t possible with the old approach. Clarity is truly ‘next-generation’.

For Clarity, the Cloud is not just a ‘big hard disk in the sky’ but rather a collaborative distributed network.  It has been designed to respond to the changing nature of healthcare delivery.  As larger networks of primary practices, day clinics and specialist care centres become the standard, requirements and expectations are changing.  High-end performance, scalability and reliability are the key issues impacting clinicians and healthcare managers on a daily basis. Clarity is designed to meet them.


¹ A software system designed to enable machine-to-machine interaction over a network.