
May 09, 2007

Comments

Bruno

Hi Shai,

Good stuff. For the "people axis" I think we'll see a completely new fragmentation of skills and roles along those blocks, especially in the upper area:

- Low End: People who use services like del.icio.us, Freebase etc. and want direct, "command line" style access to their systems.

- Middle: People who don't want to deal with the inner workings but customize their workflow using things like Visual Composer or Teqlo (http://www.teqlo.com) that harmonize and aggregate (currently the Excel macro aficionados).

- Upper: The users who pick up that cool new app/widget from a coworker to get their job done in a better way.

All the necessary tools are in place or just around the corner. Shameless self-plug: we (http://www.systemone.at/screencast/eng) use S3 as storage for larger files, an in-memory triple store on appliances for metadata, and EC2 plus cheap redundant European boxes for various application components such as a 3 TB news clipping index. Yet whether you use the web, mobile, or (soon) desktop client, you don't notice that you're in fact dealing with a secure system spanning dozens of locations.
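A minimal sketch of the storage split Bruno describes, assuming boto3 for S3 access; the bucket name, keys, and the toy TripleStore class are illustrative assumptions, not System One's actual implementation:

```python
# Large binary files go to S3; lightweight, queryable metadata lives in an
# in-memory triple store. Bucket name and helper names are hypothetical.
import boto3

class TripleStore:
    """Toy in-memory triple store: (subject, predicate, object) tuples."""
    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        return [t for t in self.triples
                if (subject is None or t[0] == subject)
                and (predicate is None or t[1] == predicate)
                and (obj is None or t[2] == obj)]

s3 = boto3.client("s3")
metadata = TripleStore()

def store_document(local_path, doc_id, bucket="example-clipping-archive"):
    # Push the heavyweight payload to S3...
    key = f"documents/{doc_id}"
    s3.upload_file(local_path, bucket, key)
    # ...and keep only lightweight facts about it in the triple store.
    metadata.add(doc_id, "storedAt", f"s3://{bucket}/{key}")
    metadata.add(doc_id, "sourceFile", local_path)
```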

But the most important question was also raised by Armando: what does it mean that we have to start taking "machines" seriously as stakeholders, as they become an active part of the whole ecosystem? What does it mean for businesses when a company (http://infoproc.blogspot.com/2005/11/simons-thorp-and-shannon.html) is successful that, on some days, is single-handedly responsible for a double-digit percentage of the total trading volume on some global financial exchanges, and at the same time is proud that no human being makes a buy/sell decision? If it can be done for such a complex system, what could the role in SCM, marketing / demand prediction, etc. be? If you start imagining what BMW would look like if it were run like Renaissance Technologies, using your model as the foundation, with people intervening at some new and select points, that yields an amazing load of new thoughts.

It's great to read you, best,

Bruno

Anshu Sharma

I have a slightly different take on how to divide up the problem space. I see computing problems dividing into two kinds:

- Continuous Computing
- Ad-hoc Computing

Allow me to explain. Continuous computing is a set of functions that need to be performed on an ongoing basis. This includes data routing, business object routing, search indexing, RFID & sensor tracking, etc. Continuous computing will be spread out over the network (done in the 'dispersed cloud') by computers (chips, storage, etc.) that are more or less autonomous. And this type of computing, although largely invisible to end users except as it impacts them indirectly, will be a large fraction of computing.

Ad-hoc computing, where we look to solve a problem in a smaller time window (an obvious example being pulling up a BI report on the last 3 quarters' revenue data), will require a more concentrated computation experience (or a perceived single computer). This includes search, email, and business applications like SCM, ERP, etc. In this domain, the E3G model would perhaps apply better.

But I do think that in order to come up with the 3G of computing, we need to think of both these classes of computing.
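A toy contrast between the two classes Anshu describes; the in-memory queue and index are hypothetical stand-ins (a real system would use durable queues and stores):

```python
# Continuous computing: an autonomous background indexer, largely invisible
# to users. Ad-hoc computing: a bounded, user-facing question answered on
# demand against what the continuous layer has already built.
import threading
import time
import queue

event_queue = queue.Queue()   # e.g. RFID reads, routed business objects
index = {}                    # continuously maintained index

def continuous_indexer():
    """Runs forever, dispersed and autonomous."""
    while True:
        event = event_queue.get()
        index.setdefault(event["sku"], []).append(event)

def revenue_report(quarters):
    """Ad-hoc: a one-off BI-style query over the indexed events."""
    return {q: sum(e["amount"] for events in index.values()
                   for e in events if e["quarter"] == q)
            for q in quarters}

threading.Thread(target=continuous_indexer, daemon=True).start()
event_queue.put({"sku": "A-100", "quarter": "Q1", "amount": 1200.0})
time.sleep(0.1)
print(revenue_report(["Q1", "Q2", "Q3"]))
```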

My 29 cents.


Rick Bullotta

I couldn't help but smile when I saw your chart above. A few years ago, I was trying to explain to a standards committee that application boundaries weren't as clear and consistent as they assumed. The diagram I used was very similar to yours, but was actually the top of Bart Simpson's hair. It came to be known as the "Bart Simpson diagram". ;-)

One other possible outgrowth of the "loosely coupled" and "ad-hoc" model is that applications will need to become more resilient. In many programming paradigms, exception handling code encompasses about 60-80% of the logic. Part of the empowerment will dictate that more of this be transparent and handled by the infrastructure and platform, rather than the application.
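One way to picture pushing resilience into the platform layer, as Rick suggests: the retry/backoff policy lives in a decorator the infrastructure provides, so the application function contains only business logic. The decorator, the stub ERP service, and all names here are illustrative assumptions, not any particular product's API:

```python
# Infrastructure-provided resilience: retries and backoff are transparent to
# the application code that uses the decorator.
import functools
import random
import time

def resilient(retries=3, backoff_seconds=1.0):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, retries + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == retries:
                        raise
                    time.sleep(backoff_seconds * attempt)
        return wrapper
    return decorator

class _FlakyErpStub:
    """Stand-in for a remote ERP service that fails intermittently."""
    def create_order(self, order):
        if random.random() < 0.5:
            raise ConnectionError("transient network failure")
        return {"status": "created", "order": order}

remote_erp_service = _FlakyErpStub()

@resilient(retries=5, backoff_seconds=0.1)
def post_purchase_order(order):
    # Application code: no try/except boilerplate, just the business call.
    return remote_erp_service.create_order(order)

print(post_purchase_order({"sku": "A-100", "qty": 3}))
```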

Extending Bruno's analysis of the "people axis" - it is somewhat like the layering of developers who write device drivers, vs. those who write OSes, vs. development tools, vs. applications, vs. reports, and so on.

The skills and "mission criticality" change dramatically as you move up and down that stack.

In our 3G world, the other skill segmentation that I hope we'll see is that of the "service authors" - whether they're data services or functional services, there will be a distinct "art" and knowledge base needed to design for consumability.

Party on.

SS

Shai,
It took me a day to assimilate what you were talking about. My ideas on these planes are ill-formed as well. But let me take a shot at responding to your post, hopefully in a meaningful way.

The 'fit-to-need' concept is definitely an interesting idea. Yes, it is where we are going. But I also believe that assuming a blueprint for a 'fit-to-need' architecture will come to be some day would be fundamentally wrong. I've made that error myself. I think there will be as many 'fits' as there are viable markets for 'fitting' products.

So, if we assume that 'fit-to-need' is where the market is going, how do we tap into this opportunity? I think a logical first step is to take a step back and investigate how the nature of information systems has evolved (or is changing) from E2 to E3.

E3 information systems are characterized by a number of traits that have never been encountered before. The work being done today by the Web 2.0 community addresses the challenges and opportunities created as a consequence of these traits. I can at least point to the following (incomplete) set of traits that characterize these new systems:
* Adaptive Architectures - As components with myriad characteristics become available on the internet and the cost of integrating with/consuming these components falls, architectures that are, as you call it, 'fit-to-need' become meaningful.

* Migratory Infosets - As the representation of information becomes self-describing and complete, it will increasingly have a tendency to replicate and migrate. In sum, data will have a life of its own (see the sketch after this list).

* Virtual or Discontinuous Processes - As internal processes become exposed through the internet via standard interfaces, process boundaries become meaningless. The notion of intranet-extranet-internet will give way to a 'ubiquitous net' or ubinet.

* Participatory User-experience - As access to information becomes 'free-for-all', the user experience becomes co-creative, collective, and collaborative. In sum, the user experience is democratized.
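A minimal sketch of what a "migratory infoset" could look like: a record that carries enough of its own description (schema pointer, identity, provenance) to be copied between systems without a pre-negotiated contract. Field names and the schema URI are illustrative assumptions:

```python
# Self-describing, migratory data: the record knows what it is, where it has
# been, and can be shipped to a new system intact.
import json
import uuid
import datetime

def make_infoset(payload, schema_uri):
    return {
        "@id": str(uuid.uuid4()),
        "@schema": schema_uri,              # self-describing pointer
        "@created": datetime.datetime.utcnow().isoformat() + "Z",
        "@provenance": [],                  # grows as the data migrates
        "payload": payload,
    }

def migrate(infoset, destination):
    """Copy the infoset elsewhere, appending to its own provenance trail."""
    copy = json.loads(json.dumps(infoset))  # simulate shipping it over the wire
    copy["@provenance"].append({
        "movedTo": destination,
        "at": datetime.datetime.utcnow().isoformat() + "Z",
    })
    return copy

order = make_infoset({"customer": "ACME", "amount": 4200},
                     "http://example.org/schemas/purchase-order")
replica = migrate(order, "partner-warehouse-system")
```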

I don’t think the list of traits here is in any way complete, so I urge readers to contribute their 10c or critique on these…

A next step would be to identify the gaps/limitations/hurdles in existing systems that prevent these traits, and to find new ways to innovate around those challenges. What would also be useful is a blueprint of the ecosystem of the E3 market that could possibly exist in the future.

... some interesting ideas evolving here. Keep the blog going guys!
--
SS

Rick Bullotta

One other macrotrend that we may see having a substantial effect on this computing evolution is the so-called "internet of things", combined with the extension of existing apps toward richer real-world awareness.

Whether it is a manufacturing planning module with live status of machines, your coffee maker automatically reordering pods, your vehicle interacting with your PDA/planner and the dealer to schedule service, or longer term, autonomous robots unburdening us of some of the realities in the atom world vs the information world (driving our cars, fighting our battles, running our factories), there are countless opportunities for enabling/extending people and applications with enhanced sensory and processing capabilities, real or virtual.

These are not new concepts by any stretch, but their omnipresence in our day-to-day lives will be transformational in ways we cannot yet imagine.

A brave new world, indeed...

Anshu Sharma

Rick: I agree with you - the internet of things in my model would fall under "continuous computing", and would by its very nature be dispersed. In fact, sensor-based computing would include not just an internet of things but also an internet of 'observations' like temperature, humidity, etc. So the cloud can be queried not only for things and their properties but also for their environment and location.
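A small sketch of that idea: a store of observations that can be queried by thing, by property, or by location. The Observation structure and filter function are illustrative assumptions, not a reference to any real sensor platform:

```python
# Querying the "cloud" for things, their properties, and their environment.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Observation:
    thing_id: str        # e.g. an RFID-tagged pallet
    property: str        # e.g. "temperature", "humidity", "location"
    value: object
    location: str
    timestamp: float

def query(observations: List[Observation],
          thing_id: Optional[str] = None,
          property: Optional[str] = None,
          location: Optional[str] = None) -> List[Observation]:
    return [o for o in observations
            if (thing_id is None or o.thing_id == thing_id)
            and (property is None or o.property == property)
            and (location is None or o.location == location)]

obs = [
    Observation("pallet-17", "temperature", 4.2, "warehouse-A", 1178668800.0),
    Observation("pallet-17", "humidity", 0.61, "warehouse-A", 1178668800.0),
]
cold_chain_checks = query(obs, thing_id="pallet-17", property="temperature")
```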
