Posted by: mcgratha | May 15, 2011

Why most documents don’t emigrate

Most of the Electronic Document and Records Management System (EDRMS) projects that I’ve worked on, or have had visibility of, have worked on the premise that they needed to migrate their old documents from legacy systems (usually shared network folders) on to the new EDRMS. This is not surprising considering the issues and challenges that most organisations have with managing their business documents in shared network folders (covered in my previous post).

However, despite all the good intentions to migrate, when it came to it, most of these organisations actually performed very little document migration. Perhaps the realisation hit home about everything that needed to be done to get those documents migrated, and the challenges, effort and cost involved, and they questioned whether there really was a business case for doing it. In the end, most only migrated the minimum number of documents.

It is as if the documents were attempting to emigrate into “ECMland” from their shared folder homeland, with most of them being turned away at the border. Sorry, you are not coming in: you’ve not gained enough “entry points” and you are simply too much hassle for me to process (see the key challenges discussed below).

[Image: Document arrivals]

There are, of course, many organisations that have performed very large document migrations, and continue to do so. However, there are also many organisations that shy away from document migration and this post reflects my observations as to why this might be so.

Key challenges to document migration

Marc Fresko has written a good paper on the what, why, how, who, when and where of document migration. Focusing on the “what”, the starting point of any document migration exercise is usually an “as is” inventory or information audit, whose purpose is to build a picture of the type, usage, ownership and volume of documents across the shared folders. Many EDRMS vendors and third parties (such as www.foldersizes.com) provide tools to assist in this process by crawling and analysing the shared folders, reporting on what they find, identifying potential duplicates and recommending which documents could potentially be cleansed (retired / deleted / moved elsewhere).
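
To give a flavour of what these inventory tools do under the hood, here is a minimal sketch that crawls a shared folder tree and flags potential duplicates by content hash. The share path is a placeholder, and production tools use far more sophisticated heuristics than this:

```python
import hashlib
import os
from collections import defaultdict

def crawl(share_root):
    """Walk a shared folder tree and group files by content hash."""
    by_hash = defaultdict(list)
    for dirpath, _dirnames, filenames in os.walk(share_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.md5(f.read()).hexdigest()
            by_hash[digest].append(path)
    return by_hash

# Report potential duplicates as candidates for cleansing
for digest, paths in crawl(r"\\fileserver\shared").items():
    if len(paths) > 1:
        print(f"{len(paths)} copies: {paths}")
```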

Whilst there is certainly some manual review and validation work required to get that overall “what documents do we have” perspective, it falls well short of the level of manual work required to identify and decide which documents to actually migrate, and to prepare them for the migration.

For example, some of the common challenges include:

  • Version Control – Finding different versions of the same document scattered across shared folders, identifying which is the latest version and configuring the migration tool to upload the documents in the right order (so that all the versions are uploaded in sequence);
  • Metadata – Documents need to be classified and tagged with appropriate metadata on upload into the EDRMS. This can involve significant manual work, although it is usually possible to semi-automate aspects of the process. For example, metadata can be inferred from the folder structure in which a document is stored or from the file naming conventions used, property fields can be extracted from Office documents (and mapped to metadata fields), and classification tools can analyse the contents of a document and assign a “best-guess” metadata value (see the sketch after this list);
  • Security and Access Control – Rules can be set up to assign appropriate security and access control to documents on upload into the EDRMS. However, it can be risky to do this unilaterally, so some manual intervention will still be required;
  • Relationships and Dependencies – Many documents may have embedded links within them to other documents on shared folders. Once the documents are migrated, these links will no longer work unless the migration tool is clever enough to automatically re-link them to the associated (and also migrated) documents in the EDRMS.

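As a concrete illustration of the semi-automation mentioned in the Metadata item above, here is a minimal sketch that infers metadata from a document’s folder path and file name. The folder layout, naming convention and metadata fields are all hypothetical:

```python
import re
from pathlib import PurePosixPath

# Hypothetical convention: /<department>/<project>/<name>_v<version>.<ext>
FILENAME_PATTERN = re.compile(r"(?P<title>.+)_v(?P<version>\d+)\.\w+$")

def infer_metadata(path):
    """Derive upload metadata from where a document lives and what it is called."""
    parts = PurePosixPath(path).parts
    meta = {"department": parts[1], "project": parts[2]}
    match = FILENAME_PATTERN.match(parts[-1])
    if match:
        meta["title"] = match.group("title").replace("_", " ")
        # A numeric version also lets the migration tool upload versions in sequence
        meta["version"] = int(match.group("version"))
    return meta

print(infer_metadata("/Finance/ProjectX/Annual_Report_v3.docx"))
# {'department': 'Finance', 'project': 'ProjectX', 'title': 'Annual Report', 'version': 3}
```
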
Although good document migration tools can significantly ease the burden of migration, the overall resource effort required to analyse, design and implement the migration can still be quite significant. The work also requires a lot of “air-time” with the business owners of the documents, unfortunately often the type of people who have little free time to give, which can add further complications and delays to the migration process.

Reality check

Faced with the challenges, effort and costs involved in performing a large-scale document migration, many organisations decide to scale back on the scope and number of documents to migrate.

The rationale is that it isn’t worth it when you consider that, for most organisations, only a small subset of documents, usually the most recent, are accessed on a regular basis. This concept is illustrated in the diagram below.

[Diagram: the long tail of document access]

An approach often taken

As a compromise to a full migration, the approach that I’ve often seen taken involves migrating frequently used / important documents up-front, leaving the remainder in the shared folders and progressively migrating them on demand (as and when required).

Although less intensive on manpower, this approach does come with some complications:

  • There may be people and process issues around the co-existence of migrated documents in the EDRMS and non-migrated documents still in the shared folders. This needs to be mitigated by careful end-user training with clear operational guidance;
  • What happens to the shared folders? Most organisations tend to make the (designated) shared folders read-only but indexed by a search engine, with the goal that all new business documents are stored and managed in the EDRMS and with a longer term view to decommission the shared folders.

Out of interest, I know of some organisations that decided not to migrate any documents, starting completely fresh in their new EDRMS.

I would welcome your feedback on what I’ve written in this post, and on the approach that you, or your customers, have taken regarding document migration. Arlene Spence provides an interesting discussion on the handling of shared drives in her Scrubbing Content series of posts.

Posted by: mcgratha | April 24, 2011

The Problem with Shared Network Folders

Despite the year-on-year increased uptake of ECM systems, Shared Folders (also called shared network drives) are still widely used by organisations to store important business related documents.

The thing is that they are really quick, easy and convenient to use, which is why so many users continue to use them. However, they also impose a number of limitations in terms of how documents are organised, managed and shared.

This post outlines these limitations and further details the resulting points of impact that I’ve observed over the years across many client projects.

I am writing this post because the challenges of managing documents scattered across Shared Folders come up in almost every ECM project that I work on, and I am curious to understand other people’s perceptions of these challenges.

Key limitations of Shared Folders

Shared Folders lack much of the essential functionality that is required for organisations to effectively manage and leverage their business related documentation. For example, Shared Folders have no (or limited) capability to:

  • Assign metadata to classify and tag documents that would enable those documents to be more effectively found;
  • Exercise any formal version control over documents (specifically for Windows operating systems);
  • Record an effective audit trail of who has accessed or changed a document;
  • Enable documents to be flexibly and easily secured at a granular level;
  • Enforce a consistent folder structure;
  • Facilitate publishing/distribution of documents from a single source to multiple channels;
  • Enable users to be notified when certain types of documents are created or changed;
  • Restrict duplication of documents;
  • Allow documents to be easily linked and cross referenced.

Impact

In my view, the pros of Shared Folders are outweighed by the cons: their functionality limitations unfortunately impose a number of issues and challenges around the management of business related documents. Here is my take on what these issues are:

[Diagram: Shared Folder problems]

Considering these issues in more detail:

  • Duplication of documents and confusion as to what the latest version is – The widespread use of Shared Folders inevitably leads to significant duplication of documents across the organisation, with the same documents being stored many times, by different people in different folders. This makes it difficult to tell whether an existing copy of a document is the latest or final version, leading to confusion as to where the ‘single version of the truth’ lies and who the owner of the document is. The result is that people often access and read the wrong version or copy of a document, make decisions based on the wrong information and potentially release the wrong information, which can have damaging effects in terms of cost and company reputation;
  • Complex file and folder naming conventions – Lack of version control and the inability to associate metadata with documents mean that complex/unwieldy file and folder naming conventions often need to be used. For example, <department code>_<project code>_<descriptive name>_<date>_<version>;
  • Lack of consistent folder structures – Although Shared Folders can be organised and structured in a consistent manner, over time, without strict procedures and controls in place, the folder structures tend to develop inconsistencies and become less manageable. This makes finding documents more difficult, often forcing people to re-create documents (or extracts of content) from scratch;
  • Redundant documents – It isn’t practical to apply expiry dates to documents on Shared Folders. As a result, the volume of documents can become unnecessarily large (increasing storage costs and making finding documents more difficult) as many documents are retained that are no longer used and that should really be archived or deleted;
  • Ineffective search – Having a mixture of Shared Folders for storing and accessing documents, without a means to centrally index and manage those documents, means that search and retrieval can be extremely time consuming and often ineffectual. Searching is limited to full-text search (as there is no metadata on documents with which to intelligently filter the results), and searches also run against old versions of documents (not necessarily something that you would always want to happen);
  • Inaccessibility of information
    • Difficulty in sharing information – There is no easy way to share documents with individuals in other departments/divisions as they often cannot access each other’s Shared Folders. This means that finding cross department/division information can be very time consuming;
    • Information lockdown – With Shared Folders, it is hard to apply proper security policies and access permissions to sensitive information, resulting in a lockdown approach that limits the ability of users to locate and leverage the information they need without going through other channels, reinforcing silo working practices;
    • External Access – There is no external or mobile access capability, necessitating the manual transfer of any documents that need to be shared externally to a web site (e.g. an Extranet);
  • Lack of subscription and notification – Inability to push information out to people automatically to notify them of changes to specific documents or types of documents;
  • Limited ability to synchronise documents offline – Typically, when people need to work on documents away from the office, they copy them from the Shared Folder on to their laptop. However, there is no way to ‘lock’ a document to prevent others from changing it in the shared area, nor any automatic means of checking for change conflicts when copying the document back on to the Shared Folder. It is not practical to synchronise documents offline, which means that there is a chance of mistakes occurring and document changes being overwritten;
  • Inability to cross reference and relate documents – Using Shared Folders makes it tedious and difficult to create and maintain relationships between documents. For example, if looking at a document about a ‘product’ in one folder, it might be useful to relate that document to other documents concerning the same product that are perhaps stored in different folders. The way this is often done is by inserting a hyperlink into the original document that points at the related document. However, if the related document is moved or renamed, the hyperlink (and hence the relationship) is broken;
  • Lack of document governance and control – When Shared Folders are used, it is not possible to track and report on how documents are created, routed, changed, approved and distributed, both internally and with external bodies. Instead, the associated processes tend to be ad-hoc in nature, with no audit trail, very little control and governance, and too much manual activity, allowing many people to do things differently and increasing the risk of things going wrong;
  • Compliance, Risk & Legal Admissibility – The use of Shared Folders creates many difficulties in achieving compliance with regulatory mandates, because the requisite information resides in disparate repositories and there is no means to prove its authenticity and integrity. For example:
    • Handling Freedom of Information Act Requests – In the UK, responding to Freedom of Information (FOI) Act requests can be very time-consuming and costly, especially if the required information is scattered across Shared Folders;
    • Legal admissibility – Scanned images of documents that are stored in Shared Folders may not be legally admissible in a UK court if they don’t meet the BIP 0008 standard (the UK code of practice for legal admissibility and evidential weight of information stored electronically);
  • Impeded collaboration – Shared Folders have very limited capability to support collaborative team working;
  • Storage and maintenance costs – Growing volumes of documents across Shared Folders, with large amounts of duplication and many versions, result in higher costs for storage, backup and maintenance. Although it may be argued that the cost of storing electronic documents isn’t necessarily prohibitive, it is in the management, searching and effective use of these documents where the greatest costs and associated inefficiencies lie. Hence, throwing more storage at the problem doesn’t really help.

Conclusion

Shared Folders are easy and convenient to use, but come with an ‘information health warning’.

You might think that, with all of the challenges outlined above, most organisations would be clamouring to get their documents out of Shared Folders and into ECM systems. However, that doesn’t seem to be happening, and I will explore why in my next post.

Posted by: mcgratha | April 8, 2011

The impact of social media on ECM

I presented at Intellect yesterday on the Impact of Social Media on Enterprise Content Management (ECM).

I’ve been working in the ECM industry for about 11 years now and I think that Social Media, Content Management Interoperability Services (CMIS) and the related mobile apps marketplace represent some of the biggest catalysts for change that I’ve seen in the industry.

I’ve uploaded my presentation to Slideshare where it can be found and downloaded here.

Other presenters at Intellect yesterday were Thomas Power (chairman, Ecademy), Jonathan Beardsley (Senior Solutions Consultant, Open Text) and Malcolm Beach (Senior Consultant, In-Form Consult).

Posted by: mcgratha | March 18, 2011

The virtues of a Model Office

This post discusses how a ‘Model Office’ approach to requirements capture, design, build and test can play a significant role in improving the success of an Electronic Document and Records Management System (EDRMS) implementation. As always, I would very much welcome any comments, observations, feedback.

What’s up Doc?

[Image: waterfall]

The success of any EDRMS implementation is heavily dependent on the up-take of the solution and buy-in from the user base. However, I think that this success is too often put at risk when implementing EDRMS solutions based on the more traditional ‘Waterfall’ approach, which performs the requirements capture, design, build, test and deployment in discrete and largely independent stages.

The problem with the Waterfall approach stems from its source: the requirements capture. For many organisations, a large proportion of the user base will have little or no previous experience of using an EDRMS solution. Even those users who might consider themselves reasonably knowledgeable about what EDRMS is all about are unlikely to be familiar with the latest developments in EDRMS technology, best practices and process.

As such, when it comes to requirements capture, there is a fundamental barrier to overcome in that most users don’t know what they don’t know. How, then, can we expect them to provide a set of requirements that will form an accurate basis for the subsequent design, build, test and deployment? I don’t think we can; there is too much room for misinterpretation. Their expectations of what the solution will provide might be very different from what we capture as requirements. And it is a bit late to find this out several months down the line, when they get sight of the solution for the first time. A humorous analogy that alludes to this problem was written by Lee Dallas in his post WikiLeaks, Expectations and ECM.

In my view, the best way to overcome this barrier is to give users the opportunity to visualise the technology in action and give them time, with the help from experts, to understand how they can best use the technology to meet their specific business needs. Let the requirements naturally evolve based on true understanding of the technology and the impact of introducing it on people and existing processes.

This is the approach that the Model Office takes, overcoming many of the shortfalls of a Waterfall approach, and is the subject of this blog post.

The 2 Franks – my way and the right way

Achieving high user take-up of a new EDRMS solution means addressing and conveying the value that the new solution will bring at both a personal level (“what’s in it for me?”) and an organisational level.

The best way to do this is to get users that represent different parts of the organisation involved at the outset of the EDRMS programme of work such that the solution is designed and shaped by the people who will use it.

If this is not done, then the chances are that the solution won’t be representative of what the users want, and many will find ways to passively resist using it, doing things their own way, ‘Frank Sinatra’ style. It is so much better to catch potential issues, mistakes or ambiguities around requirements as early as possible in the programme, where they can be resolved before the build commences. From an architecture perspective, Frank Lloyd Wright once spoke about it being easier to use an eraser on the drawing board than a sledgehammer on the construction site.

This is where the Model Office comes in.

[Image: the two Franks]

What is a Model Office?

The Model Office represents a controlled ‘laboratory’ environment (one that doesn’t impact the operational business) that facilitates a rapid, agile and highly effective means to capture and agree requirements with key users across different parts of the organisation. It is an ideal means to refine the solution to meet the specific needs of the users that will be using it, allowing them to visualise how the solution can be designed to meet their requirements and new ways of working, and what the ‘wow factors’ and ‘quick wins’ might be to tip the balance in favour of wider user adoption. Importantly, it also identifies the requirements that really matter to end-users (rather than all requirements being mandatory) and ensures that the requirements stick to the overall business vision and objectives, avoiding going ‘off-piste’.

The approach follows an iterative loop around requirements capture, design, and build until the solution reaches a completion state (within the time-box parameters) that end-users are happy with. This gives users a sense of ownership for the solution with the opportunity to shape how it is built and rolled out. It additionally allows users to get a firm grip on the business, people and technology aspects of the overall solution. This is essential as technology plays the minor role in an EDRMS implementation; the major role being business change.

The Model Office is focused on producing a standard implementation blueprint that can be rolled out, with localised configuration, across the organisation. Processes, tools and material for business change activities such as training, migration, and new operational procedures can all be trialled during the Model Office.

By its very nature, a Model Office enables more of an ‘agile’ approach (i.e. incremental delivery of value over shorter iterations) to be taken to requirements capture, design and build. This gives it significant advantages over a traditional Waterfall approach.

[Diagram: the Model Office approach]

Benefits

From an outcomes perspective, the key benefits from a Model Office tend to fall into two categories:

  • De-risking the project, giving it a much greater probability of being a success:
    • Allows users to see what works and what doesn’t work, with constant feedback from users enabling issues to be caught and addressed as early in the process as possible.
    • Users understand why they need it, how they should use it, feel ownership for it and buy into it;
    • More rounded solution based on addressing practical requirements and built in line with key business vision and objectives;
    • Business change aspects explored, refined and agreed prior to wider roll-out;
    • Allows training of super-users to be grounded on practical experience gained from working on the development of the Model Office;
    • Enables a comprehensive evaluation of the product, design and most business change activities without impacting the operational business;
    • Identifies the best pilot area(s) to take the solution on in an operational capacity;
    • Forges a bridge between IT/IS and the business as the Model Office will require staff from IT/IS and the business to work closely together;
  • Shorter overall programme timescales:
    • The Model Office will typically yield a 5%-10% reduction in overall implementation timescales;
    • User Acceptance Testing (UAT) is fast-tracked; it is often little more than a formality, since key users will have been involved in the design and build right the way through the Model Office;
    • In general, a Model Office should result in far fewer issues and encounter far less resistance to change during roll-out, all resulting in shorter implementation timescales.

There are, of course, some implications to be aware of when adopting a Model Office approach. For example, it will require key users representing different parts of the organisation to have availability to proactively participate in the Model Office, potentially away from their desks with some disruption to their daily activities. It will also require IT/IS support, infrastructure, space and facilities.

Nevertheless, I firmly believe that the benefits of reducing project risk and shortening the overall timescales that the Model Office provides will significantly outweigh any implications.

Transition to a Future Office

After deployment, the Model Office could then transition into a Future Office to examine how the solution can further evolve and be enhanced going forwards. For example, it could be used to explore the wish-list of requirements that were not feasible to include in the initial pilot and roll-out waves. In most organisations, the functionality provided during the initial roll-out is typically kept fairly simple (without all of the ‘bells and whistles’), as there will be enough challenges in rolling out the solution without trying to cater for every requirement. The Future Office could be used to test new ideas around technology, process and people. Analogous to the ‘Ideal Home Show’ that runs annually in the UK, the Future Office could represent an ‘Ideal EDRMS Show’, except that it runs every day.

Posted by: mcgratha | February 18, 2011

There is an ECM App for that

I foresee significant changes on the horizon in how we build ECM solutions. These changes will start bubbling up over the next 2 years and will accelerate in years 3-5, at which point how we build and assemble ECM solutions will be very different from how we do it today.

I think that the catalyst behind these changes will be the ‘Apps’ concept, specifically Apps created for ECM and more than likely based on the Content Management Interoperability Services (CMIS) standard.

I initially raised this idea in my Spawning of a new construction boom post. In this follow-on post, I further explore how the ECM Apps concept might play out and consider its implications.

The Mobile App Market

The growth in the number of apps developed and downloaded in the mobile market (primarily Apple and Android) has been truly extraordinary. The Apple App store has been live for just over 2½ years and there are already over 350,000 apps available, which between them have been downloaded over 10 billion times.

[Diagram: the mobile apps market]

The combined effort behind the development of these apps in such a short period of time is quite staggering. No one company (including Apple) could ever dream of achieving such a result. And the great thing is that all of these apps can plug and play into your iPhone (and most of them into your iPod Touch and iPad).

What I find intriguing to consider is whether a similar pattern to the development of the mobile app market could be replicated in the ECM market (though obviously in smaller numbers). Then consider it going one step further: the same ECM app running on multiple ECM products, the equivalent of the same mobile app running on an Apple iPhone, Android phone, Blackberry and Windows Phone (rather than being developed separately for each mobile platform). If this could happen, then wow, there is going to be one hell of a shake-up in how ECM solutions are built and assembled.

The ECM App Market

There is already a large and thriving developer community across most of the leading ECM vendors, from the larger mainstream ECM vendors such as Open Text, Oracle, EMC and IBM to the open source vendors such as Alfresco and Drupal. Developers in these communities actively develop and share (and sell) code snippets, components, modules and applications that are designed to work with their ECM of choice. The obvious limitation here is that the software developed can only be deployed on to the ECM system that it was designed for.

However, CMIS introduces the possibility of developing software for ECM once and reusing it many times. As such, it is not unreasonable to think that a logical next step across the ECM development communities would be to gradually transition their software to support CMIS and then upload it into a wider ECM marketplace. This idea is illustrated in the diagram below.

[Diagram: ECM apps impact]
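
To make the ‘develop once, reuse many times’ idea concrete, here is a minimal sketch using the Apache Chemistry cmislib Python client. The endpoint URL, credentials and query are placeholder values, and the exact API details may vary between cmislib versions:

```python
from cmislib import CmisClient

# Connect to any CMIS-compliant repository over the AtomPub binding
# (hypothetical endpoint and credentials)
client = CmisClient('http://ecm.example.org/cmis/atom', 'admin', 'secret')
repo = client.defaultRepository

# The same CMIS query could run unchanged whether the back-end is
# Alfresco, Documentum, FileNet or any other CMIS-compliant system
results = repo.query("SELECT * FROM cmis:document WHERE cmis:name LIKE 'Contract%'")
for doc in results:
    print(doc.properties['cmis:name'])
```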

There would need to be a critical mass of ECM apps available in the marketplace to create a compelling reason for organisations to alter how they build and assemble their ECM solutions. This critical mass might not come about until the next major version of CMIS is released (a couple of years away?). However, when it does happen, we could find:

  • ECM apps being used to plug gaps in vendor ECM products and to enhance/replace existing product functionality with better functionality provisioned through the app;
  • ECM systems connecting with enterprise business systems such as SAP and Oracle eBusiness Suite through common ECM app adapters;
  • New, sleek and sophisticated user interfaces available for ECM products, leveraging multiple ECM apps and running entirely independently of the underlying ECM system;
  • A significant increase in the number of discrete business applications developed (due to the economies of scale of developing once and selling many times), providing functionality such as Case Management, Freedom of Information discovery and Contract Management.

Some of the key implications that spring to mind should this all happen are:

  • End-user organisation – Reduced costs, faster development, greater agility, less risk, greater focus on business needs;
  • ECM Vendors – Pricing models will need to change, significant increase in SaaS and cloud based solutions, many ECM vendors will need to switch their focus to provide products that move up the information value chain;
  • System Integrators – Opportunity to re-use development effort from previous projects and assemble into more innovative solutions.

If you’ve taken the time to read this post then thank you, and I would really love to hear your views on what I’ve written.

Posted by: mcgratha | January 31, 2011

Social Media company valuations – am I missing something?

A lot has been written recently in both the press and the blogosphere about the huge market valuations being touted for the new wave of social media companies. This has led to inevitable comparisons with the dotcom crash of 2000-2001. From my perspective, although I can appreciate the tremendous value and opportunities that many of these social media companies offer, I find it hard to reconcile this with their barely credible valuations.

Am I missing something that would explain how these companies are being valued so highly? I am not a financial analyst, far from it, but from my outside perspective, it just feels wrong.

Let’s look (see diagram below) at the valuations being given to some of the key social media companies in the market and consider these in comparison to other well known and established companies.

[Diagram: market valuations]

What is apparent is the scale of the valuations of these social media companies, given that they have all only been in existence for a small number of years. For example, Facebook’s valuation of $50B is bigger than Aviva, on a par with Tesco and close to Barclays.

Looking at the revenue numbers for these social media companies, it is clear that their valuations are based on a figure that is many, many times their revenue. See table below.

[Table: valuation as a multiple of revenue]

To put this into perspective, both Apple and Microsoft have market valuations of roughly 4 times revenue. Google’s valuation is about 7 times revenue, and Tesco’s is actually about 0.6 times revenue. The profits of many of these social media companies are also very low, especially when compared to their valuations.

Unlike in the dotcom crash of 2000-2001, the social media companies discussed in this post are significantly more advanced than most of their failed counterparts from that era – they have established and very large customer bases, and they are turning a profit with lots of scope to grow. Nevertheless, their very high valuations would seem to be based on notional expectations of future growth and profit … unless I am missing something? It is not clear whether their existing user numbers and engagement will translate into a sustainable, longer-term business model (one that matches up against their valuation).

The barrier to entry in this market is low, especially with new cloud computing models that allow companies to grow by ramping up computing capacity very quickly, in line with customer demand, and paying only for what they use. For example, a few years ago no one had heard of Groupon, and now they are being valued at $15B, having turned down acquisition offers from Yahoo and Google for $3B (Oct ’10) and $6B (Nov ’10) respectively. New competitors can emerge very quickly in this market, which could easily make a significant dent in the valuations discussed in this post.

Perhaps I am more sceptical than I should be. However, during the dotcom era I worked for marchFIRST, which went from zero to 10,000 employees and back to zero in just over one year.

If something seems too good to be true …

Posted by: mcgratha | January 19, 2011

Future of ECM – the next 5 years

I’ve recently written a series of posts on the future of ECM, identifying 13 trends that I’ve observed.

I’ve deliberately avoided trying to make predictions in these posts, instead focusing on what I see as some of the key emerging trends. I fully appreciate that my view of the trends will be far from an exhaustive list, so I would love to hear other people’s views on where the ECM industry is going.

These trends have now been brought together and published by Logica in a paper called ‘Future of ECM – the next five years’.

Posted by: mcgratha | January 17, 2011

ECM in the Cloud

Continuing the theme of the Future of ECM … final trend #13 …

This final blog post in my outlook on the future of ECM trends over the coming 5 years focuses on the deployment of ECM into the cloud.

The implementation of ECM solutions, probably more so with Document Management, is increasingly focused on configuration of the product rather than customisation. This is largely fuelled by ECM vendors continuing to provide a much wider and higher quality suite of functionality out-of-the-box with their products.

As such, I expect that many organisations will start to question why they are procuring ECM software up front, as a capital expense, when they are unlikely to customise it very much – see my blog post ECM as a commodity. Subscription-based licensing for ECM software (led by the example of open source vendors) will become increasingly common over the next few years. This will lead to greater demand for ECM Software-as-a-Service (SaaS) and Cloud based solutions.

Cloud computing is a style of computing where massively scalable IT related capabilities are provided as a utility service across the Internet to multiple customers, where capacity can be increased and decreased on demand in minutes (rapid elasticity) and the service is charged for using a ‘pay as you use’ model.

SaaS does not necessarily equate to cloud computing. It is better thought of as a subset of Cloud computing (as illustrated in the diagram below), although it can also operate outside of it. For example, an organisation might provide a business application as a distinct SaaS offering, but without the ability to rapidly scale capacity up and down on demand, as required in Cloud computing.

[Diagram: cloud model]

I expect that as more of the core ECM functionality is commoditised, and the new CMIS standard starts to open up and simplify access to content held across multiple ECM repositories, ECM Platform-as-a-Service (PaaS) implementations will become commonplace. This in turn will drive the availability of an increasing number of SaaS solutions (running on top of PaaS) that offer a specific configuration of ECM tailored to customer needs, in addition to providing key functionality areas such as Archiving and business applications such as Case Management.

As discussed in my blog post The spawning of a new construction boom, ECM solutions will increasingly be assembled from an ECM App library (utilising the CMIS standard) built on top of a core ECM engine and accessed as a ‘black box’. This approach lends itself to be naturally deployed into the cloud, as illustrated below.

[Diagram: ECM in the cloud]

The economics of the Cloud will drive this too. The cost of running systems in the Cloud can be significantly lower than on-premises computing – reportedly as little as 25% of the cost of a traditional on-premises deployment. Gartner predicts that by 2012, 20% of businesses will not directly own IT assets and will be using the cloud instead. In such scenarios, organisations would not need to care about which ECM product they are actually using, only about the functionality that the Cloud SaaS provides. This allows them to focus on the core business activities that drive value.

It is likely that many organisations will require the solution to run within a private cloud. This is due to security implications and the fact that ECM and SaaS applications will need to integrate with other business systems within the organisation. In some cases, the private cloud will also need to store information in specific country locations (and not ‘anywhere out there in the Cloud’) due to legal restrictions on where information can be stored.

Not every organisation will choose to use an external supplier to provision their private cloud. However, I do expect that most of them will increasingly adopt cloud principles within their own internal IT departments to organise and manage their ECM services.

Posted by: mcgratha | January 10, 2011

Component Content Management

Continuing the theme of the Future of ECM … trend #12 out of 13 …

We are all used to working with bulky, monolithic documents where individual sections or paragraphs are often worked on by different people. Once published, a common problem with such documents is that some of their content quickly becomes out-of-date. An example is where content is copied and pasted into a section of a document from another source that has its own independent ownership structure and life-cycle. When the content in the source changes, the newly published document, as a whole, becomes stale.

This blog post looks at some of the trends in ECM over the next five years that will seek to address problems such as this, with a focus on Component Content Management.

General approach to solving this problem

Solving this problem requires a more component-based content management approach, where individual components of content are managed separately and assembled dynamically into a document for publication. When individual source components of content change, it is possible to alert the owners of documents that have used the source content to let them know that their document may now be stale and might therefore need to be reviewed and updated. It is also possible to automatically reassemble previously published documents using the new source content, passing them on to the document owner for review, approval and republishing as part of a workflow process.
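
As a minimal sketch of this assemble-and-alert approach (the component model and staleness check are illustrative, not any particular product’s API):

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    version: int
    body: str

# Each component is managed separately, with its own owner and life-cycle
library = {
    "intro": Component("intro", 2, "About our product..."),
    "pricing": Component("pricing", 5, "Current price list..."),
}

def assemble(component_names):
    """Assemble a publication dynamically and record which versions it used."""
    used = {n: library[n].version for n in component_names}
    text = "\n\n".join(library[n].body for n in component_names)
    return text, used

def stale(used_versions):
    """Flag a published document whose source components have since changed."""
    return [n for n, v in used_versions.items() if library[n].version > v]

doc, versions = assemble(["intro", "pricing"])
library["pricing"] = Component("pricing", 6, "Revised price list...")
print(stale(versions))  # ['pricing'] -> alert the document owner to review/republish
```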

This approach should seem familiar to many readers as it is the usual approach taken for web content management, where the content of a web page is dynamically assembled from many content sources and updated on the fly as necessary. The same principles could be applied to document management. I believe that a component based content management approach to document management will become more popular over the coming five years. Indeed, Open Text recently acquired output management and document composition vendor StreamServe, which enables content to be assembled based on rules and distributed to multiple channels.

DITA

From a vendor independent perspective, Darwin Information Typing Architecture (DITA) is a good example of this trend. It defines a topic-based approach to modular authoring, enabling content to be flexibly reused, assembled and published across different formats and media. Introduced by IBM in 2001, DITA was then released to the open community and made an OASIS standard in 2005.

The DITA standard quickly gained acceptance as an excellent content management approach for technical documentation. There are now efforts underway by OASIS to adapt DITA as a modular content framework for enterprise business documents that go beyond technical content. These efforts have come about as more and more organisations (such as pharmaceutical and medical device manufacturers; healthcare service providers and hospitals; high-tech companies; financial institutions; and governments) move to utilise structured XML content. A growing number of these organisations have come to believe that DITA not only provides the best basis from which to start addressing their requirements for narrative business documents, but one which will also help them to achieve their goals faster and in a standardised manner. One of the business drivers behind this initiative is that organisations want to leverage the intellectual property that is currently locked within narrative documents. They want to share and personalise the content for different audiences and channels by enabling much more powerful search and retrieval services based on granular topics rather than whole books.

Going a step further with Object based storage models

In all likelihood, ECM will move towards an object-based storage model for documents (and indeed all content) instead of storing documents in a traditional file system. This might be the final catalyst in the switch to a component-based content management approach for the creation, management and distribution of all content. It will mean that there is no longer any real differentiation between structured and unstructured content – all content will be absorbed into an object-based storage model. For example, a typical blog post is made up of several objects: the blog text itself, attachments, comments, metadata and tags, all linked to each other and handled using a common approach.

A potential downside of traditional ECM systems is that the metadata is stored in a database. Usually there is complete separation between the metadata and the original content item, with programming logic required to link the content item to its metadata. In an object-based storage model, objects will come with the ability to describe themselves (perhaps using RDF and OWL, as described in my blog post Semantonomics), with the associated semantic metadata URL-accessible. This will enable greater flexibility in how dynamic relationships are discovered between objects. For example, imagine you are reading a piece of content – a Word document, web page or blog. It could dynamically display a link to other related objects, which could be within the ECM or even across other external sources. It could also display links to people who have skills associated with the content or people who have shown an interest in it. I talk more about this in my blog post The Collaborative Office.
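
As a rough illustration of a self-describing object, here is a minimal sketch using the Python rdflib library; the namespace, URIs and properties are hypothetical, not a real content model:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DC, RDF

# Hypothetical namespace for an organisation's content model
ECM = Namespace("http://example.org/ecm/")

g = Graph()
doc = URIRef("http://example.org/ecm/documents/1234")

# The object describes itself - type, title, author and a typed link to a
# related object - as URL-addressable triples, with no separate database row
g.add((doc, RDF.type, ECM.Document))
g.add((doc, DC.title, Literal("Q3 Product Specification")))
g.add((doc, DC.creator, Literal("A. Author")))
g.add((doc, ECM.relatedTo, URIRef("http://example.org/ecm/documents/5678")))

print(g.serialize(format="turtle"))
```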

Object-based storage models might require a move towards using XML databases as the underlying storage, described next.

XML Databases

XML database vendors, such as MarkLogic (www.marklogic.com), are gaining traction in the ECM marketplace with tools specifically optimised to deal with XML content. One of the primary reasons for this is that relational databases (the backbone of all leading ECM products) are unsuitable for the storage and management of XML content. Applying relational theory to non-rectilinear XML data quickly becomes very difficult to optimise; performance suffers and storage requirements increase significantly. Many organisations, especially financial organisations, are increasingly using XML as the format of choice for much of their data. For example, Financial products Markup Language (FpML) is an XML-based information exchange standard that is heavily used in the Financial Services sector for electronic dealing and processing of financial derivative instruments. To address this increased usage of XML, database products tailored to work with XML content, and optimised around its storage, manipulation and search, have come into the marketplace.
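
To illustrate why hierarchical XML sits awkwardly in rows and columns, here is a minimal sketch using Python’s lxml library to query a simplified, FpML-like fragment directly with XPath (the XML structure shown is hypothetical):

```python
from lxml import etree

# A fragment of deeply nested, FpML-like XML (hypothetical, simplified);
# shredding this into relational tables would need several joins per query
xml = b"""
<trade>
  <swap>
    <leg payer="PartyA"><rate type="fixed">3.5</rate></leg>
    <leg payer="PartyB"><rate type="floating">LIBOR+0.5</rate></leg>
  </swap>
</trade>
"""

root = etree.fromstring(xml)

# XPath navigates the hierarchy directly, with no joins required
for leg in root.xpath("//leg[rate/@type='fixed']"):
    print(leg.get("payer"), leg.findtext("rate"))
```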

Some mainstream ECM vendors have also started to explore the use of XML databases. For example, in 2007 EMC Documentum acquired the XML database vendor X-Hive, rebranding it as Documentum xDB. This was subsequently incorporated into Documentum as an optional XML storage/management component to complement (not replace) its traditional relational database storage. Over the next five years, I expect most of the other leading ECM vendors to adopt a similar approach. Some might even take the bold step of moving entirely to an XML database.

Posted by: mcgratha | January 6, 2011

Wider adoption of Information Rights Management

Continuing the theme of the Future of ECM … trend #11 …

This blog post looks at the increasing role that Information Rights Management (IRM) will play as a trend around ECM implementations over the coming years.

IRM is a technology that embeds digital rights into documents, offering an additional means of safeguarding them from unauthorised access and usage, especially when those documents are distributed outside the organisation. For example, it is possible to define who is allowed access to a document, where they are allowed to access it, for how long, and what they are allowed to do with it (such as open, modify, print, copy and paste). Just because someone has a copy of a document, therefore, doesn’t mean that they can do anything they want with it. On attempting to open the document, the person’s credentials are transparently validated against the IRM system, after which the embedded digital rights dictate what they are allowed to do with the document. Considering the recent Wikileaks debacle: had the documents exposed on Wikileaks been encrypted with IRM technology, the impact in the media might have been just a fraction of what it was.
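
As a rough illustration of the rights model described above, here is a minimal sketch of a policy check in Python. It is purely conceptual: real IRM products enforce rights through encryption and a central licence server, not application code like this:

```python
from dataclasses import dataclass, field
from datetime import datetime

# A hypothetical model of the rights embedded in a document
@dataclass
class RightsPolicy:
    allowed_users: set
    expires: datetime
    permissions: set = field(default_factory=lambda: {"open"})

def check_access(policy: RightsPolicy, user: str, action: str) -> bool:
    """Validate a user's credentials and requested action against the policy."""
    if datetime.now() > policy.expires:
        return False  # rights have lapsed (or been revoked centrally)
    if user not in policy.allowed_users:
        return False  # not an authorised recipient
    return action in policy.permissions

policy = RightsPolicy(
    allowed_users={"alice@example.org"},
    expires=datetime(2011, 12, 31),
    permissions={"open", "print"},
)
print(check_access(policy, "alice@example.org", "print"))   # True
print(check_access(policy, "alice@example.org", "modify"))  # False
```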

There are a number of IRM products on the market including Microsoft Rights Management, Oracle Information Rights Management, Adobe Rights Management and Gigatrust. Typically IRM comes as an add-on module that integrates into ECM systems. Most of them should automatically pick up the access privileges and rights that are configured within the ECM without having to duplicate them in the IRM. The digital rights can be embedded on upload and creation of the document in the ECM or dynamically when the document is published to, say, an Extranet site for distribution. Rapid centralised revocation of rights is possible and a full audit trail of access and usage is recorded.

Although IRM technology has been available for several years, its actual deployment in combination with ECM systems has been relatively low up until now. However, in a world of ever increasing document sharing and collaboration, a balance needs to be struck between the wider distribution of documents and the need to maintain security over them. This is why I expect IRM to play an increasingly important role in striking that balance, becoming more widely adopted by organisations in the coming years.
