
Project Issues – Enterprise Search

We now resume following my current Enterprise Search implementation project for a Medical Publisher. I'm going to bring you back up to speed with a review of the project issues and status as of mid-March.

  1. Does anyone out there sell what we want? An initial review of the vendor landscape was confusing, to say the least. Package prices ran from two thousand dollars to half a million. Vendors made many confusing claims and used inconsistent terminology, and a number of recent mergers and re-namings had further muddied the waters. This is characteristic of an emerging, unconsolidated marketplace, and a tough arena in which to make decisions.
  2. What is it that we want, again? We have in place a pretty good document set around requirements – Use cases of our major user categories; Ranked Benefits Analysis; a Business Requirements Specification that brings it all together. But what is important – what are the differentiators in the marketplace that will best match our requirements?
  3. Can we afford what we want? Early on, we had restricted the scope from a more inclusive Content Management and Search effort to Enterprise Search alone. This helps a lot, but the high-end Enterprise Search packages are still pretty pricey.
  4. What are the barriers to implementation? So we buy a package, and some installation and integration services. What are the danger points? What will come out of the box, what will we have to struggle for?
  5. What are the organizational barriers? My Medical Publisher Client has a number of publishing lines, each with unique content, content origination processes, and document and data formats. How long will it take to include everything? Who goes first? Why?
  6. To what extent can we minimize metatagging of our document collection? A key issue is the metatags on our documents that describe what each document is about. The problem is that these tags do not exist. Most vendors say the tags are not needed, that the search software does this for you. When pressed, some vendors admit that "of course" results are better with metatags. Tagging the document set is a big barrier for the client. Many vendors tout a mystery mix of Bayesian analysis and neural networks as their "special sauce" to eliminate the need for a human to describe the content of each document. Can we trust these claims?
  7. Lastly, the evaluation question. How do we evaluate the competing, confusing, and contradictory claims of the vendors? We can’t fully implement each package, and evaluate it. What is the minimum effort that will produce the maximum information?
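The Bayesian "special sauce" in item 6 is less mysterious than the vendors make it sound: at its core it is usually a classifier trained on documents that already carry tags, which then guesses tags for untagged documents. Here is a minimal sketch of that idea using naive Bayes with a toy, hypothetical training set (the documents, tags, and query below are all invented for illustration; a real publisher would train on its tagged back catalog):

```python
import math
from collections import Counter, defaultdict

# Toy training set: documents that already carry a human-assigned subject tag.
# (Hypothetical data -- a stand-in for a publisher's tagged back catalog.)
train = [
    ("cardiac arrest treatment with defibrillation", "cardiology"),
    ("heart failure and arrhythmia management", "cardiology"),
    ("skin lesion biopsy and melanoma screening", "dermatology"),
    ("eczema and psoriasis topical therapy", "dermatology"),
]

# Count word frequencies per tag.
word_counts = defaultdict(Counter)
tag_counts = Counter()
for text, tag in train:
    tag_counts[tag] += 1
    word_counts[tag].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def classify(text):
    """Pick the tag with the highest log-probability (naive Bayes, add-one smoothing)."""
    best_tag, best_score = None, float("-inf")
    for tag in tag_counts:
        # Prior: how common the tag is in the training set.
        score = math.log(tag_counts[tag] / sum(tag_counts.values()))
        total = sum(word_counts[tag].values())
        for w in text.split():
            # Likelihood of each word given the tag, smoothed so
            # unseen words don't zero out the whole score.
            score += math.log((word_counts[tag][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best_tag, best_score = tag, score
    return best_tag

print(classify("arrhythmia screening"))  # -> cardiology
```

The sketch also shows why the vendors hedge: the classifier is only as good as the labeled examples and vocabulary it learns from, which is exactly why results are "of course" better when humans have done some tagging first.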

This is a summary of the issues that I faced in moving the project ahead in March. In terms of status, it was time to go to the marketplace and understand the vendor offerings.
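One low-effort way to get comparable information out of competing vendors is a shared test: run the same small query set against each candidate engine and score the results against human relevance judgments. A minimal sketch of that scoring, using precision-at-k over hypothetical result lists (all document IDs and judgments below are invented):

```python
def precision_at_k(results, relevant, k=10):
    """Fraction of the top-k results that human judges marked relevant."""
    top = results[:k]
    return sum(1 for doc in top if doc in relevant) / len(top)

# Hypothetical relevance judgments for one test query.
relevant = {"doc1", "doc4", "doc7"}

# Hypothetical ranked result lists from two candidate engines.
vendor_a = ["doc1", "doc2", "doc4", "doc9", "doc7"]
vendor_b = ["doc3", "doc1", "doc5", "doc8", "doc2"]

print(precision_at_k(vendor_a, relevant, k=5))  # -> 0.6
print(precision_at_k(vendor_b, relevant, k=5))  # -> 0.2
```

A few dozen judged queries over a representative document sample is far cheaper than a full implementation of each package, yet it turns vendor claims into numbers you can rank.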

Posted on Monday, January 9, 2006 at 08:53PM by Larry Cone
