Maybe text mining SHOULD be playing a bigger role in data warehousing
When I chatted last week with David Bean of Attensity, I raised a paradox with him:
Many people think text information is important to analyze, but even so data warehouses don’t seem to wind up holding very much of it.
My working theory explaining this has two parts, both of which purport to show why text data generally doesn’t fit well into BI or data mining systems. One is that it’s just too messy and inconsistently organized. The other is that text corpuses generally don’t contain enough information.
Now, I know that these theories aren’t wholly true, for I know of counterexamples. E.g., while I haven’t written it up yet, I did a call confirming that a recently published SPSS text/tabular integrated data mining story is quite real. Still, it has felt for a while as if truth lies in those directions.
Anyhow, David offered one useful number range:
If you do exhaustive extraction on a text corpus, you wind up with 10-20X as much tabular data as you had in text format in the first place. (Comparing total bytes to total bytes.)
So how big are those corpuses? I think most text mining installations usually have at least 10s of thousands of documents or verbatims to play with. Special cases aside, the upper bound seems to be about two orders of magnitude higher – i.e., millions of documents. And most text-mined documents probably tend to be short, as they commonly are just people’s reports on a single product/service experience – perhaps 1 KB or so, give or take a factor of 2-3? So we’re probably looking at 10 megabytes of text at the low end, and a few gigabytes at the high end, before applying David’s 10-20X multiplier.
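To make the arithmetic explicit, here’s a quick back-of-envelope sketch – all the numbers in it are my guesses from above, not anything measured:

```python
# Back-of-envelope corpus sizing. Document counts, sizes, and the 10-20X
# extraction multiplier are the guesses from the post, not measured figures.

scenarios = {
    "low end":  10_000,      # 10s of thousands of documents/verbatims
    "high end": 3_000_000,   # ~2 orders of magnitude more documents
}
avg_doc_bytes = 1_000        # ~1 KB each, give or take a factor of 2-3
multiplier = (10, 20)        # exhaustive-extraction blow-up, per David Bean

for label, docs in scenarios.items():
    text = docs * avg_doc_bytes
    lo, hi = (text * m for m in multiplier)
    print(f"{label}: {text/1e6:,.0f} MB of text -> "
          f"{lo/1e9:,.1f}-{hi/1e9:,.1f} GB tabular")
```

That prints roughly 0.1-0.2 GB of tabular data at the low end and 30-60 GB at the high end.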
Hmm – toward the high end, at least, that IS enough data for respectable data warehousing …
Obviously, special cases like national intelligence or very broad-scale web surveys could run larger, as the biggest MarkLogic databases attest. Medline runs larger too.
Comments
I have a much shorter theory:
Numbers make colorful graphs.
And graphs get you budgets.
I’m all for promoting scalable software for information access (since its sales help pay my bills), but I’m a bit skeptical of the 10-20x multiplier. A big motivation for performing information extraction / text mining is to *reduce* the text to a form with a higher signal-to-noise ratio. There will be overhead, but such a large blow-up suggests replication rather than mining.
Remember, we’re talking about exhaustive extraction here.
http://www.texttechnologies.com/2007/10/05/when-to-use-exhaustive-extraction/
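Here’s a toy sketch of why exhaustive extraction adds bytes rather than removing them – the schema is invented for illustration, and is not Attensity’s actual output format:

```python
# Toy illustration: one short sentence fans out into several annotated
# tuples under exhaustive extraction. The schema is invented for
# illustration; it is not Attensity's actual output format.

sentence = "The battery died after two days."

rows = [
    # (doc_id, sentence_no, slot, value)
    (4711, 1, "subject",   "battery"),
    (4711, 1, "verb",      "died"),
    (4711, 1, "modifier",  "after two days"),
    (4711, 1, "sentiment", "negative"),
]

text_bytes = len(sentence)
row_bytes = sum(len(str(field)) for row in rows for field in row)
print(f"{text_bytes} bytes of text -> roughly {row_bytes} bytes of rows")
```

Even this minimal schema already more than doubles the byte count, and real deployments also store offsets, part-of-speech tags, confidence scores, and the like per tuple – which is how the multiplier plausibly reaches 10-20X.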
Wow, what a bunch of buzzwords. By the way, you should learn a little bit about how text mining and information extraction are two different things.
The marriage of structured databases to text and other syntactically/contextually related data is becoming a requirement for companies in the web space. Much of the data held by such companies is formless, or is being built at a rate that obviates the traditional lifecycle management systems, which include structured data analysis and data modeling. Time to market is driving those processes out of the delivery cycle, making after-the-fact analytics very, very difficult indeed. This problem also manifests in web companies’ resistance to investing in software systems that use legacy licensing schemes.