Why The “Star Trek Computer” Needs Open Data…And Scotty, Too

By

Scotty

I love this post from Fogbeam Labs. Here’s a bit:

So given that, what can we say about the eventual development of something we can call “The Star Trek Computer”? Right now, I’d say that we can say at least two things: It will be Open Source, and licensed under the Apache Software License v2. There’s a good chance it will also be a project hosted by the Apache Software Foundation.

Their rationale? The ASF provides an awesome array of advanced technologies, everything from NLP, information extraction and retrieval, and machine learning to the Semantic Web, and on and on. It’s like a free, all-you-can-eat buffet! (er, Star Trek food synthesizer?)

I share their enthusiasm. We use many of these technologies at Primal.

But this is where their science fiction story starts to lose me:

Of course, you don’t necessarily need a full-fledged “Star Trek Computer” to derive value from these technologies. You can begin utilizing Semantic Web tech, Natural Language Processing, scalable machine learning, and other advanced computing techniques to derive business value today.

We often meet product developers and entrepreneurs looking to build next-generation intelligent solutions.

If these advanced technologies are available for free, why not just jump in and start building?

Who’s Your Scotty?

The first observation is that these advanced technologies are advanced technologies. Even after you get them stood up, there’s a considerable challenge in understanding how to use them properly.

The business models and professional services offered by open source integrators are premised on this complexity. This is tough stuff.

When you’re budgeting, make sure you factor in the cost of your “Scotty”. Experts in this area don’t come cheap…

Where Are You Getting Your Dilithium Crystals?

Even if you have the expertise to operate these advanced technologies, you need to make sure you have a supply of dilithium crystals to power them. (OK, I’m probably stretching the metaphor to its breaking point, but…)

These advanced technologies need an underlying knowledge model. Whether you’re hand-rolling this model or inducing it from large amounts of representative data, it’s a huge challenge and prone to failure.

Oftentimes, linked open data is proposed as a source. This supply, much like open source technology, is awesome if your application targets an established, well-defined domain. But the specific knowledge you need is rarely already represented, particularly if you’re trying to differentiate or personalize your offering.

The challenges of acquiring and transforming this data into a form you can use demand deep expertise in the very advanced technologies discussed above!
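To make that concrete, here’s a minimal sketch (emphatically not Primal’s actual pipeline) of folding linked-open-data-style triples into a toy knowledge model. The triple values are hypothetical, DBpedia-flavored identifiers; even in this tiny example, the raw data contains a duplicate fact hiding behind inconsistent casing, the sort of mess that real transformation work has to handle at scale.

```python
# Hypothetical DBpedia-style (subject, predicate, object) triples.
triples = [
    ("dbr:Star_Trek", "dbo:creator", "dbr:Gene_Roddenberry"),
    ("dbr:Star_Trek", "rdf:type", "dbo:TelevisionShow"),
    ("dbr:Gene_Roddenberry", "rdf:type", "dbo:Person"),
    # Real open data is messy: the same fact, with inconsistent casing.
    ("dbr:star_trek", "dbo:creator", "dbr:Gene_Roddenberry"),
]

def build_model(triples):
    """Fold raw triples into an adjacency map, normalizing subject IDs."""
    model = {}
    for subject, predicate, obj in triples:
        key = subject.lower()  # crude normalization; real data needs far more
        model.setdefault(key, set()).add((predicate, obj))
    return model

model = build_model(triples)
# The duplicate creator fact collapses: the subject has 2 distinct facts, not 3.
print(len(model["dbr:star_trek"]))
```

Even this toy version has to make a normalization decision (lowercasing IDs) that would be wrong for many real vocabularies; multiply that by entity resolution, schema mapping, and licensing, and the expertise requirement becomes clear.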

Primal is Science Simplified, Not Science Fiction

Primal powers a truly personalized experience for your consumers. Our cloud-based data service creates rich, user-specific knowledge models in the form of interest graphs. Primal then aggregates and filters Web content from any source using these individual interest graphs.

The interest data we generate on your behalf is open and portable. You can be up and running on our product in a matter of minutes, not months.

It’s free to use during your evaluation and product development, and you’ll know exactly how much it costs as you ramp up from there.

Perhaps most importantly, it comes preloaded with our Scotty and all the dilithium crystals you’ll need to succeed.

Engage! (Alright, I’ll never speak of Star Trek again…)