12 Jul
What is Big Content? And Why Does it Matter?
Editor’s note – this is part one of a two-part series in which we explore the world of Big Content. Check back in two weeks for the second part, where Principal Engineer Travis Whelan will discuss strategies for wrangling your content.
If you’ve heard RhinoDox CEO Justin Ullman speak, you’ve heard the term “Big Content”. And if you haven’t, you should check out this demo video. But even after listening to our descriptions of “Big Content”, you might still think, “I’ve heard of big data, but big content? What are they talking about?”
The term “Big Data” surfaced in 2005, when O’Reilly Media’s Roger Mougalas first used it to describe the massive scale of data generated by modern technology companies like Facebook, Amazon, and Netflix. The need to process data at scale – millions of users interacting with a product many times a day – birthed a new technology discipline, requiring a new set of tools and expertise. In the years that followed, organizations large and small began to recognize that they too had “big data problems”, the result not of a constant flow of massive amounts of data, but of years and years of accumulated historical data. Now, every major company has teams of data scientists working to make sense of all that data and drive value from it.
But What Does This Have to do With Content Management?
Think about all the content you generate on a daily basis. You have documents (both paper and electronic), emails, notes, photos, blogs, websites, phone calls, hallway conversations – just to name a few. Now, multiply that by everyone in your organization, then again by all the days in the year, and all the years your organization has existed.
That’s Big Content.
The typical organization has years’ worth – not to mention terabytes – of emails, documents, and other content collecting dust in a variety of places: the inboxes of employees past and present, a legacy ECM, a network file system (you probably call this your H drive, U drive, or Z drive), filing cabinets, boxes in a warehouse, or stacks of notebooks you’ll probably throw away someday. Valuable information sits filed away and forgotten in all of these places.
In reality, most organizations don’t have Big Data problems; they have Big Content problems, and many data science teams are ignoring content in their data strategies. The industry needs a paradigm shift. It’s time to recognize that the minute you “file away” a piece of content, it stops providing value to your organization. To maintain a competitive edge, businesses will need to deploy technology solutions to the Big Content problem and extract valuable insights from the decades of content they’ve produced.
In part two of this series, I’ll discuss some strategies for dealing with your big content problems and introduce some concepts to help you wrangle all of your content and drive value for your organization. Check back on July 26 for that piece. In the meantime, check out my blog on the future of Content Management and Artificial Intelligence.
Travis Whelan is the Principal Engineer at RhinoDox. He has been in the software industry for more than 15 years, and in the ECM industry for more than 10. When he’s not a busy Rhino, Travis is pretending he’s in the Chopped kitchen serving up creative meals to his family and friends. It’s unknown whether or not any of them have actually enjoyed his cooking.