Now that IBM has sunk its teeth into the big data market, it wants companies to use its technology correctly so they can get the most out of their raw information. The vendor's latest infographic outlines 12 best practices that users should follow to realize this goal.
Big Blue places management and disaster recovery at the top of the
chart: organizations must gain visibility into their infrastructure and
prepare for worst-case scenarios if they want to get their big data
under control. Achieving operational efficiency is an even more
complicated task.
IBM lays out the prerequisites for this next stage: an organization
must be able to scale up rapidly and cost-efficiently; it must be able
to do the same with backup; and all applications and processes need to
be optimized to meet business requirements. The big data leader also
stresses the need for tight security, a clearly defined set of policies
governing the usage of data, and the ability to effectively audit the
entire stack.
Replication is number nine on the list, followed immediately by
virtualization: users need reliable access to data, while admins require
tools that can help them make use of all the available resources on the
network. IBM’s final two best practices are archiving, for the
purpose of future analysis, and constant availability.
Big Blue’s stance is that Hadoop, real-time analytics and
leading NoSQL platforms still have a long way to go before becoming
truly viable for businesses, but it has no intention of waiting it out.
The company launched a new series of Hadoop-powered SMB servers just last week, a day after announcing the acquisition of Star Analytics.
Original source: Taming Big Data: 12 Best Practices for Analysts