Financial Industry Increases Private Cloud Spending in 2013

AmericanBanker.com reports that a recent survey conducted by PricewaterhouseCoopers indicates that 71 percent of financial services executives intend to invest more in cloud computing this year, an increase of 18 percent from 2012. Another 50 percent intend to invest in private cloud infrastructure, including virtualized storage and network equipment.
The reason for the increase is the greater security and reliability that cloud vendors now offer. With encryption of data at rest and in transit, the enterprise cloud is secure enough to support confidential financial data and the mission-critical applications that are vital to the health of a financial company. Among those outsourcing their IT, 64 percent responded that they view data security as an extremely serious risk to their organization.
Outsourced Cloud Risks; Source: PwC
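As a rough illustration of what "encryption at rest and in transit" involves (a generic Python sketch, not any particular cloud vendor's implementation; it assumes the third-party cryptography package is installed), sensitive records are encrypted with a symmetric key before they are stored, and client connections are wrapped in TLS:

```python
# Generic illustration of at-rest and in-transit protection; not any vendor's stack.
import ssl
from cryptography.fernet import Fernet  # assumes: pip install cryptography

# At rest: encrypt data before it is written to disk or object storage.
key = Fernet.generate_key()          # in practice the key lives in a key-management service
cipher = Fernet(key)
ciphertext = cipher.encrypt(b"account=12345; balance=9,876.54")
assert cipher.decrypt(ciphertext) == b"account=12345; balance=9,876.54"

# In transit: wrap client connections in TLS with certificate verification on.
tls_context = ssl.create_default_context()
tls_context.check_hostname = True
tls_context.verify_mode = ssl.CERT_REQUIRED
```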
When it came to the three most important reasons for choosing an external private cloud service provider for their IT infrastructure, financial executives responded:
  1. Access to superior technical skills to satisfy new requirements
  2. Faster delivery of IT solutions for business requirements
  3. Reduced total cost of the IT department
Why Use a Private Cloud Service Provider; Source: PwC
An outsourced cloud service provider invests in its people, critical infrastructure, and up-to-date technology to deliver enterprise-class, fast-performing clouds. Financial firms are not in the IT business; by outsourcing their IT, they can focus on their business needs rather than the demands of operating a data center.
Similarly, with a private cloud, an outsourced IT vendor can deploy new servers faster whenever demand requires more capacity. Outsourcing to a private cloud infrastructure-as-a-service (IaaS) provider also eliminates the upfront capital cost of building and maintaining a data center and of hiring certified staff with the latest technical knowledge.
As a Gartner market analysis reported earlier this year, cloud infrastructure as a service (IaaS) spending is anticipated to exceed $72 billion, with a compound annual growth rate (CAGR) of 42 percent. Compared to other cloud service markets, including platform as a service (PaaS), business process as a service (BPaaS), and software as a service (SaaS), IaaS is the fastest-growing segment. 
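For readers unfamiliar with the metric, a compound annual growth rate simply compounds a starting figure year over year: future value = start × (1 + rate)^years. The short sketch below applies the 42 percent CAGR quoted above to a purely hypothetical starting market size, just to show how quickly such a market compounds:

```python
# CAGR arithmetic: future_value = start_value * (1 + rate) ** years.
# The 42% rate comes from the Gartner figure above; the starting size and
# horizon below are assumptions for illustration only.
def project(start_value_billions, cagr, years):
    return start_value_billions * (1 + cagr) ** years

start = 9.0    # hypothetical starting IaaS spend, in billions of dollars
for year in range(1, 6):
    print(f"year {year}: ${project(start, 0.42, year):.1f}B")
```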

Original article

Informatica VIBE (virtual data machine) makes you comfortable with technological & infrastructural changes

In a new era of data technologies, Informatica has introduced VIBE, which makes you comfortable with technological & infrastructural changes.

Informatica introduces VIBE as follows:

Change is inconvenient. It is also inevitable.

We live in the Information Age, yet data threatens to overwhelm us. New types of data, new technologies for managing that data, and new ways to use that information to drive business innovation are being introduced on a daily basis. But the complexity of accessing and managing all that data is increasing at an exponential pace.
You can’t afford to wait months to implement a new business idea. You can’t afford to recode each time you adopt a new approach or tap into a new type of data. And you don’t want to be crushed by the combined weight of new technologies and your existing infrastructure.
The only way to adapt quickly and cope in the face of accelerating change and complexity is through a secret ingredient that insulates you from all the changes in the data and the technologies. That secret ingredient is a virtual data machine (VDM). A VDM is an embeddable data management engine that separates instructions and specifications, which map out the business logic for handling data, from underlying execution technology.
Informatica offers the only VDM in the world. It’s called Vibe™ and it’s the reason our customers can map once and deploy anywhere—in the cloud or on premise; in databases, applications, middleware, or on a Hadoop cluster; in batch, request/response, or real time. With Vibe, change is not your enemy—change is your competitive weapon.
Vibe can deploy your logic regardless of type, volume, or source of data; computing platform; or end user. Most important, if any of those elements change, Vibe lets you redeploy without recoding, redevelopment, or respecification. Vibe can be embedded into applications, middleware infrastructure, and devices—wherever you need to access, aggregate, and manage data.
The Vibe virtual data machine architecture comprises a transformation library to define logic, an optimizer to choose the most efficient deployment, an executor as the run-time engine for physical execution, and connectors to data sources.
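To make the "map once, deploy anywhere" idea concrete, here is a minimal, hypothetical Python sketch of the virtual-data-machine pattern the passage describes: the mapping logic is declared once as data, and interchangeable executors (all names here are illustrative, not Informatica APIs) run that same logic in batch or per request. The point of the separation is that supporting a new execution target means adding a new executor, not rewriting the mapping.

```python
# Hypothetical sketch of the VDM pattern: logic is specified once,
# execution back ends are swappable. Names are illustrative only.

# "Transformation library": the business logic, declared as data, not code.
mapping_spec = [
    {"op": "filter", "field": "country", "equals": "US"},
    {"op": "rename", "from": "cust_id", "to": "customer_id"},
]


def apply_mapping(record, spec):
    """Run the declared transformations against one record."""
    for step in spec:
        if step["op"] == "filter" and record.get(step["field"]) != step["equals"]:
            return None                       # record filtered out
        if step["op"] == "rename" and step["from"] in record:
            record[step["to"]] = record.pop(step["from"])
    return record


# "Executors": different run-time engines that reuse the same spec.
def run_in_batch(records, spec):
    return [r for r in (apply_mapping(dict(r), spec) for r in records) if r]


def run_per_request(record, spec):
    return apply_mapping(dict(record), spec)


if __name__ == "__main__":
    rows = [{"cust_id": 1, "country": "US"}, {"cust_id": 2, "country": "DE"}]
    print(run_in_batch(rows, mapping_spec))        # batch deployment
    print(run_per_request(rows[0], mapping_spec))  # request/response deployment
```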
With Vibe as the foundation, you can then layer on services from a fully integrated information platform to transform raw data into information that provides insight and value. Because it is powered by Vibe, the Informatica Platform allows you to easily combine these services on the fly to meet your specific business requirements. The Informatica Platform is the only platform that provides the tools and capabilities for the simplest entry-level uses to the most complex cross-enterprise initiatives. And it is the most proven platform, with 5,000 customers who rely on Informatica to harness the full potential of their information to compete in today’s interconnected information age.
Our vision is simple: with Vibe, we have created the only information platform that is architected to harness change so that no matter what the future holds, you can always put your information potential to work.

Source : http://www.informatica.com/us/vision/vibe/

Facebook and MicroStrategy Partner for New Big Data Analytics

During MicroStrategy's annual European user conference in Barcelona, Guy Bayes, head of Enterprise BI at Facebook, spoke on the topic of Big Data at Facebook. He discussed the technical challenge posed by the massive scale of the data generated by its 1.1+ billion members. To analyze this data interactively from any dimension, Facebook engaged MicroStrategy to create and test a new massively parallel in-memory analytic technology, which MicroStrategy calls "EMMA" (Extended MPP Memory Architecture). The project has been underway for one year, and the current prototype configuration is successfully analyzing just over half of the Facebook dataset while achieving an average response time of under 5 seconds. At full configuration, the system will run on a cluster of hundreds of commodity servers holding over 10 TB of data in memory. Mr. Bayes suggested that once this massive interactive analytic technology is commercialized by MicroStrategy for general availability to other companies, it will have been thoroughly and strenuously tested against the largest datasets anywhere and should enter the market as a very mature, high-performance technology.
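MicroStrategy has not published EMMA's internals, but the general pattern behind a massively parallel, in-memory analytic engine can be sketched in a few lines: partition the data across workers that each hold their slice in memory, scan all partitions simultaneously, and merge the small partial results. The Python sketch below is a generic illustration of that scatter/scan/merge idea, not a description of EMMA itself.

```python
# Generic illustration of partitioned, in-memory parallel aggregation.
# This is not EMMA; it only shows the scatter/scan/merge pattern.
from multiprocessing import Pool
from random import randint


def scan_partition(partition):
    """Each worker scans its in-memory partition and returns a partial result."""
    clicks = [row["clicks"] for row in partition]
    return sum(clicks), len(clicks)


if __name__ == "__main__":
    # Pretend each list is a partition pinned in a worker's memory.
    partitions = [
        [{"user": i, "clicks": randint(0, 50)} for i in range(100_000)]
        for _ in range(8)
    ]
    with Pool(processes=8) as pool:
        partials = pool.map(scan_partition, partitions)   # parallel scan
    total_clicks = sum(s for s, _ in partials)            # cheap merge
    total_rows = sum(n for _, n in partials)
    print("average clicks per user:", total_clicks / total_rows)
```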



Talend Interview Questions

  1. What is Talend?
  2. What is the difference between the ETL and ELT components of Talend?
  3. How to deploy Talend projects?
  4. What are the available versions of Talend?
  5. How to implement versioning for Talend jobs?
  6. What is the tMap component?
  7. What is the difference between the tMap and tJoin components?
  8. Which component is used to sort data?
  9. How to perform aggregate operations/functions on data in Talend?
  10. What types of joins are supported by the tMap component?
  11. How to schedule a Talend job?
  12. How to run a Talend job as a web service?
  13. How to integrate SVN with Talend?
  14. How to run Talend jobs on a remote server?
  15. How to pass data from a parent job to child jobs through the tRunJob component?
  16. How to load context variables dynamically from a file/database? (see the sketch after this list)
  17. How to run Talend jobs in parallel?
  18. What are context variables?
  19. How to export a Talend job?
  20. What is the purpose of Talend Runtime?
  21. How to use the Talend Job Conductor?
  22. How to send email notifications with the job execution status?
  23. How to implement a full outer join in Talend?
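By way of illustration for the context-variable questions above, here is a minimal Python sketch (the job path, context name, and parameter names are hypothetical) of one common pattern: invoking the launch script of a job exported from Talend Studio and supplying context values dynamically at run time. Exported job scripts generally accept a --context selector and --context_param name=value pairs, though exact flag handling can vary by Talend version.

```python
# Hypothetical wrapper around a job exported from Talend Studio.
# The script path, context name, and parameter names are made up for
# illustration; exported launch scripts typically forward these arguments
# to the generated Java job.
import subprocess


def run_talend_job(script_path, context, params):
    cmd = [script_path, f"--context={context}"]
    for name, value in params.items():
        cmd += ["--context_param", f"{name}={value}"]   # dynamic context values
    return subprocess.run(cmd, capture_output=True, text=True, check=True)


if __name__ == "__main__":
    result = run_talend_job(
        "/opt/talend/jobs/LoadCustomers/LoadCustomers_run.sh",  # hypothetical path
        context="Production",
        params={"db_host": "db.example.com", "batch_date": "2013-06-30"},
    )
    print(result.stdout)
```

Inside the Studio itself, the tContextLoad component serves the same purpose, reading key/value pairs from a file or database at the start of the job.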
I will add more questions.

Keep following.

For Talend training, please contact us at:
Email : sureshreddy1.9989241627@gmail.com
Phone : 09989241627 / 07757886316

How Will the Future of Big Data Impact the Way We Work and Live?

The semantic web, or web 3.0, is often cited as the next phase of the Internet. In a previous post I discussed the impact of big data on the semantic web, and I mentioned that the semantic web will enable all humans, as well as all internet-connected devices, to communicate with each other and to share and re-use data in different forms across different applications and organizations in real time. The future of big data takes full advantage of the semantic web, and it will have a vast impact on organisations and society.
Jason Hoffman, CTO of Joyent, predicts that the future of big data will be about the convergence of data, computing and networks. The PC was the convergence of computing and networks, while the convergence of computing and data will enable analysis to be performed directly on exabytes of raw data, allowing ad hoc questions to be asked of extremely large data sets.
Artificial intelligence that matches human intelligence will allow us to ask questions and find answers more easily, simply by posing natural questions to computers. Japanese scientists have already built a supercomputer that mimics the brain's cell network and reached 1 percent of brain capacity. To achieve this, it simulated a network consisting of 1.73 billion nerve cells connected by 10.4 trillion synapses; the process took 40 minutes to complete the simulation of 1 second of neuronal network activity in real, biological time. In the coming years these supercomputers will become the standard. At the moment, users still need to know what they want to know, but in a future with such supercomputers it will be all about the things that you don't know.
The real benefits will come when organizations no longer have to ask questions to obtain answers, but simply find answers to questions they never could have thought of. Advanced pattern discovery and categorization of patterns will enable algorithms to perform the decision making for organizations. Extensive and beautiful visualizations will become more important and will help organizations understand the brontobytes of data.
Big data scientists will be in very high demand in the coming decades, as McKinsey predicted back in 2011. The real winners in the big data startup field, however, will be those companies that can make big data so easy to understand, implement and use that big data scientists are no longer necessary. Large corporations will always employ big data scientists, but the much larger market of small and medium-sized enterprises does not have the money to hire expensive big data scientists or analysts. Those big data startups that enable big data for SMEs without the need to hire big data experts will have a huge competitive advantage.
The algorithms developed by those big data startups will become ever smarter, smartphones will keep getting better, and in the future anyone will have a supercomputer in their pocket that can perform daunting computing tasks in real time and visualize the results on the small screen in their hand. And with the Internet of Things and trillions of sensors, the amount of data that needs to be processed by these devices will grow exponentially.
Big data will only become bigger, and brontobytes will become common language in the boardroom. Fortunately, data storage will also become more widely available and cheaper in order to cope with the vast amount of data. Brontobytes of data will become so common in boardrooms that eventually the term big data will disappear and big data will become just data again.
However, before we reach that stage, the growing amount of data processed by companies and governments will create privacy concerns. Organizations that stick to ethical guidelines will survive, while organizations that treat privacy lightly will disappear, as privacy will be self-regulating. The problem, however, will be with governments, as citizens cannot simply move away from their government. Large public debates about the effects of big data on consumer privacy will be inevitable, and together we have to ensure that we do not end up in Minority Report 2.0 or in a '1984' setting.
The future of big data is still uncertain, as the big data era is still unfolding, but it is clear that the changes ahead of us will transform organizations and societies. Big data is here to stay, and organizations will have to adapt to the new paradigm. Organizations might be able to postpone their big data strategy a little, but we have seen that organizations that have already implemented a big data strategy outperform their peers. Therefore, start developing your big data strategy, as there is no time to waste if your organization also wants to provide products and services in the upcoming big data era.


Original article

Recorded Future

Predictive analytics is becoming more important, as it is the most valuable form of analysis within big data: it helps predict what someone is likely to buy, visit or do, or how someone will behave in the (near) future. It uses a variety of data sets, such as historical, transactional, social or customer profile data, to identify risks and opportunities. Recorded Future is a big data startup, founded in 2009, that focuses solely on the art of predictive analytics.
They have developed linguistic and statistical algorithms that can extract information from temporal signals on the web. They scan tens of thousands of different websites, ranging from high-quality news publications and public niche sources to government websites, blogs and financial databases, to identify references to entities, such as people, groups or locations, and to events in the future. The algorithms can detect the different time periods in which the events will occur and deliver that information to the user, including sentiment analysis on the topic.
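As a very rough illustration of extracting a "temporal signal" from text (a toy sketch using only the Python standard library; it bears no relation to Recorded Future's actual linguistic and statistical algorithms), the snippet below pulls explicit future dates out of sentences and orders the sentences by the date they reference:

```python
# Toy temporal-signal extraction: find explicit dates in text and keep the
# sentences that reference the future. Illustration only, not Recorded Future.
import re
from datetime import datetime

DATE_PATTERN = re.compile(r"\b(January|February|March|April|May|June|July|"
                          r"August|September|October|November|December)"
                          r"\s+(\d{1,2}),\s+(\d{4})\b")


def extract_events(sentences, today):
    """Return (date, sentence) pairs for sentences that mention a future date."""
    events = []
    for sentence in sentences:
        for match in DATE_PATTERN.finditer(sentence):
            when = datetime.strptime(" ".join(match.groups()), "%B %d %Y")
            if when > today:
                events.append((when.date(), sentence))
    return sorted(events)


docs = [
    "The company will announce quarterly results on October 24, 2013.",
    "Analysts met on January 3, 2012 to review the figures.",
    "A product launch is planned for November 5, 2013 in Barcelona.",
]
for when, sentence in extract_events(docs, today=datetime(2013, 6, 1)):
    print(when, "->", sentence)
```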
They claim to unlock the predictive power of the web with the world's first temporal analytics engine. They work for Fortune 500 companies, advanced financial institutions and government agencies from around the world. These organisations use Recorded Future as Software-as-a-Service, or developers can tap into the API the company has developed. The API gives access to an index for analysis of the online media flow, spanning everything from blogs and Twitter to mainstream news and government filings, all collected in real time from public sources around the world.
Recorded Future is headquartered in Cambridge, MA, and has offices in Göteborg, Sweden, and Arlington, VA. It was founded in 2009 by Erik Wistrand, Staffan Truvé and Christopher Ahlberg. Since then it has received over $20 million in funding from Google Ventures, IA Ventures, In-Q-Tel, Atlas Venture and Balderton Capital.
Recorded Future takes a very interesting approach to big data and to giving organisations the predictive insights that help them make better decisions. Co-founder Christopher Ahlberg was named among the World's Top 100 Young Innovators by MIT Technology Review and received the TR100 award in 2002. He has also been granted two software patents and has multiple patents pending.


Original article