In-Memory Computing Landscape: Its Role and Advantages

  • By Mohamed Usama Mansoor
  • 1 Apr, 2015

Traditionally, big data is read from disk and then processed. As a result, most big data systems are latency bound: the CPU often sits idle waiting for data to arrive. The problem is especially pronounced in use cases such as graph searches, which need to randomly access different parts of a dataset. In-memory computing proposes an alternative model in which data is loaded into and kept in memory, and processed there instead of being read repeatedly from disk.
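
As a rough illustration of this model (not taken from the webinar), the sketch below loads a hypothetical key-value dataset into a Java HashMap once, so that later random lookups are served from RAM rather than by disk seeks. The class name, keys, and values are placeholders.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the in-memory model described above: load the dataset
// into a hash map once, then serve random lookups from RAM instead of
// triggering a disk seek per access. Keys and values are hypothetical.
public class InMemoryLookup {

    private final Map<String, String> index = new HashMap<String, String>();

    // Load (or reload) records into memory up front.
    public void put(String key, String value) {
        index.put(key, value);
    }

    // Random access is now a memory lookup, not a disk seek, which is
    // what makes access patterns like graph traversal practical.
    public String lookup(String key) {
        return index.get(key);
    }

    public static void main(String[] args) {
        InMemoryLookup store = new InMemoryLookup();
        store.put("vertexA", "neighbours: B, C");
        store.put("vertexB", "neighbours: A");
        System.out.println(store.lookup("vertexA")); // neighbours: B, C
    }
}
```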

Although such designs cost more in terms of memory, the resulting systems can sometimes be orders of magnitude faster (e.g. 1,000x), which can lead to savings in the long run. With memory prices falling rapidly, this cost difference shrinks by the day. Furthermore, in-memory computing enables use cases, such as ad hoc analysis over large datasets, that were not possible before.
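
To make the ad hoc analysis point concrete, here is a small, hypothetical Java 8 sketch: once the working set fits in memory, an ad hoc question such as "average value per category" becomes a single stream pass over a collection. The Event class and sample figures are invented for illustration only.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative sketch: an ad hoc group-by average over an in-memory
// dataset, with no disk scan involved. Event and the sample values
// are hypothetical.
public class AdHocAnalysis {

    static class Event {
        final String category;
        final double value;
        Event(String category, double value) {
            this.category = category;
            this.value = value;
        }
    }

    public static void main(String[] args) {
        List<Event> events = Arrays.asList(
                new Event("checkout", 120.0),
                new Event("checkout", 80.0),
                new Event("search", 5.0));

        // Single in-memory pass: group events by category and average them.
        Map<String, Double> avgByCategory = events.stream()
                .collect(Collectors.groupingBy(
                        e -> e.category,
                        Collectors.averagingDouble(e -> e.value)));

        // Prints the averages, e.g. {checkout=100.0, search=5.0}
        System.out.println(avgByCategory);
    }
}
```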

In this webinar, Srinath Perera, Director of Research at WSO2, will discuss:

  • An overview of in-memory technology
  • Common in-memory computing patterns
  • WSO2 technologies, such as complex event processing, that can be used to build in-memory solutions
Presenter
Srinath Perera, Director - Research, WSO2

Srinath is Director of Research at WSO2. In his current role, he oversees the overall WSO2 platform architecture alongside WSO2 CTO Paul Fremantle. He specializes in Web services and distributed systems, working in particular on data, scale, and performance. Srinath is a co-founder of Apache Axis2, an elected member of the Apache Software Foundation, and a Project Management Committee (PMC) member. He has been involved with the Apache Web Services project since 2002 and is a committer on several Apache open-source projects, including Apache Axis, Axis2, and Geronimo.