- Design and implement analytic and data processing algorithms on large scale data sets across next generation, distributed data lake platforms
- Participate in the renaissance of the functional programming paradigm within Apache Spark, one of the industry’s most widely adopted analytics frameworks
- Transform user stories into production software.
- Code a set of Scala routines in Spark to calculate physician efficiency or patient-centric clinical quality of care (a sketch of this kind of routine follows this list).
- Build a REST service to update a value-based contract definition (a minimal example appears at the end of this posting).
- Debate and recommend the use of SQL on Hadoop, NoSQL, or MPP data storage architectures.
- Identify and evaluate commercially available or open source frameworks to leverage for non-differentiating product capabilities.
- Help the infrastructure specialists to optimize the configuration and performance of the analytics processing cluster.
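For illustration, here is a minimal sketch of the kind of Spark routine described above, written in Scala against the DataFrame/Spark SQL API. The S3 path, the column names (`physician_id`, `episode_cost`, `expected_cost`), and the efficiency definition (observed cost over risk-adjusted expected cost) are assumptions for the example, not the product’s actual schema or metric.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object PhysicianEfficiency {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("PhysicianEfficiency")
      .getOrCreate()

    // Claims data is assumed to live on S3 as Parquet (hypothetical bucket/path).
    val claims = spark.read.parquet("s3a://example-bucket/claims/")

    // Efficiency is sketched here as observed cost divided by risk-adjusted
    // expected cost, aggregated per physician; a lower ratio suggests more
    // efficient care. The real metric would be defined by the product team.
    val efficiency = claims
      .groupBy("physician_id")
      .agg(
        sum("episode_cost").as("observed_cost"),
        sum("expected_cost").as("expected_cost")
      )
      .withColumn("efficiency_ratio", col("observed_cost") / col("expected_cost"))

    efficiency.show()
    spark.stop()
  }
}
```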
- Familiarity with Agile and Scrum delivery.
- Experience with dev/test/production environments on AWS; the primary stack is Spark, Hadoop (HDFS), S3, Scala, Play, Akka, Java, and Linux.
- Practical experience building Spark applications with RDDs, Spark SQL, MLlib, and R is a big plus.
- Working knowledge of data storage architectures such as RDBMS, object stores, SQL on Hadoop, MPP, NoSQL, and document databases.
- Experience building JVM-hosted REST services in Scala with JSON manipulation is a plus.
- You’ve been in the trenches delivering commercial applications spanning SaaS, multi-tenancy, batch and real-time analytics, decision support, and business intelligence analysis and reporting.
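Below is a minimal sketch of a JVM-hosted REST endpoint in Play for updating a value-based contract definition, as mentioned above. The `Contract` model and its fields, the route, and the echo-back behavior are illustrative assumptions; a real service would validate against the contract store and persist the change.

```scala
import javax.inject.Inject
import play.api.libs.json._
import play.api.mvc._

// Hypothetical contract model; the real definition would carry many more fields.
case class Contract(id: String, name: String, targetRate: Double)

object Contract {
  // Play JSON macro derives Reads/Writes for (de)serialization.
  implicit val format: OFormat[Contract] = Json.format[Contract]
}

class ContractController @Inject()(cc: ControllerComponents)
    extends AbstractController(cc) {

  // Assumed conf/routes entry:
  //   PUT /contracts/:id  controllers.ContractController.update(id: String)
  // Parses the JSON body into a Contract and echoes it back with the path id;
  // a real service would persist the update instead of echoing.
  def update(id: String): Action[JsValue] = Action(parse.json) { request =>
    request.body.validate[Contract] match {
      case JsSuccess(contract, _) => Ok(Json.toJson(contract.copy(id = id)))
      case JsError(errors)        => BadRequest(JsError.toJson(errors))
    }
  }
}
```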