Businesses want to run analytical jobs at scale on Hadoop, but different parts of such a job are candidates for different execution engines, each of which performs best under specific conditions. In addition, older engines such as classic MapReduce are giving way to newer ones such as Spark. This talk demonstrates how the Datameer application can hide the complexity of choosing the right execution engine for an analytical job at scale on Hadoop, and how Spark fits into this context.