If your organization is about to enter the world of big data, you not only need to decide whether Apache Hadoop is the right platform to use, but also which of its many components are best suited to your task. This field guide makes the exercise manageable by breaking down the Hadoop ecosystem into short, digestible sections. You'll quickly understand how Hadoop's projects, subprojects, and related technologies work together. Each chapter introduces a different topic, such as core technologies or data transfer, and explains why certain components may or may not be useful for particular needs. When it comes to data, Hadoop is a whole new ballgame, but with this handy reference, you'll have a good grasp of the playing field.

Topics include:

- Core technologies: Hadoop Distributed File System (HDFS), MapReduce, YARN, and Spark
- Database and data management: Cassandra, HBase, MongoDB, and Hive
- Serialization: Avro, JSON, and Parquet
- Management and monitoring: Puppet, Chef, ZooKeeper, and Oozie
- Analytic helpers: Pig, Mahout, and MLlib
- Data transfer: Sqoop, Flume, distcp, and Storm
- Security, access control, and auditing: Sentry, Kerberos, and Knox
- Cloud computing and virtualization: Serengeti, Docker, and Whirr