Hadoop: The Definitive Guide, 4th Edition

Regular price €64.99

Product details

  • ISBN: 9781491901632
  • Weight: 1260 g
  • Dimensions: 180 x 236 mm
  • Publication Date: 05 May 2015
  • Publisher: O'Reilly Media
  • Publication City/Country: US
  • Product Form: Paperback
  • Language: English
Delivery/Collection within 10-20 working days

Our Delivery Time Frames Explained
2-4 Working Days: Available in-stock

10-20 Working Days: On Backorder

Will Deliver When Available: On Pre-Order or Reprinting

We ship your order once all items have arrived at our warehouse and are processed. Need those 2-4 day shipping items sooner? Just place a separate order for them!

Ready to unlock the power of your data? With the fourth edition of this comprehensive guide, you'll learn how to build and maintain reliable, scalable, distributed systems with Apache Hadoop. This book is ideal for programmers looking to analyze datasets of any size, and for administrators who want to set up and run Hadoop clusters. You'll find illuminating case studies that demonstrate how Hadoop is used to solve specific problems. This edition includes new case studies, updates on Hadoop 2, a refreshed HBase chapter, and new chapters on Crunch and Flume. Author Tom White also suggests learning paths for the book.

You'll learn how to:

  • Store large datasets with the Hadoop Distributed File System (HDFS)
  • Run distributed computations with MapReduce (a minimal sketch follows this list)
  • Use Hadoop's data and I/O building blocks for compression, data integrity, serialization (including Avro), and persistence
  • Discover common pitfalls and advanced features for writing real-world MapReduce programs
  • Design, build, and administer a dedicated Hadoop cluster, or run Hadoop in the cloud
  • Load data from relational databases into HDFS using Sqoop
  • Perform large-scale data processing with the Pig query language
  • Analyze datasets with Hive, Hadoop's data warehousing system
  • Take advantage of HBase for structured and semi-structured data, and ZooKeeper for building distributed systems
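As a taste of the kind of program the book teaches, here is a minimal word-count sketch written against Hadoop's standard org.apache.hadoop.mapreduce API, the classic introductory MapReduce example. The class name and input/output paths are illustrative placeholders, not code taken from the book.

// Minimal word-count sketch using Hadoop's MapReduce API (Hadoop 2.x).
// Class names and the input/output paths are illustrative placeholders.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emits (word, 1) for every token in each input line.
  public static class TokenizerMapper
      extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reducer: sums the counts emitted for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);   // combiner is optional but typical
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

A job like this is typically packaged as a JAR and submitted to a cluster with the hadoop jar command, passing the HDFS input and output paths as arguments.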
Tom White has been an Apache Hadoop committer since February 2007 and is a member of the Apache Software Foundation. He has written numerous articles for O'Reilly, java.net, and IBM's developerWorks, and has spoken at several conferences, including ApacheCon 2008, where he presented on Hadoop.