- This specific Code.Learn program lasts 3 weeks (3 Fridays & 3 Saturdays), with 35 hours of lectures and hands-on exercises on real-life case studies and projects.
Key Objectives – Curriculum
The core objective of this program is to present, explore, and cover in depth, through extended real-life business case studies and industry scenarios, the following areas:
- The big data problem, its dimensions, and application areas
- Big data modeling, data management and systems
- Hadoop ecosystem (HDFS, MapReduce, YARN, and Common) & key components (Spark, Hive, Pig, Oozie, Sqoop, and others)
- Kafka (real-time data pipelines and streaming apps)
- Big data integration and processing
- Mining of Massive Datasets
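To give a flavour of the hands-on exercises, the MapReduce model covered in the Hadoop sessions can be sketched in plain Python. This is an illustrative example only, not taken from the course material: the function names (`map_phase`, `reduce_phase`) and the sample input are assumptions for the sketch.

```python
from itertools import groupby
from operator import itemgetter

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in the input document.
    for word in document.lower().split():
        yield (word, 1)

def reduce_phase(pairs):
    # Shuffle/sort: order pairs by key so identical words are adjacent,
    # then reduce: sum the counts for each word.
    sorted_pairs = sorted(pairs, key=itemgetter(0))
    return {word: sum(count for _, count in group)
            for word, group in groupby(sorted_pairs, key=itemgetter(0))}

counts = reduce_phase(map_phase("big data big ideas"))
# counts == {"big": 2, "data": 1, "ideas": 1}
```

In Hadoop, the same map and reduce functions run distributed across a cluster, with HDFS storing the input splits and YARN scheduling the tasks; the single-process version above only shows the programming model.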
The lessons can be carried out:
- Inside a physical classroom with an instructor,
- In an online environment as a virtual classroom, with a live connection to the instructor through video conferencing; or lastly,
- In a combination of both physical and online formats.
The teaching method will depend on current conditions and on the participants’ preferences.
In the online format, the instructor delivers the material through screen sharing, live broadcast, or by working in the cloud, where attendees can see and interact with everything in real time. Attendees can actively participate and ask questions, just as they would in a physical classroom. Additionally, they can collaborate on team projects and deliver assignments and hands-on projects, which the instructor can review and give feedback on promptly.
Education & Experience
Computer scientists, software engineers, developers, data engineers, database engineers, and data integration developers are welcome to participate in this Code.Learn program, unlock the full potential of the topics taught, and upskill for their future careers.
Participants are expected to have hands-on experience with relational databases and knowledge of programming basics.