Integrating Hadoop YARN with Mesos infrastructure
I have created an HDFS cluster. I need to configure YARN so that the YARN ApplicationMaster can create containers for job processing on a Mesos cluster on demand.
How can I integrate the HDFS cluster with the Mesos infrastructure so that it can create containers on Mesos?
I need to figure out a way to run the containers created by the ApplicationMaster on resources other than the YARN cluster (a client node, an edge node, or resources spun up through the Mesos infrastructure). Basically, I have to create an on-demand, compute-only cluster that can run YARN apps once the YARN cluster's own capacity is used up.
Solution 1:[1]
Mesos was created as a more generic version of YARN; the two are not really intended to be used together (YARN applications cannot be deployed to Mesos). Spark is about the only application in the whole Hadoop ecosystem that can be deployed (independently) to both.
It is worth pointing out that Mesos was moved to the Apache Attic (edit: and was quickly moved back out, it seems, but there have been no releases since then). In other words, it is seen as deprecated. With a bit of configuration, YARN can run plain Docker containers, if that is what you are using Mesos for. Apache Twill was a library for creating distributed applications on top of YARN, but that is also in the Apache Attic (and stayed there).
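To illustrate the "YARN can run plain Docker containers" point, here is a minimal sketch of the NodeManager side, based on the Hadoop 3.x Docker-on-YARN feature; treat the exact property names and values as assumptions to verify against your Hadoop version's documentation (the `container-executor.cfg` Docker section must also be enabled on each node):

```xml
<!-- yarn-site.xml (sketch): enable the Linux container executor and
     allow the Docker runtime alongside the default runtime.
     Verify these property names against your Hadoop release. -->
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
  <name>yarn.nodemanager.runtime.linux.allowed-runtimes</name>
  <value>default,docker</value>
</property>
<property>
  <name>yarn.nodemanager.runtime.linux.docker.allowed-container-networks</name>
  <value>host,bridge</value>
</property>
```

Individual jobs then opt in per container through environment variables such as `YARN_CONTAINER_RUNTIME_TYPE=docker` and `YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=<image>` (the image name here is a placeholder for whatever you were running on Mesos).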
You also don't need any special configuration to communicate with HDFS from Mesos applications; only the hadoop-client dependency and configured core-site.xml and hdfs-site.xml files.
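As a sketch of that setup for a JVM application running on Mesos: the build pulls in hadoop-client, and a core-site.xml on the classpath points at the NameNode. The version number and `namenode-host` below are placeholders; substitute your cluster's Hadoop release and actual NameNode address.

```xml
<!-- pom.xml (sketch): the only Hadoop dependency the client app needs.
     The version is a placeholder; match your cluster's Hadoop release. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>3.3.6</version>
</dependency>
```

```xml
<!-- core-site.xml (sketch), placed on the application's classpath.
     "namenode-host" is a placeholder for your NameNode's address. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode-host:8020</value>
  </property>
</configuration>
```

With those in place, the standard `FileSystem.get(conf)` API resolves paths like `hdfs:///data/input` against the configured cluster, regardless of whether the process was launched by Mesos, YARN, or by hand.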
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
