{"id":1302,"date":"2016-05-31T19:30:18","date_gmt":"2016-05-31T17:30:18","guid":{"rendered":"https:\/\/elkano.org\/blog\/?p=1302"},"modified":"2016-05-31T14:33:41","modified_gmt":"2016-05-31T12:33:41","slug":"ceph-sata-ssd-pools-server-editing-crushmap","status":"publish","type":"post","link":"https:\/\/elkano.org\/blog\/ceph-sata-ssd-pools-server-editing-crushmap\/","title":{"rendered":"CEPH: SATA and SSD pools on the same server without editing crushmap"},"content":{"rendered":"<p>I had some free slots in two of my ceph nodes and I used them to set up a new SSD-only pool. Because the SSD OSDs share their servers with an existing SATA pool, some additional steps are needed. There are some good posts out there that explain how to set up two pools sharing the same server, but they require editing the ceph crushmap manually. Although that is not very difficult, I achieved the same goal by setting the crush location parameter for those OSDs. I&#8217;ve tested it on the Hammer release.<\/p>\n<p>First, create a new root bucket for the ssd pool. 
This bucket will be used to set the ssd pool location using a crush rule.<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-linenumbers=\"false\" data-enlighter-theme=\"enlighter\" data-enlighter-language=\"xml\">\r\nceph osd crush add-bucket ssds root\r\n<\/pre>\n<p>We already have some servers with SATA OSDs in production, but we have to add two new host buckets for the fake hostnames that we are going to use for the ssd OSDs.<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-linenumbers=\"false\" data-enlighter-theme=\"enlighter\" data-enlighter-language=\"xml\">\r\nceph osd crush add-bucket ceph-node1-ssd host\r\nceph osd crush add-bucket ceph-node2-ssd host\r\n<\/pre>\n<p>Move the host buckets to the ssds root:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-linenumbers=\"false\" data-enlighter-theme=\"enlighter\" data-enlighter-language=\"xml\">\r\nceph osd crush move ceph-node1-ssd root=ssds\r\nceph osd crush move ceph-node2-ssd root=ssds\r\n<\/pre>\n<p>In the ceph configuration file (ceph.conf), set the <strong>crush location<\/strong> for the SSD OSDs. 
This is necessary because the default location is always the hostname obtained with the command <strong>hostname -s<\/strong>.<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-linenumbers=\"false\" data-enlighter-theme=\"enlighter\" data-enlighter-language=\"xml\">\r\n[osd.35]\r\nhost = ceph-node1\r\nosd_journal = \/dev\/disk\/by-id\/ata-INTEL_SSDSC2BB016T6_BTWA543204R11P6KGN-part1\r\ncrush_location = root=ssds host=ceph-node1-ssd\r\n<\/pre>\n<p>You can check the location of the OSD by running this command:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-linenumbers=\"false\" data-enlighter-theme=\"enlighter\" data-enlighter-language=\"xml\">\r\n$ ceph-crush-location --id 35 --type osd\r\n root=ssds host=ceph-node1-ssd\r\n<\/pre>\n<p>For each new ssd OSD, add it to the crushmap and set its location under the ssds root:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-linenumbers=\"false\" data-enlighter-theme=\"enlighter\" data-enlighter-language=\"xml\">\r\nceph osd crush add 35 1.5 root=ssds\r\nceph osd crush set osd.35 1.5 root=ssds host=ceph-node1-ssd\r\n<\/pre>\n<p>Create a new SSD pool:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-linenumbers=\"false\" data-enlighter-theme=\"enlighter\" data-enlighter-language=\"xml\">\r\nceph osd pool create ssdpool 128 128\r\n<\/pre>\n<p>Create a crush rule in the ssds root:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-linenumbers=\"false\" data-enlighter-theme=\"enlighter\" data-enlighter-language=\"xml\">\r\nceph osd crush rule create-simple ssdpool ssds host\r\n<\/pre>\n<p>Finally, assign the new rule to the ssdpool:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-linenumbers=\"false\" data-enlighter-theme=\"enlighter\" data-enlighter-language=\"xml\">\r\n$ ceph osd pool set ssdpool crush_ruleset 4\r\n set pool 5 crush_ruleset to 4\r\n<\/pre>\n<p>Done! 
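You can double-check that the pool picked up the rule with <strong>ceph osd pool get<\/strong> (note that the ruleset id and pool name here are the ones from my cluster; yours may differ):<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-linenumbers=\"false\" data-enlighter-theme=\"enlighter\" data-enlighter-language=\"xml\">\r\n$ ceph osd pool get ssdpool crush_ruleset\r\n<\/pre>\n<p>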
Now we have a new SSD-only pool:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-linenumbers=\"false\" data-enlighter-theme=\"enlighter\" data-enlighter-language=\"xml\">\r\n$ ceph osd tree\r\nID  WEIGHT   TYPE NAME                       UP\/DOWN REWEIGHT PRIMARY-AFFINITY \r\n-25  3.00000 root ssds                                                    \r\n-26  1.50000     host ceph-node1-ssd                                          \r\n 35  1.50000         osd.35                       up  1.00000          1.00000 \r\n-27  1.50000     host ceph-node2-ssd                                          \r\n 36  1.50000         osd.36                       up  1.00000          1.00000 \r\n-21 48.22978 root sata                                                    \r\n-24  6.50995     host ceph-node1                                              \r\n  1  0.92999         osd.1                        up  1.00000          1.00000 \r\n  5  0.92999         osd.5                        up  1.00000          1.00000 \r\n 16  0.92999         osd.16                       up  1.00000          1.00000 \r\n 17  0.92999         osd.17                       up  1.00000          1.00000 \r\n 18  0.92999         osd.18                       up  1.00000          1.00000 \r\n 19  0.92999         osd.19                       up  1.00000          1.00000 \r\n 20  0.92999         osd.20                       up  1.00000          1.00000 \r\n-22  6.50995     host ceph-node2                                              \r\n 21  0.92999         osd.21                       up  1.00000          1.00000 \r\n 22  0.92999         osd.22                       up  1.00000          1.00000 \r\n 23  0.92999         osd.23                       up  1.00000          1.00000 \r\n 24  0.92999         osd.24                       up  1.00000          1.00000 \r\n 25  0.92999         osd.25                       up  1.00000          1.00000 \r\n 26  0.92999         osd.26                       up  1.00000          1.00000 \r\n 27  0.92999         osd.27                       up  1.00000          1.00000 \r\n -7 13.29996     host ceph-node3                                              \r\n  0  1.89999         osd.0                        up  1.00000          1.00000 \r\n  6  1.89999         osd.6                        up  1.00000          1.00000 \r\n  9  1.89999         osd.9                        up  1.00000          1.00000 \r\n 11  1.89999         osd.11                       up  1.00000          1.00000 \r\n 14  1.89999         osd.14                       up  1.00000          1.00000 \r\n 15  1.89999         osd.15                       up  1.00000          1.00000 \r\n  2  1.89999         osd.2                        up  1.00000          1.00000 \r\n-20  6.50995     host ceph-node4                                              \r\n 28  0.92999         osd.28                       up  1.00000          1.00000 \r\n 29  0.92999         osd.29                       up  1.00000          1.00000 \r\n 30  0.92999         osd.30                       up  1.00000          1.00000 \r\n 31  0.92999         osd.31                       up  1.00000          1.00000 \r\n 32  0.92999         osd.32                       up  1.00000          1.00000 \r\n 33  0.92999         osd.33                       up  1.00000          1.00000 \r\n 34  0.92999         osd.34                       up  1.00000          1.00000 \r\n-14 15.39998     host ceph-node5                                              \r\n  3  2.20000         osd.3                        up  1.00000          1.00000 \r\n  4  2.20000         osd.4                        up  1.00000          1.00000 \r\n  7  2.20000         osd.7                        up  1.00000          1.00000 \r\n  8  2.20000         osd.8                        up  1.00000          1.00000 \r\n 10  2.20000         osd.10                       up  1.00000          1.00000 \r\n 12  2.20000         osd.12                       up  1.00000          1.00000 
\r\n 13  2.20000         osd.13                       up  1.00000          1.00000 \r\n[...]\r\n<\/pre>\n","protected":false},"excerpt":{"rendered":"<p>I had some free slots in two of my ceph nodes and I used them to set up a new SSD-only pool. Because the SSD OSDs share their servers with an existing SATA pool, some additional steps are needed. There are some good posts out there that explain how to set up [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[156],"tags":[121,184],"_links":{"self":[{"href":"https:\/\/elkano.org\/blog\/wp-json\/wp\/v2\/posts\/1302"}],"collection":[{"href":"https:\/\/elkano.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/elkano.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/elkano.org\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/elkano.org\/blog\/wp-json\/wp\/v2\/comments?post=1302"}],"version-history":[{"count":6,"href":"https:\/\/elkano.org\/blog\/wp-json\/wp\/v2\/posts\/1302\/revisions"}],"predecessor-version":[{"id":1308,"href":"https:\/\/elkano.org\/blog\/wp-json\/wp\/v2\/posts\/1302\/revisions\/1308"}],"wp:attachment":[{"href":"https:\/\/elkano.org\/blog\/wp-json\/wp\/v2\/media?parent=1302"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/elkano.org\/blog\/wp-json\/wp\/v2\/categories?post=1302"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/elkano.org\/blog\/wp-json\/wp\/v2\/tags?post=1302"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}