Integrating RADOS Gateway with OpenStack Keystone on CentOS 7

DPBD90
6 min read · Jan 22, 2020

It is possible to integrate the Ceph Object Gateway with Keystone, the OpenStack identity service. This configures the gateway to use Keystone as the authority for users. A user that Keystone authorizes to access the gateway is created automatically on the Ceph Object Gateway if it did not exist beforehand, and a token that Keystone validates is considered valid by the gateway.
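Under the hood, the gateway validates a client's token by calling Keystone's token API (GET /v3/auth/tokens), sending its own service token as X-Auth-Token and the token being checked as X-Subject-Token. A minimal Python sketch of the request the gateway builds (the host name and token strings are placeholders):

```python
# Sketch of the Keystone v3 token-validation request RGW issues.
# The host name and token strings below are placeholders, not real values.
def build_token_validation_request(keystone_host, service_token, user_token):
    """Return the URL and headers for a GET /v3/auth/tokens call."""
    url = "http://%s:5000/v3/auth/tokens" % keystone_host
    headers = {
        "X-Auth-Token": service_token,    # the gateway's own service token
        "X-Subject-Token": user_token,    # the client token being validated
    }
    return url, headers

url, headers = build_token_validation_request("controller", "svc-token", "user-token")
print(url)  # http://controller:5000/v3/auth/tokens
```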

Install Keystone

Before you install and configure the Identity service, you must create a database.

Prerequisites

  1. Use the database access client to connect to the database server as the root user:
$ mysql -u root -p

2. Create the keystone database:

MariaDB [(none)]> CREATE DATABASE keystone;

3. Grant proper access to the keystone database:

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';

4. Exit the database access client.

Install and configure components

  1. Run the following command to install the packages:
# yum install openstack-keystone httpd mod_wsgi

2. Edit the /etc/keystone/keystone.conf file and complete the following actions:

In the [database] section, configure database access:

[database]
# ...
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

3. In the [token] section, configure the Fernet token provider:

[token]
# ...
provider = fernet

4. Populate the Identity service database:

# su -s /bin/sh -c "keystone-manage db_sync" keystone

5. Initialize Fernet key repositories:

# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

6. Bootstrap the Identity service:

# keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
--bootstrap-admin-url http://<keystone_host>:5000/v3/ \
--bootstrap-internal-url http://<keystone_host>:5000/v3/ \
--bootstrap-public-url http://<keystone_host>:5000/v3/ \
--bootstrap-region-id RegionOne

Configure the Apache HTTP server

1. Edit the /etc/httpd/conf/httpd.conf file and configure the ServerName option to reference the controller node:

ServerName controller

2. Create a link to the /usr/share/keystone/wsgi-keystone.conf file:

# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

Finalize the installation

  1. Start the Apache HTTP service and configure it to start when the system boots:
# systemctl enable httpd.service
# systemctl start httpd.service

2. Configure the administrative account

$ export OS_USERNAME=admin
$ export OS_PASSWORD=ADMIN_PASS
$ export OS_PROJECT_NAME=admin
$ export OS_USER_DOMAIN_NAME=Default
$ export OS_PROJECT_DOMAIN_NAME=Default
$ export OS_AUTH_URL=http://<keystone_host>:5000/v3
$ export OS_IDENTITY_API_VERSION=3
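The variables above map one-to-one onto the fields of a Keystone v3 password-authentication request. As a stdlib-only sketch, this is the shape of the JSON body the openstack client sends to /v3/auth/tokens (ADMIN_PASS remains a placeholder):

```python
import json

# Map the OS_* variables above onto a Keystone v3 password-auth body.
# The values mirror the exports above; ADMIN_PASS is a placeholder.
def v3_password_auth_body(username, password, project, user_domain, project_domain):
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {"user": {
                    "name": username,
                    "domain": {"name": user_domain},
                    "password": password,
                }},
            },
            "scope": {"project": {
                "name": project,
                "domain": {"name": project_domain},
            }},
        }
    }

body = v3_password_auth_body("admin", "ADMIN_PASS", "admin", "Default", "Default")
print(json.dumps(body, indent=2))
```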

Install Ceph with Ceph-deploy

This setup requires at least 3 nodes.

  1. On Red Hat Enterprise Linux 7, register the target machine with subscription-manager, verify your subscriptions, and enable the “Extras” repository for package dependencies. For example:
subscription-manager repos --enable=rhel-7-server-extras-rpms

2. Install and enable the Extra Packages for Enterprise Linux (EPEL) repository:

sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

3. Add the Ceph repository to your yum configuration at /etc/yum.repos.d/ceph.repo with the following command. The example below uses the Nautilus release; to pin a different stable release (e.g., luminous), change rpm-nautilus in the baseurl accordingly:

cat << EOM | sudo tee /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
EOM

4. You may need to install python-setuptools, which ceph-deploy requires:

sudo yum install python-setuptools

5. Update your repository and install ceph-deploy:

sudo yum update
sudo yum install ceph-deploy

Ceph Node Setup

Install NTP on all nodes to keep their clocks synchronized (Ceph monitors are sensitive to clock skew):

sudo yum install ntp ntpdate ntp-doc

CREATE A CEPH DEPLOY USER

The ceph-deploy utility must log in to a Ceph node as a user that has passwordless sudo privileges, because it needs to install software and configuration files without prompting for passwords.

Recent versions of ceph-deploy support a --username option so you can specify any user that has password-less sudo (including root, although this is NOT recommended). To use ceph-deploy --username {username}, the user you specify must have password-less SSH access to the Ceph node, as ceph-deploy will not prompt you for a password.

  1. Create a new user on each Ceph Node.
ssh user@ceph-server
sudo useradd -d /home/{username} -m {username}
sudo passwd {username}

2. For the new user you added to each Ceph node, ensure that the user has sudo privileges.

echo "{username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/{username}
sudo chmod 0440 /etc/sudoers.d/{username}

SELINUX

On CentOS and RHEL, SELinux is set to Enforcing by default. To streamline your installation, we recommend setting SELinux to Permissive or disabling it entirely, and ensuring that your installation and cluster are working properly before hardening your configuration. To set SELinux to Permissive, execute the following (this does not survive a reboot; to make it persistent, also set SELINUX=permissive in /etc/selinux/config):

sudo setenforce 0

Configure DNS and the hosts file

Add an entry for every node to /etc/hosts on each machine. Replace the addresses below with your nodes' actual IPs; each node needs its own distinct address:

172.16.0.1 controller
172.16.0.2 node1
172.16.0.3 node2
172.16.0.4 node3

Install Ceph Cluster

In this tutorial, we create a Ceph Storage Cluster with three Ceph Monitors and three Ceph OSD Daemons, one of each per node. Once the cluster reaches an active + clean state, it can be expanded with additional OSDs and monitors. For best results, create a directory on your admin node for maintaining the configuration files and keys that ceph-deploy generates for your cluster.

mkdir <ceph_meta_folder>
cd <ceph_meta_folder>

STARTING OVER

If at any point you run into trouble and you want to start over, execute the following to purge the Ceph packages, and erase all its data and configuration:

ceph-deploy purge {ceph-node} [{ceph-node}]
ceph-deploy purgedata {ceph-node} [{ceph-node}]
ceph-deploy forgetkeys
rm ceph.*

CREATE A CLUSTER

  1. Create the cluster.
ceph-deploy new node1 node2 node3

2. Install Ceph packages

ceph-deploy install node1 node2 node3

3. Deploy the initial monitor(s) and gather the keys:

ceph-deploy mon create-initial

4. Use ceph-deploy to copy the configuration file and admin key to your admin node and your Ceph Nodes, so that you can use the ceph CLI without specifying the monitor address and ceph.client.admin.keyring on every command.

ceph-deploy admin node1 node2 node3

5. Deploy a manager daemon. (Required only for luminous+ builds):

ceph-deploy mgr create node1

6. Add three OSDs. For the purposes of these instructions, we assume you have an unused disk in each node called /dev/vdb. Be sure that the device is not currently in use and does not contain any important data.

ceph-deploy osd create --data /dev/vdb node1
ceph-deploy osd create --data /dev/vdb node2
ceph-deploy osd create --data /dev/vdb node3

7. Check your cluster’s health.

ssh node1 sudo ceph -s
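For scripting the health check, `ceph -s` can also emit JSON (`ceph -s --format json`). A hedged sketch of inspecting the health field from that output; the sample document below is abbreviated and hand-written, not captured from a real cluster, and the field layout can vary between Ceph releases:

```python
import json

# Abbreviated, illustrative sample of `ceph -s --format json` output.
sample = '{"health": {"status": "HEALTH_OK"}, "osdmap": {"num_osds": 3, "num_up_osds": 3}}'

def cluster_ok(status_json):
    """Return True when health is HEALTH_OK and every OSD is up."""
    doc = json.loads(status_json)
    healthy = doc["health"]["status"] == "HEALTH_OK"
    all_osds_up = doc["osdmap"]["num_osds"] == doc["osdmap"]["num_up_osds"]
    return healthy and all_osds_up

print(cluster_ok(sample))  # True for the sample above
```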

ADD AN RGW INSTANCE

ceph-deploy rgw create node1

By default, the RGW instance listens on port 7480. This can be changed by editing ceph.conf on the node running the RGW as follows, then restarting the gateway:

[client]
rgw frontends = civetweb port=80

STORING/RETRIEVING OBJECT DATA

echo {Test-data} > testfile.txt
ceph osd pool create mytest
rados put {object-name} {file-path} --pool=mytest
rados put test-object-1 testfile.txt --pool=mytest

To verify that the object was stored, list the pool's contents and read the object back:

rados -p mytest ls
rados get test-object-1 fetched.txt --pool=mytest

Configure Ceph to authenticate with Keystone

  1. Create Keystone user and endpoint for Ceph
openstack user create --domain default --password-prompt swift
openstack role add --project service --user swift admin
openstack service create --name swift --description "OpenStack Object Storage" object-store
openstack endpoint create --region RegionOne object-store public http://<keystone_host>:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store internal http://<keystone_host>:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store admin http://<keystone_host>:8080/v1
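The AUTH_%\(project_id\)s suffix in the public and internal endpoints is a template: the backslashes only protect the parentheses from the shell, and Keystone's service catalog substitutes the authenticated project's ID when it hands the endpoint to a client. The substitution behaves exactly like Python's %-mapping formatting (host and project ID below are illustrative):

```python
# Keystone fills the project_id placeholder in the object-store endpoint.
# The host and project ID here are illustrative placeholders.
template = "http://<keystone_host>:8080/v1/AUTH_%(project_id)s"
url = template % {"project_id": "a1f38eb846904a768330aeb74b491591"}
print(url)  # http://<keystone_host>:8080/v1/AUTH_a1f38eb846904a768330aeb74b491591
```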

2. Configure the Rados Gateway to use Keystone for authentication

Edit /etc/ceph/ceph.conf and add the following settings:

[client.radosgw.gateway]
rgw_keystone_verify_ssl = false
rgw_keystone_api_version = 3
rgw_keystone_url = http://<keystone_host>:5000
rgw_keystone_admin_user = swift
rgw_keystone_admin_password = bigdata
rgw_keystone_admin_tenant = Default
rgw_keystone_admin_domain = Default
rgw_keystone_admin_project = service
rgw_s3_auth_use_keystone = true
#nss_db_path = /var/ceph/nss
rgw_swift_account_in_url = true

3. Restart the Rados Gateway

systemctl restart ceph-radosgw.target

Verify installation

  1. Generate new credentials
openstack ec2 credentials create
+------------+---------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+---------------------------------------------------------------------------------------------------------------------------------------------+
| access | d1f46cdb78964d718638dca3162c8bdf |
| links | {u'self': u'http://<keystone_host>:5000/v3/users/7ca305562a7846b7839bd78e2e7cfc53/credentials/OS-EC2/d1f46cdb78964d718638dca3162c8bdf'} |
| project_id | a1f38eb846904a768330aeb74b491591 |
| secret | 0c7571255d474df4ba5e282ef592ee08 |
| trust_id | None |
| user_id | 7ca305562a7846b7839bd78e2e7cfc53 |
+------------+---------------------------------------------------------------------------------------------------------------------------------------------+

2. Use the access and secret keys to authenticate with Ceph.

[centos@ceph1 centos]# aws2 configure
AWS Access Key ID [****************d673]: d1f46cdb78964d718638dca3162c8bdf
AWS Secret Access Key [****************6a69]: 0c7571255d474df4ba5e282ef592ee08
Default region name [Default]:
Default output format [None]:

3. Access RadosGW with provided credentials:

aws2 --endpoint=http://node1  s3 ls s3://
2020-01-19 01:30:16 admin
2020-01-17 15:41:37 data02
2020-01-17 15:52:35 data03
2020-01-20 14:28:52 data04

Fix the AWS Java SDK error

Applications that use the AWS SDK for Java may fail against the gateway with an error like:

com.amazonaws.services.s3.model.AmazonS3Exception: null (Service: Amazon S3; Status Code: 501; Error Code: NotImplemented; Request ID: tx00000000000000000000b-005e2556db-113e-default; S3 Extended Request ID: 113e-default-default), S3 Extended Request ID: 113e-default-default
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1712)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1367)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1113)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:770)

To fix this issue, we must override the signer with “S3SignerType” so the SDK falls back to Signature Version 2:

// Build a client configuration that forces the legacy S3 (v2) signer
ClientConfiguration clientConfig = new ClientConfiguration();
clientConfig.setProtocol(Protocol.HTTP);
clientConfig.setSocketTimeout(3000);
clientConfig.withSignerOverride("S3SignerType");
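The 501 NotImplemented response typically means the gateway build does not implement some Signature Version 4 feature the SDK uses by default; “S3SignerType” switches the SDK back to Signature Version 2. For intuition, here is a simplified Python sketch of v2 signing (stdlib only; it omits the canonicalized x-amz-* headers that the real algorithm also folds into the string to sign, and the key/resource values are illustrative):

```python
import base64
import hashlib
import hmac

# Simplified AWS Signature Version 2 ("S3SignerType") signing sketch.
# Real v2 signing also appends canonicalized x-amz-* headers; omitted here.
def sign_v2(secret_key, method, content_md5, content_type, date, resource):
    # v2 HMAC-SHA1-signs a newline-joined "string to sign", then base64-encodes it.
    string_to_sign = "\n".join([method, content_md5, content_type, date, resource])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Illustrative values, not real credentials.
sig = sign_v2("SECRET_KEY", "GET", "", "",
              "Mon, 20 Jan 2020 14:28:52 +0000", "/mytest/test-object-1")
print("Authorization: AWS ACCESS_KEY:" + sig)
```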

