The following steps are performed on the master node (or you could choose to use a separate server).

1. Install the EPEL repo and then Ansible

You may be able to skip this step as *Ansible may already be
available on your master*. We now ship Ansible packages in the
`rhel-7-server-*` subscriber channels. You can run a quick `yum
info ansible` to check. Here's how to enable EPEL if it is still
required for you:

```
yum -y install http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sed -i -e "s/^enabled=1/enabled=0/" /etc/yum.repos.d/epel.repo
```
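If `yum info ansible` shows that Ansible is not available from your subscribed channels, you can pull it from the EPEL repo enabled above (it is left disabled by default) with a one-off enable. This is a sketch rather than a step from the original text:

```
# Enable EPEL just for this transaction, since the repo is disabled by default
yum -y --enablerepo=epel install ansible
```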
In the `[nodes]` section of your inventory file, give each host an `openshift_node_labels` entry, adjusting the hostnames according to your DNS environment. The labels used throughout this guide look like:

```
[nodes]
ae-master.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
ae-node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
ae-node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
```

**Users in AWS** may wish (or need) to add the `openshift_hostname` and
`openshift_public_hostname` variables, set to each instance's Public DNS
name from Amazon, for every host in the `[nodes]` section.
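As an illustration only -- the internal and public DNS names below are invented -- such an entry might look like:

```
ae-node1.example.com openshift_hostname=ip-172-31-5-11.ec2.internal openshift_public_hostname=ec2-52-6-179-240.compute-1.amazonaws.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
```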


For now, do not worry much about the information after
`openshift_node_labels=`, but do not omit it entirely.


### Run the installer (on the master)

1. Run Ansible to set up the cluster
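The exact command is elided in this excerpt. Assuming the `openshift-ansible` playbooks were installed from the RPM and your inventory is the default `/etc/ansible/hosts` (both assumptions -- paths vary by version and install method), the invocation looks roughly like:

```
ansible-playbook -i /etc/ansible/hosts /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
```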

1. Run `oc get nodes --show-labels` (your cluster should be running!)
```
oc get nodes --show-labels
```

You should see something like:
```
-----------------------------------------------------------------------------------------------------------------
NAME STATUS AGE LABELS
ae-master.example.com Ready 1h kubernetes.io/hostname=ae-master.example.com,region=infra,zone=default
ae-node1.example.com Ready 1h kubernetes.io/hostname=ae-node1.example.com,region=primary,zone=east
ae-node2.example.com Ready 1h kubernetes.io/hostname=ae-node2.example.com,region=primary,zone=west
-----------------------------------------------------------------------------------------------------------------
```


## Launch your very first pod

The assignments of "regions" and "zones" at the node level are handled by labels
on the nodes. You can look at how the labels were implemented by doing:

# oc get nodes --show-labels
-----------------------------------------------------------------------------------------------------------------
NAME STATUS AGE LABELS
ae-master.example.com Ready 1h kubernetes.io/hostname=ae-master.example.com,region=infra,zone=default
ae-node1.example.com Ready 1h kubernetes.io/hostname=ae-node1.example.com,region=primary,zone=east
ae-node2.example.com Ready 1h kubernetes.io/hostname=ae-node2.example.com,region=primary,zone=west
-----------------------------------------------------------------------------------------------------------------
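If you ever need to adjust these assignments after installation, labels can be changed directly on a node. A brief sketch (not a step from the original text; `--overwrite` is required when the label already exists):

```
oc label node ae-node2.example.com zone=west --overwrite
```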


At this point we have a running AE environment across three hosts, with
one master and three nodes, divided up into two regions -- "*infra*structure"
and "primary".
The following commands will create user accounts for two non-privileged users of AE, *joe* and *alice*:
useradd joe
useradd alice

To start, we will need the `htpasswd` binary. Including an
`htpasswd_auth` provider in the `openshift_master_identity_providers`
parameter in your inventory file will install this binary by default
(more notes below). If you need to install it manually:

yum -y install httpd-tools
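The steps that populate the password file are not shown in this excerpt. Assuming the provider's `filename` of `/etc/origin/master/htpasswd` (the path used later in this guide), a typical sequence would be:

```
htpasswd -c /etc/origin/master/htpasswd joe    # -c creates the file; you are prompted for a password
htpasswd /etc/origin/master/htpasswd alice     # subsequent users are appended to the existing file
```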

More information on these configuration settings (and other identity providers) can be found here:

http://docs.openshift.org/latest/admin_guide/configuring_authentication.html#HTPasswdPasswordIdentityProvider


Configuring `htpasswd_auth` would have been handled automatically for
*joe* and *alice* if we had added values like the following to the
`[OSEv3:vars]` section of our inventory file:


[OSEv3:vars]

openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_htpasswd_users={'joe': '$apr1$TbzjxAFi$P7zyfMd6CrVxZweYlJIYh0', 'alice': '$apr1$V3BkGf/h$sgY5w9aJ6mHDHozSnwGL9.'}

This effectively handles running the steps described above for you.
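The values in `openshift_master_htpasswd_users` are ordinary `htpasswd` hashes. If you would rather generate your own than reuse the ones above, the `-n` and `-b` options print a `user:hash` pair to stdout without touching any file (the password here is only an example):

```
htpasswd -nb joe redhat
# prints something like: joe:$apr1$<salt>$<hash>
```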

### A Project for Everything
Atomic Enterprise (AE) has a concept of "projects" to contain a number of
different resources.
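The project-creation step itself is elided from this excerpt. With the 3.x admin tooling it would look roughly like the following, using the `demo` project name that shows up later in the container names (the display name is invented):

```
oadm new-project demo --display-name="Demo" --admin=joe
```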
Since we have taken the time to create the *joe* user as well as a project for
him, we can log into a terminal as *joe* and then set up the command line
tooling.


**Warning: possible bug** - As of the Atomic 3.2 release line, the
**master** CA cert file (`/etc/origin/master/ca.crt`) may *not be
readable* by non-root users, because the parent directory is installed
with `0700` permissions. You may need to run `chmod +rx
/etc/origin/master/` (or copy the cert elsewhere) to enable `joe` to
log in successfully.

Open a terminal as `joe`:

# su - joe
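The actual login commands are elided here. Assuming the API listens on the default port 8443 of `ae-master.example.com` and that *joe* uses the master CA cert mentioned in the warning above (both assumptions), logging in would look roughly like:

```
oc login https://ae-master.example.com:8443 -u joe \
    --certificate-authority=/etc/origin/master/ca.crt
```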
On the node where the pod is running, `docker ps` shows the `hello-atomic` container alongside several other `ose-pod` containers:

```
# docker ps
abd9061cf2fd atomicenterprise/hello-atomic "/hello-atomic" 19 minutes ago Up 19 minutes k8s_hello-atomic.65804d4d_hello-atomic_demo_71c467dc-4db3-11e5-a843-fa163e472414_96731d91
e8af42b67175 aos3/aos-pod:v3.0.1.100 "/pod" 19 minutes ago Up 19 minutes k8s_POD.2d6df2fa_hello-atomic_demo_71c467dc-4db3-11e5-a843-fa163e472414_b47f195f
```

If you think back to the simple pod we created earlier, it carried a "label". Now, let's look at a *service* definition:

```
$ cat hello-service.json
{
"kind": "Service",
"apiVersion": "v1",
Expand Down Expand Up @@ -1275,6 +1302,10 @@ Make sure you remember where you put this PEM file.
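The remainder of the definition (its `metadata` and the `spec` with a selector and ports) is not included in this excerpt. Creating the service from the file and checking the result would typically look like the following, though the exact output depends on your version:

```
oc create -f hello-service.json
oc get services
```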

### Creating the Router

The `openshift-ansible` playbooks automatically create a router
component for new installations. Manual router creation steps are
still described below for reference.
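To check whether the installer already created one for you (assuming the router was deployed into the `default` project, its usual location), something like this works:

```
oc get pods -n default | grep router
```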

The router is the ingress point for all traffic destined for AE
services. It currently supports only HTTP(S) traffic (and "any"
TLS-enabled traffic via SNI). While it is called a "router", it is essentially a
proxy.

Ensure that port 1936 is accessible and visit:

http://admin:cEVu2hUb@ae-master.example.com:1936

to view your router stats.

The router Docker container is on the master. Log in there as `root`.
We can use `oc exec` to get a bash interactive shell inside the running
router container. The following command will do that for us:

oc exec -it $(oc get pods | grep router | awk '{print $1}' | head -n 1) /bin/bash

> **Note:** Earlier versions required a `-p` option before the `$()` subshell. Including `-p` in newer releases will result in a `W0714` deprecation warning.

You are now in a bash session *inside* the container running the router.

Any lookup in the wildcard space resolves to the same address; for example, `dig foo.cloudapps.example.com` returns:
;; ANSWER SECTION:
foo.cloudapps.example.com 0 IN A 192.168.133.2
...

[//]: # (TODO: LDAP basic auth service requires STI - find a way around it)

# APPENDIX - Import/Export of Docker Images (Disconnected Use)
The `openshift-ansible` tooling includes playbooks that can configure the entire AWS environment, too. An inventory file for an AWS-based installation looks like this:
[OSEv3:children]
masters
nodes

[OSEv3:vars]
deployment_type=enterprise

# The default user for the image used
ansible_ssh_user=ec2-user

# host group for masters
# The entries should be either the publicly accessible dns name for the host
# or the publicly accessible IP address of the host.
[masters]
ec2-52-6-179-239.compute-1.amazonaws.com

# host group for nodes
[nodes]
ec2-52-6-179-239.compute-1.amazonaws.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}" # The master
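To drive the installation with this inventory, point the playbook at the file explicitly; the inventory path below is only an example location, and the playbook path assumes the RPM install used earlier:

```
ansible-playbook -i ~/aws-hosts /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
```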
On Mac OS X and Linux you will need to make the file executable

chmod +x oc