Bringing in changes made by DSO members. #4

New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Open
wants to merge 6 commits into
base: master
Choose a base branch
from
Open
Show file tree
Hide file tree
Changes from all commits
Commits
File filter

Filter by extension

Filter by extension

Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
11 changes: 10 additions & 1 deletion .gitignore
@@ -1 +1,10 @@
-s3_iam_role_complete.json
+### Intellij ###
+# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio and Webstorm
+# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839
+
+## Directory-based project format:
+.idea/
+*.iml
+### Intellij ###
+
+s3_iam_role_complete.json
48 changes: 26 additions & 22 deletions Dockerfile
@@ -1,33 +1,37 @@
-FROM upmcenterprises/fluentd:v0.14-debian
-MAINTAINER Steve Sloka <[email protected]>
-
-COPY /start.sh /home/fluent/start.sh
+FROM fluent/fluentd:v0.12-debian
+MAINTAINER Ken Howard <[email protected]>

ENV PATH /home/fluent/.gem/ruby/2.3.0/bin:$PATH

# !! root access required to read container logs !!
USER root

-RUN buildDeps="sudo make gcc g++ libc-dev ruby-dev" \
- && apt-get update \
- && apt-get install -y --no-install-recommends $buildDeps \
- && sudo -u fluent gem install \
-        fluent-plugin-elasticsearch \
-        fluent-plugin-s3 \
-        fluent-plugin-systemd \
-        fluent-plugin-kubernetes_metadata_filter \
-        fluent-plugin-rewrite-tag-filter \
- && sudo -u fluent gem sources --clear-all \
- && SUDO_FORCE_REMOVE=yes \
-    apt-get purge -y --auto-remove \
-    -o APT::AutoRemove::RecommendsImportant=false \
-    $buildDeps \
- && rm -rf /var/lib/apt/lists/* \
-    /home/fluent/.gem/ruby/2.3.0/cache/*.gem
+RUN buildDeps="sudo make gcc g++ libc-dev ruby-dev libffi-dev" \
+ && apt-get update \
+ && apt-get install -y --no-install-recommends $buildDeps \
+ && echo 'gem: --no-document' >> /etc/gemrc \
+ && gem install \
+        ffi \
+        fluent-plugin-cloudwatch-logs \
+        fluent-plugin-elasticsearch:1.10.0 \
+        fluent-plugin-kubernetes_metadata_filter \
+        fluent-plugin-record-reformer \
+        fluent-plugin-rewrite-tag-filter:1.5.6 \
+        fluent-plugin-s3:'~> 0.8' \
+        fluent-plugin-secure-forward \
+        fluent-plugin-systemd:0.0.8 \
+ && gem sources --clear-all \
+ && SUDO_FORCE_REMOVE=yes \
+    apt-get purge -y --auto-remove \
+    -o APT::AutoRemove::RecommendsImportant=false $buildDeps \
+ && rm -rf \
+    /tmp/* \
+    /usr/lib/ruby/gems/*/cache/*.gem \
+    /var/lib/apt/lists/* \
+    /var/tmp/*

# Copy plugins
COPY plugins /fluentd/plugins/
+COPY /start.sh /home/fluent/start.sh


-ENTRYPOINT ["sh", "start.sh"]
+ENTRYPOINT ["/bin/sh", "/home/fluent/start.sh"]
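
To sanity-check an image built from this Dockerfile, build it and list the installed plugins (a sketch; the `kubernetes-fluentd:dev` tag is illustrative):

```
$ docker build -t kubernetes-fluentd:dev .
# Override the entrypoint to run `gem list` inside the image
$ docker run --rm --entrypoint gem kubernetes-fluentd:dev list | grep fluent-plugin
```
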
94 changes: 63 additions & 31 deletions README.md
@@ -1,58 +1,89 @@
# kubernetes-fluentd

-Kubernetes Logger is designed to take all of the logs from your containers and system and forward them to a central location. Today this can be a S3 bucket in AWS or a ElasticSearch cluster (or both). The logger is intended to be a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) which will run a pod on each Node in your cluster. Fluentd is the forwarder agent which has a configuration file to configure the different output destinations for your logs.
-
-Currently the sample container has both the S3 plugin as well as the Elasticsearch plugin installed by default. Please customize the `fluent.conf` to your satisfaction to enable or disable one or the other.
+Kubernetes Logger is designed to take all of the logs from your containers and system and forward them to a central location. Today this can be an S3 bucket in AWS, an Elasticsearch cluster, or both. The logger is intended to run as a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/), which runs a pod on each node in your cluster. Fluentd is the forwarding agent; its configuration file defines the output destinations for your logs.
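
As an illustration, a minimal S3 output block in `fluent.conf` might look like the following. This is a sketch based on common fluent-plugin-s3 options; the match tag, bucket, region, and paths are assumptions, not taken from this repository:

```
<match kubernetes.**>
  # Ship matching records to S3 (fluent-plugin-s3); all values below are placeholders
  @type s3
  s3_bucket test_bucket
  s3_region us-east-1
  path k8s-logs/
  buffer_path /var/log/fluentd-s3.buffer
  time_slice_wait 10m
</match>
```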

## Deployment

-Deployment to a Kubernetes cluster is maintained by a DaemonSet. The DaemonSet requires a ConfigMap to pass parameters to the DaemonSet.
+Deployment to a Kubernetes cluster is managed by a DaemonSet, which requires a ConfigMap to pass in its parameters.

-1. Create bucket in S3 changing the value of 'steve_bucket' to your unique bucket name:
+01. Create a bucket in S3, changing the value of 'test_bucket' to your unique bucket name:
```
-$ aws s3api create-bucket --bucket steve_bucket --region us-east-1
+$ aws s3api create-bucket --profile <My_Profile> --region us-east-1 --bucket test_bucket
```
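
Note: S3 bucket names generally must be DNS-compliant, so a hyphenated name such as `test-bucket` is safer than one containing underscores. Also, outside us-east-1 the call needs a location constraint; a sketch with an assumed region:

```
$ aws s3api create-bucket --profile <My_Profile> --region eu-west-1 --bucket test-bucket \
    --create-bucket-configuration LocationConstraint=eu-west-1
```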

-2. Edit the file 's3_iam_role.json' and update the value '{YOUR_BUCKET}' with the value of the bucket created in the previous step. In the following example my bucket name is 'steve_bucket':
+02. Edit the file 's3_iam_role.json', replacing the placeholder '{YOUR_BUCKET}' with the name of the bucket created in the previous step. In the following example the bucket name is 'test_bucket':
```
-$ sed 's/{YOUR_BUCKET}/steve_bucket/g' s3_iam_role.json > s3_iam_role_complete.json
+$ sed 's/{YOUR_BUCKET}/test_bucket/g' s3_iam_role.json > s3_iam_role_complete.json
```
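
To confirm the substitution produced valid JSON before handing it to IAM, one quick check (assuming Python is available):

```
$ python -m json.tool s3_iam_role_complete.json
```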

-3. Create an IAM policy to allow access to the S3 bucket:
+03. Create an IAM policy to allow access to the S3 bucket:
```
-$ aws iam create-policy --policy-name kubernetes-fluentd-s3-logging --policy-document file://s3_iam_role_complete.json
+$ aws iam create-policy --profile <My_Profile> --policy-name kubernetes-fluentd-s3-logging --policy-document file://s3_iam_role_complete.json
```
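
If you prefer to capture the new policy's ARN directly for the next step, the same call can be wrapped like this (a sketch; the shell variable name is illustrative):

```
$ POLICY_ARN=$(aws iam create-policy --profile <My_Profile> \
    --policy-name kubernetes-fluentd-s3-logging \
    --policy-document file://s3_iam_role_complete.json \
    --query Policy.Arn --output text)
```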

-4. Attach the policy to the IAM Role for the Kubernetes workers:
+04. Attach the policy to the IAM Role for the Kubernetes workers:
```
# Find RoleName for the worker role
-$ aws iam list-roles | grep -i iamroleworker
+$ aws iam list-roles --profile <My_Profile> | grep -i iamroleworker

# Attach policy
-$ aws iam attach-role-policy --policy-arn <ARN_of_policy_created_in_previous_step> --role-name <RoleName>
-Create the ConfigMap specifying the correct values for your environment:
-$ kubectl create configmap fluentd-conf --from-literal=AWS_S3_BUCKET_NAME=<!YOUR_BUCKET_NAME!> --from-literal=AWS_S3_LOGS_BUCKET_PREFIX=<!YOUR_BUCKET_PREFIX!> --from-literal=AWS_S3_LOGS_BUCKET_PREFIX_KUBESYSTEM=<!YOUR_BUCKET_PREFIX!> --from-literal=AWS_S3_LOGS_BUCKET_REGION=<!YOUR_BUCKET_REGION!> --from-file=fluent_s3.conf -n kube-system
+$ aws iam attach-role-policy --profile <My_Profile> --policy-arn <ARN_of_policy_created_in_previous_step> --role-name <RoleName>
```
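
To confirm the policy is attached (a quick check):

```
$ aws iam list-attached-role-policies --profile <My_Profile> --role-name <RoleName>
```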

+05. Create the ConfigMap, specifying the correct values for your environment:
+
+We need a few values to create the ConfigMap; they are described below, and an environment-variable-based sketch follows the command.
+
+| Variable | Description |
+| ------------------------------------- | ----------- |
+| AWS_S3_BUCKET_NAME | Name of the S3 bucket (e.g. k8s-logs) |
+| AWS_S3_LOGS_BUCKET_PREFIX | The prefix to place the application logs under (e.g. k8s-logs/neutrino-kamioka-stg-logs/) |
+| AWS_S3_LOGS_BUCKET_PREFIX_KUBESYSTEM | The prefix to place the system logs under (e.g. k8s-logs/neutrino-kamioka-stg-kubesystem-logs/) |
+| AWS_S3_LOGS_BUCKET_REGION | AWS region (e.g. us-east-1) |
+
+```
+$ kubectl -n kube-system create configmap fluentd-conf \
+    --from-literal=AWS_S3_BUCKET_NAME=<YOUR_BUCKET_NAME> \
+    --from-literal=AWS_S3_LOGS_BUCKET_PREFIX=<YOUR_BUCKET_PREFIX> \
+    --from-literal=AWS_S3_LOGS_BUCKET_PREFIX_KUBESYSTEM=<YOUR_BUCKET_PREFIX> \
+    --from-literal=AWS_S3_LOGS_BUCKET_REGION=<YOUR_BUCKET_REGION> \
+    --from-file=./conf/fluent_s3.conf
+```
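
For reference, a sketch of the same ConfigMap creation driven by environment variables (the example values are illustrative, not taken from this repository):

```
$ export AWS_S3_BUCKET_NAME=k8s-logs
$ export AWS_S3_LOGS_BUCKET_PREFIX=k8s-logs/app-logs/
$ export AWS_S3_LOGS_BUCKET_PREFIX_KUBESYSTEM=k8s-logs/kubesystem-logs/
$ export AWS_S3_LOGS_BUCKET_REGION=us-east-1
$ kubectl -n kube-system create configmap fluentd-conf \
    --from-literal=AWS_S3_BUCKET_NAME=$AWS_S3_BUCKET_NAME \
    --from-literal=AWS_S3_LOGS_BUCKET_PREFIX=$AWS_S3_LOGS_BUCKET_PREFIX \
    --from-literal=AWS_S3_LOGS_BUCKET_PREFIX_KUBESYSTEM=$AWS_S3_LOGS_BUCKET_PREFIX_KUBESYSTEM \
    --from-literal=AWS_S3_LOGS_BUCKET_REGION=$AWS_S3_LOGS_BUCKET_REGION \
    --from-file=./conf/fluent_s3.conf
```

The result can be checked with `kubectl -n kube-system get configmap fluentd-conf -o yaml`.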

-
-| Variable | Description
-| ------------- |-------------|
-| AWS_S3_BUCKET_NAME | Name of the S3 bucket
-| AWS_S3_LOGS_BUCKET_PREFIX | The prefix to place the application logs into (e.g. k8s-logs/logs/)
-| AWS_S3_LOGS_BUCKET_PREFIX_KUBESYSTEM | The prefix to place the system logs into (e.g. k8s-logs/kubesystem-logs/)
-| AWS_S3_LOGS_BUCKET_REGION | AWS Region
+06. Deploy the DaemonSet:
+```
+$ kubectl -n kube-system create -f ./k8s/fluentd_s3.yaml
+```
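
To confirm the DaemonSet has scheduled a pod on each node (a sketch; exact resource names depend on the manifest):

```
$ kubectl -n kube-system get daemonsets
$ kubectl -n kube-system get pods -o wide | grep fluentd
```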

+07. Verify that logs are being written to S3:
+```
+$ aws s3api list-objects --profile <My_Profile> --bucket test_bucket
+```
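
Fluentd flushes its S3 buffer on a time slice, so objects may take several minutes to appear. To narrow the listing to one prefix (a sketch):

```
$ aws s3api list-objects --profile <My_Profile> --bucket test_bucket \
    --prefix <YOUR_BUCKET_PREFIX> --query 'Contents[].[Key,Size]' --output table
```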

## Undeployment

Help out by documenting me!!!

-5. Deploy the daemonset:
+01. Remove the DaemonSet:
```
-$ kubectl create -f https://raw.githubusercontent.com/upmc-enterprises/kubernetes-fluentd/master/fluentd_s3.yaml
+$ kubectl -n kube-system delete -f ./k8s/fluentd_s3.yaml
```

-6. Verify logs are writing to S3:
+02. Delete the ConfigMap:
```
-$ aws s3api list-objects --bucket steve_bucket
+$ kubectl -n kube-system delete configmap fluentd-conf
```

-# Deploy ELK Stack
+## Deploy ELK Stack

To get started with a simple ELK stack, deploy the following:

@@ -62,8 +93,9 @@ kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master
kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/kibana-controller.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/kibana-service.yaml
```
-_Source: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch_
+Source: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch
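
Once the stack is up, Kibana can be reached through the API server proxy (a sketch; the `kibana-logging` service name comes from the upstream addon manifests):

```
$ kubectl proxy
# then browse to http://localhost:8001/api/v1/namespaces/kube-system/services/kibana-logging/proxy
```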


-# About
+## About

Built by UPMC Enterprises in Pittsburgh, PA. http://enterprises.upmc.com/
File renamed without changes.