Merged
30 changes: 30 additions & 0 deletions .github/workflows/ci-test.yml
@@ -40,6 +40,19 @@ jobs:
ports:
- 27017:27017

elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
ports:
- 9200:9200
options: >-
--health-cmd "curl -f http://localhost:9200/_cluster/health || exit 1"
--health-interval 10s
--health-timeout 5s
--health-retries 10
env:
discovery.type: single-node
ES_JAVA_OPTS: "-Xms512m -Xmx512m"

Comment on lines +43 to +55

🛠️ Refactor suggestion

Elasticsearch service definition: the health-check condition is too lax

The current --health-cmd only checks for an HTTP 200 response, so the check can pass even when the cluster is in a red state. That is likely to cause instability in the later steps (plugin installation, tests). Please use the wait_for_status=yellow option so the service is only considered healthy once the cluster is at least yellow.

-          --health-cmd "curl -f http://localhost:9200/_cluster/health || exit 1"
+          --health-cmd "curl -f 'http://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=50s' || exit 1"
+          --health-start-period 10s   # allow for initial boot time

This change alone can significantly reduce CI flakiness.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
ports:
- 9200:9200
options: >-
--health-cmd "curl -f http://localhost:9200/_cluster/health || exit 1"
--health-interval 10s
--health-timeout 5s
--health-retries 10
env:
discovery.type: single-node
ES_JAVA_OPTS: "-Xms512m -Xmx512m"
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
ports:
- 9200:9200
options: >-
--health-cmd "curl -f 'http://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=50s' || exit 1"
--health-start-period 10s # allow for initial boot time
--health-interval 10s
--health-timeout 5s
--health-retries 10
env:
discovery.type: single-node
ES_JAVA_OPTS: "-Xms512m -Xmx512m"
🤖 Prompt for AI Agents
In .github/workflows/ci-test.yml between lines 43 and 55, the Elasticsearch
health check command only verifies an HTTP 200 response, allowing the cluster to
be considered healthy even if its status is red. Update the --health-cmd to use
curl with the parameter wait_for_status=yellow to ensure the cluster is at least
yellow status before marking it healthy. This involves modifying the health
check URL to http://localhost:9200/_cluster/health?wait_for_status=yellow and
keeping the existing failure handling.
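The gating rule the review describes — treat the cluster as ready only at yellow or better — can be sketched as a tiny shell helper. `es_status_ok` is an illustrative name, not part of the workflow; in CI the same rule is delegated to Elasticsearch itself via the `?wait_for_status=yellow` query parameter:

```shell
# Accept only "green" or "yellow" cluster status; anything else
# (red, empty, garbage) means keep waiting.
es_status_ok() {
  case "$1" in
    green|yellow) return 0 ;;
    *) return 1 ;;
  esac
}

es_status_ok green && echo "ready"   # prints "ready"
```

With `wait_for_status=yellow`, Elasticsearch holds the request open until the status is reached or the timeout expires, so the caller never has to parse the JSON itself.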

permissions:
contents: write
checks: write
@@ -108,6 +121,23 @@ jobs:
sleep 3
done

- name: Elasticsearch check
run: |
until curl -sSf http://localhost:9200/_cluster/health; do
echo "Waiting for Elasticsearch..."
sleep 5
done


Comment on lines +124 to +131

🛠️ Refactor suggestion

Strengthen the Elasticsearch readiness script and fix the YAMLlint warning

  1. Unless this step also waits with wait_for_status=yellow, the cluster can still be red right before the plugin-install step.
  2. Line 130 has trailing spaces, which triggers a YAMLlint error.
-          until curl -sSf http://localhost:9200/_cluster/health; do  
-            echo "Waiting for Elasticsearch..."  
-            sleep 5  
-          done  
+          until curl -sf http://localhost:9200/_cluster/health?wait_for_status=yellow\&timeout=1s; do
+            echo "Waiting for Elasticsearch (cluster not yet YELLOW)..."
+            sleep 3
+          done

These changes fix both the stability issue and the static-analysis warning.

📝 Committable suggestion


Suggested change
- name: Elasticsearch check
run: |
until curl -sSf http://localhost:9200/_cluster/health; do
echo "Waiting for Elasticsearch..."
sleep 5
done
- name: Elasticsearch check
run: |
until curl -sf http://localhost:9200/_cluster/health?wait_for_status=yellow\&timeout=1s; do
echo "Waiting for Elasticsearch (cluster not yet YELLOW)..."
sleep 3
done
🧰 Tools
🪛 YAMLlint (1.37.1)

[error] 130-130: trailing spaces

(trailing-spaces)

🤖 Prompt for AI Agents
In .github/workflows/ci-test.yml around lines 124 to 131, update the
Elasticsearch readiness check script to wait until the cluster health status is
at least yellow by adding a query parameter like wait_for_status=yellow to the
curl command. Also, remove any trailing whitespace on line 130 to fix the
YAMLlint warning. This will improve the stability of the check and resolve the
linting error.
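The until-loop readiness checks above can also be given a retry cap, so a broken service fails the job instead of hanging until the runner's timeout. A minimal sketch — `wait_until` is a hypothetical helper, not part of the workflow:

```shell
# Retry a command up to $1 times, sleeping $2 seconds between attempts.
wait_until() {
  tries=$1; delay=$2; shift 2
  n=0
  until "$@"; do
    n=$((n + 1))
    if [ "$n" -ge "$tries" ]; then
      return 1
    fi
    sleep "$delay"
  done
}

# Usage in the workflow would look like (URL taken from the step above):
# wait_until 20 3 curl -sf "http://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=1s"
```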

# Install the Nori plugin
- name: Install Nori plugin
run: |
CONTAINER_ID=$(docker ps -q -f ancestor=docker.elastic.co/elasticsearch/elasticsearch:7.17.0)
echo "Elasticsearch container ID: $CONTAINER_ID"
docker exec $CONTAINER_ID bin/elasticsearch-plugin install analysis-nori --batch
docker restart $CONTAINER_ID
sleep 30
Comment on lines +135 to +139

🛠️ Refactor suggestion

The Nori plugin install is pinned to one version and is likely to break

The container is looked up with docker ps -q -f ancestor=docker.elastic.co/elasticsearch/elasticsearch:7.17.0, so if a version other than STACK_VERSION is in use, the ID lookup fails and the plugin is never installed.

-CONTAINER_ID=$(docker ps -q -f ancestor=docker.elastic.co/elasticsearch/elasticsearch:7.17.0)
+CONTAINER_ID=$(docker ps -q -f ancestor=docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION})

Also, adding error handling for the case where the ID comes back empty (set -euo pipefail plus an if statement) would improve robustness.

📝 Committable suggestion


Suggested change
CONTAINER_ID=$(docker ps -q -f ancestor=docker.elastic.co/elasticsearch/elasticsearch:7.17.0)
echo "Elasticsearch container ID: $CONTAINER_ID"
docker exec $CONTAINER_ID bin/elasticsearch-plugin install analysis-nori --batch
docker restart $CONTAINER_ID
sleep 30
CONTAINER_ID=$(docker ps -q -f ancestor=docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION})
echo "Elasticsearch container ID: $CONTAINER_ID"
docker exec $CONTAINER_ID bin/elasticsearch-plugin install analysis-nori --batch
docker restart $CONTAINER_ID
sleep 30
🤖 Prompt for AI Agents
In .github/workflows/ci-test.yml around lines 135 to 139, the Elasticsearch
container ID is fetched using a hardcoded version tag which can cause failures
if STACK_VERSION differs. Replace the fixed version with the dynamic
STACK_VERSION variable to ensure the correct container is targeted.
Additionally, add error handling to check if the container ID is empty before
proceeding, and enable strict shell options like set -euo pipefail to improve
script robustness.
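The empty-ID guard the review mentions can be sketched like this. `require_container` is a hypothetical helper, and `STACK_VERSION` is assumed to be provided by the workflow environment:

```shell
# Fail fast when the "docker ps" lookup returned nothing, instead of
# passing an empty ID to "docker exec".
require_container() {
  if [ -z "$1" ]; then
    echo "No Elasticsearch container found" >&2
    return 1
  fi
  echo "Elasticsearch container ID: $1"
}

# In the workflow step it would be used as (not executed here):
#   set -euo pipefail
#   CONTAINER_ID=$(docker ps -q -f "ancestor=docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}")
#   require_container "$CONTAINER_ID"
```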


# 2. Build
- name: Build with Gradle Wrapper
run: ./gradlew clean build
30 changes: 29 additions & 1 deletion .github/workflows/dev-ci-cd.yml
@@ -36,6 +36,19 @@ jobs:
ports:
- 27017:27017

elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
ports:
- 9200:9200
options: >-
--health-cmd "curl -f http://localhost:9200/_cluster/health || exit 1"
--health-interval 10s
--health-timeout 5s
--health-retries 10
env:
discovery.type: single-node
ES_JAVA_OPTS: "-Xms512m -Xmx512m"

permissions:
contents: write
checks: write
@@ -64,7 +77,7 @@ jobs:
mkdir -p src/test/resources
echo "${{ secrets.APPLICATION_TEST_YML }}" > ./src/test/resources/application-test.yml

# Check
# 3-1. Check
- name: MySQL check
run: |
until nc -z localhost 3306; do
@@ -87,6 +100,21 @@ jobs:
sleep 3
done

- name: Elasticsearch check
run: |
until curl -sSf http://localhost:9200/_cluster/health; do
echo "Waiting for Elasticsearch..."
sleep 5
done

# 3-2. Install the Nori plugin
- name: Install Nori plugin
run: |
CONTAINER_ID=$(docker ps -q -f ancestor=docker.elastic.co/elasticsearch/elasticsearch:7.17.0)
echo "Elasticsearch container ID: $CONTAINER_ID"
docker exec $CONTAINER_ID bin/elasticsearch-plugin install analysis-nori --batch
docker restart $CONTAINER_ID
sleep 30

Comment on lines +110 to 118

🛠️ Refactor suggestion

Nori plugin install step – fix the version mismatch and improve the container lookup

The container is looked up with docker ps -q -f ancestor=...:7.17.0, so if the image version used above differs, the plugin-install step fails.

-CONTAINER_ID=$(docker ps -q -f ancestor=docker.elastic.co/elasticsearch/elasticsearch:7.17.0)
+CONTAINER_ID=$(docker ps -q -f name=elasticsearch)

Alternatively, reuse the same ${STACK_VERSION} variable.

The install-plugin-then-restart flow adds CI time and instability.
Consider a custom Dockerfile that bakes the plugin in, or automating installation with the ELASTICSEARCH_PLUGINS environment variable (supported in 8.x).

📝 Committable suggestion


Suggested change
# 3-2. Install the Nori plugin
- name: Install Nori plugin
run: |
CONTAINER_ID=$(docker ps -q -f ancestor=docker.elastic.co/elasticsearch/elasticsearch:7.17.0)
echo "Elasticsearch container ID: $CONTAINER_ID"
docker exec $CONTAINER_ID bin/elasticsearch-plugin install analysis-nori --batch
docker restart $CONTAINER_ID
sleep 30
# 3-2. Install the Nori plugin
- name: Install Nori plugin
run: |
CONTAINER_ID=$(docker ps -q -f name=elasticsearch)
echo "Elasticsearch container ID: $CONTAINER_ID"
docker exec $CONTAINER_ID bin/elasticsearch-plugin install analysis-nori --batch
docker restart $CONTAINER_ID
sleep 30
🤖 Prompt for AI Agents
In .github/workflows/dev-ci-cd.yml around lines 110 to 118, the Nori plugin
installation step uses a hardcoded Elasticsearch image version which may not
match the version used elsewhere, causing failures. Replace the hardcoded
version with the shared ${STACK_VERSION} variable to ensure consistency.
Additionally, avoid installing the plugin at runtime and restarting the
container as it increases CI time and instability; instead, modify the
Dockerfile to include the plugin during build or use the ELASTICSEARCH_PLUGINS
environment variable for automatic plugin installation if using Elasticsearch
8.x.

# 4. gradle 환경 설치
- name: Grant Gradle Wrapper permissions
3 changes: 3 additions & 0 deletions build.gradle
@@ -152,6 +152,9 @@ dependencies {
// AOP
implementation 'org.springframework.boot:spring-boot-starter-aop'

// ELK
implementation 'org.springframework.boot:spring-boot-starter-data-elasticsearch'

// Test-related
annotationProcessor 'org.projectlombok:lombok'
testImplementation 'org.springframework.boot:spring-boot-starter-test'
235 changes: 229 additions & 6 deletions compose/docker-compose.yml
@@ -1,4 +1,79 @@
volumes:
certs:
driver: local
esdata01:
driver: local
kibanadata:
driver: local
metricbeatdata01:
driver: local
filebeatdata01:
driver: local
logstashdata01:
driver: local

networks:
backend-bridge:
driver: bridge

services:
setup:
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
container_name: zipte-setup
volumes:
- certs:/usr/share/elasticsearch/config/certs
user: "0"
command: >
bash -c '
if [ x${ELASTIC_PASSWORD} == x ]; then
echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
exit 1;
elif [ x${KIBANA_PASSWORD} == x ]; then
echo "Set the KIBANA_PASSWORD environment variable in the .env file";
exit 1;
fi;
if [ ! -f config/certs/ca.zip ]; then
echo "Creating CA";
bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
unzip config/certs/ca.zip -d config/certs;
fi;
if [ ! -f config/certs/certs.zip ]; then
echo "Creating certs";
echo -ne \
"instances:\n"\
" - name: es01\n"\
" dns:\n"\
" - es01\n"\
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
" - name: kibana\n"\
" dns:\n"\
" - kibana\n"\
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
> config/certs/instances.yml;
bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
unzip config/certs/certs.zip -d config/certs;
fi;
echo "Setting file permissions"
chown -R root:root config/certs;
find . -type d -exec chmod 750 \{\} \;;
find . -type f -exec chmod 640 \{\} \;;
echo "Waiting for Elasticsearch availability";
until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
echo "Setting kibana_system password";
until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
echo "All done!";
'
healthcheck:
test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
interval: 1s
timeout: 5s
retries: 120
networks:
- backend-bridge
Comment on lines +20 to +76

🛠️ Refactor suggestion

The setup service's security configuration is solid, but there is room for improvement.

The SSL certificate generation and user-password setup logic is well implemented, but a few security and operational points could be improved.

  1. Certificate renewal: there is no handling for certificate expiry.
  2. Stronger password validation: only empty values are checked; strength validation is needed.
  3. Timeout settings: a 120-second retry window may be too long.
     healthcheck:
       test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
-      interval: 1s
+      interval: 5s
       timeout: 5s
-      retries: 120
+      retries: 60
📝 Committable suggestion


Suggested change
setup:
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
container_name: zipte-setup
volumes:
- certs:/usr/share/elasticsearch/config/certs
user: "0"
command: >
bash -c '
if [ x${ELASTIC_PASSWORD} == x ]; then
echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
exit 1;
elif [ x${KIBANA_PASSWORD} == x ]; then
echo "Set the KIBANA_PASSWORD environment variable in the .env file";
exit 1;
fi;
if [ ! -f config/certs/ca.zip ]; then
echo "Creating CA";
bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
unzip config/certs/ca.zip -d config/certs;
fi;
if [ ! -f config/certs/certs.zip ]; then
echo "Creating certs";
echo -ne \
"instances:\n"\
" - name: es01\n"\
" dns:\n"\
" - es01\n"\
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
" - name: kibana\n"\
" dns:\n"\
" - kibana\n"\
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
> config/certs/instances.yml;
bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
unzip config/certs/certs.zip -d config/certs;
fi;
echo "Setting file permissions"
chown -R root:root config/certs;
find . -type d -exec chmod 750 \{\} \;;
find . -type f -exec chmod 640 \{\} \;;
echo "Waiting for Elasticsearch availability";
until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
echo "Setting kibana_system password";
until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
echo "All done!";
'
healthcheck:
test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
interval: 1s
timeout: 5s
retries: 120
networks:
- backend-bridge
setup:
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
container_name: zipte-setup
volumes:
- certs:/usr/share/elasticsearch/config/certs
user: "0"
command: >
# … (omitted for brevity)
healthcheck:
test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
interval: 5s
timeout: 5s
retries: 60
networks:
- backend-bridge
🤖 Prompt for AI Agents
In compose/docker-compose.yml from lines 20 to 76, improve the setup service by
adding logic to handle SSL certificate renewal before expiration, enhance
password validation to check for strength and complexity rather than just
non-empty values, and reduce the healthcheck retry timeout from 120 seconds to a
shorter, more reasonable duration to avoid long waits during failures.


spring:
image: zipte/server:latest
@@ -35,9 +110,8 @@ services:
networks:
- backend-bridge

# Nginx Proxy
nginx: # nginx service definition
image: nginx:latest # Docker image to use
nginx:
image: nginx:latest
container_name: zipte-nginx
volumes:
- ./nginx/conf.d/default.conf:/etc/nginx/conf.d/default.conf
@@ -65,6 +139,155 @@ services:
- backend-bridge
entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"

networks:
backend-bridge:
driver: bridge
es01:
container_name: zipte-es
depends_on:
setup:
condition: service_healthy
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
labels:
co.elastic.logs/module: elasticsearch
volumes:
- certs:/usr/share/elasticsearch/config/certs
- esdata01:/usr/share/elasticsearch/data
- ./es-entrypoint.sh:/usr/local/bin/es-entrypoint.sh
entrypoint: [ "/usr/local/bin/es-entrypoint.sh" ]
Comment on lines +153 to +154

💡 Verification agent

🧩 Analysis chain

Please verify the custom entrypoint script's path.

The entrypoint script is mounted from ./es-entrypoint.sh; please confirm this matches the file's actual location.

Check whether the file exists with the following script:


🏁 Script executed:

#!/bin/bash
# locate es-entrypoint.sh
fd "es-entrypoint.sh" . --type f

Length of output: 62


The entrypoint script path needs correcting

The script is located at compose/es/es-entrypoint.sh on the filesystem, so update the volumes entry in compose/docker-compose.yml as follows.

  • Target file: compose/docker-compose.yml
  • Target lines: 153–154

Before:

      - ./es-entrypoint.sh:/usr/local/bin/es-entrypoint.sh

After:

      - ./es/es-entrypoint.sh:/usr/local/bin/es-entrypoint.sh
🤖 Prompt for AI Agents
In compose/docker-compose.yml at lines 153 to 154, the volume mapping for the
entrypoint script is incorrect because the script is located in the compose/es
directory. Update the volume path from ./es-entrypoint.sh to
./es/es-entrypoint.sh to correctly map the script into the container.

ports:
- ${ES_PORT}:9200
environment:
- node.name=es01
- cluster.name=${CLUSTER_NAME}
- discovery.type=single-node
- ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
- bootstrap.memory_lock=true
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=certs/es01/es01.key
- xpack.security.http.ssl.certificate=certs/es01/es01.crt
- xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.key=certs/es01/es01.key
- xpack.security.transport.ssl.certificate=certs/es01/es01.crt
- xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.license.self_generated.type=${LICENSE}
ulimits:
memlock:
soft: -1
hard: -1
healthcheck:
test:
[
"CMD-SHELL",
"curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
]
interval: 10s
timeout: 10s
retries: 120
networks:
- backend-bridge
Comment on lines +142 to +188

🛠️ Refactor suggestion

The Elasticsearch service configuration needs security hardening.

The SSL and security settings are well configured, but additional security measures are needed for a production environment.

Consider the following improvements:

     environment:
       - node.name=es01
       - cluster.name=${CLUSTER_NAME}
       - discovery.type=single-node
       - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
       - bootstrap.memory_lock=true
       - xpack.security.enabled=true
+      - xpack.security.audit.enabled=true
+      - xpack.security.audit.logfile.events.include=access_denied,access_granted,anonymous_access_denied,authentication_failed,connection_denied,tampered_request
       - xpack.security.http.ssl.enabled=true

Adding memory limits is also recommended:

     ulimits:
       memlock:
         soft: -1
         hard: -1
+    deploy:
+      resources:
+        limits:
+          memory: 2g
+        reservations:
+          memory: 1g
📝 Committable suggestion


Suggested change
es01:
container_name: zipte-es
depends_on:
setup:
condition: service_healthy
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
labels:
co.elastic.logs/module: elasticsearch
volumes:
- certs:/usr/share/elasticsearch/config/certs
- esdata01:/usr/share/elasticsearch/data
- ./es-entrypoint.sh:/usr/local/bin/es-entrypoint.sh
entrypoint: [ "/usr/local/bin/es-entrypoint.sh" ]
ports:
- ${ES_PORT}:9200
environment:
- node.name=es01
- cluster.name=${CLUSTER_NAME}
- discovery.type=single-node
- ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
- bootstrap.memory_lock=true
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=certs/es01/es01.key
- xpack.security.http.ssl.certificate=certs/es01/es01.crt
- xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.key=certs/es01/es01.key
- xpack.security.transport.ssl.certificate=certs/es01/es01.crt
- xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.license.self_generated.type=${LICENSE}
ulimits:
memlock:
soft: -1
hard: -1
healthcheck:
test:
[
"CMD-SHELL",
"curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
]
interval: 10s
timeout: 10s
retries: 120
networks:
- backend-bridge
es01:
container_name: zipte-es
depends_on:
setup:
condition: service_healthy
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
labels:
co.elastic.logs/module: elasticsearch
volumes:
- certs:/usr/share/elasticsearch/config/certs
- esdata01:/usr/share/elasticsearch/data
- ./es-entrypoint.sh:/usr/local/bin/es-entrypoint.sh
entrypoint: [ "/usr/local/bin/es-entrypoint.sh" ]
ports:
- ${ES_PORT}:9200
environment:
- node.name=es01
- cluster.name=${CLUSTER_NAME}
- discovery.type=single-node
- ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
- bootstrap.memory_lock=true
- xpack.security.enabled=true
- xpack.security.audit.enabled=true
- xpack.security.audit.logfile.events.include=access_denied,access_granted,anonymous_access_denied,authentication_failed,connection_denied,tampered_request
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=certs/es01/es01.key
- xpack.security.http.ssl.certificate=certs/es01/es01.crt
- xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.key=certs/es01/es01.key
- xpack.security.transport.ssl.certificate=certs/es01/es01.crt
- xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.license.self_generated.type=${LICENSE}
ulimits:
memlock:
soft: -1
hard: -1
deploy:
resources:
limits:
memory: 2g
reservations:
memory: 1g
healthcheck:
test:
[
"CMD-SHELL",
"curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
]
interval: 10s
timeout: 10s
retries: 120
networks:
- backend-bridge
🤖 Prompt for AI Agents
In compose/docker-compose.yml between lines 142 and 188, enhance Elasticsearch
service security by adding memory limit settings under resources to prevent
excessive resource usage in production. Review and tighten SSL configurations if
needed, and consider adding environment variables or Docker secrets for
sensitive data instead of plain environment variables. Also, ensure ulimits and
healthcheck settings are appropriate for production stability and security.


kibana:
container_name: zipte-kibana
depends_on:
es01:
condition: service_healthy
image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
labels:
co.elastic.logs/module: kibana
volumes:
- certs:/usr/share/kibana/config/certs
- kibanadata:/usr/share/kibana/data
ports:
- ${KIBANA_PORT}:5601
environment:
- SERVERNAME=kibana
- ELASTICSEARCH_HOSTS=https://es01:9200
- ELASTICSEARCH_USERNAME=kibana_system
- ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
- ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
- XPACK_SECURITY_ENCRYPTIONKEY=${ENCRYPTION_KEY}
- XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY=${ENCRYPTION_KEY}
- XPACK_REPORTING_ENCRYPTIONKEY=${ENCRYPTION_KEY}
healthcheck:
test:
[
"CMD-SHELL",
"curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
]
interval: 10s
timeout: 10s
retries: 120
networks:
- backend-bridge

metricbeat01:
container_name: zipte-metricbeat
depends_on:
es01:
condition: service_healthy
kibana:
condition: service_healthy
image: docker.elastic.co/beats/metricbeat:${STACK_VERSION}
user: root
volumes:
- certs:/usr/share/metricbeat/certs
- metricbeatdata01:/usr/share/metricbeat/data
- "./metricbeat/metricbeat.yml:/usr/share/metricbeat/metricbeat.yml:ro"
- "/var/run/docker.sock:/var/run/docker.sock:ro"
- "/sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro"
- "/proc:/hostfs/proc:ro"
- "/:/hostfs:ro"
environment:
- ELASTIC_USER=elastic
- ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
- ELASTIC_HOSTS=https://es01:9200
- KIBANA_HOSTS=http://kibana:5601
- LOGSTASH_HOSTS=http://logstash01:9600
networks:
- backend-bridge

filebeat01:
container_name: zipte-filebeat
depends_on:
es01:
condition: service_healthy
image: docker.elastic.co/beats/filebeat:${STACK_VERSION}
user: root
volumes:
- certs:/usr/share/filebeat/certs
- filebeatdata01:/usr/share/filebeat/data
- "./filebeat/filebeat_ingest_data/:/usr/share/filebeat/ingest_data/"
- "./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro"
- "/var/lib/docker/containers:/var/lib/docker/containers:ro"
- "/var/run/docker.sock:/var/run/docker.sock:ro"
environment:
- ELASTIC_USER=elastic
- ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
- ELASTIC_HOSTS=https://es01:9200
- KIBANA_HOSTS=http://kibana:5601
- LOGSTASH_HOSTS=http://logstash01:9600
networks:
- backend-bridge
Comment on lines +224 to +271

🛠️ Refactor suggestion

Please review the privilege settings of the Beats services.

Metricbeat and Filebeat are running as root. From a security standpoint, it is better to apply the principle of least privilege.

   metricbeat01:
     container_name: zipte-metricbeat
     depends_on:
       es01:
         condition: service_healthy
       kibana:
         condition: service_healthy
     image: docker.elastic.co/beats/metricbeat:${STACK_VERSION}
-    user: root
+    user: "1000:1000"

If root privileges are genuinely required for some functionality, consider restricting elevated privileges to just that part.

🤖 Prompt for AI Agents
In compose/docker-compose.yml between lines 224 and 271, Metricbeat and Filebeat
services are configured to run as root user, which violates the principle of
least privilege. To fix this, remove the 'user: root' line from both service
definitions and configure the containers to run under a non-root user with only
the necessary permissions. If root privileges are absolutely required for
specific operations, isolate those operations and limit root access only to
those parts, possibly by adjusting file permissions or using capabilities
instead of running the entire container as root.


logstash01:
container_name: zipte-logstash
depends_on:
es01:
condition: service_healthy
kibana:
condition: service_healthy
image: docker.elastic.co/logstash/logstash:${STACK_VERSION}
labels:
co.elastic.logs/module: logstash
user: root
volumes:
- certs:/usr/share/logstash/certs
- logstashdata01:/usr/share/logstash/data
- "./logstash/logstash_ingest_data/:/usr/share/logstash/ingest_data/"
- "./logstash/logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro"
environment:
- xpack.monitoring.enabled=false
- ELASTIC_USER=elastic
- ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
- ELASTIC_HOSTS=https://es01:9200
12 changes: 12 additions & 0 deletions compose/es/es-entrypoint.sh
@@ -0,0 +1,12 @@
#!/bin/bash

💡 Verification agent

🧩 Analysis chain

The script's execute permission needs checking

Verify that the script file's execute permission is set correctly.

Check the file permissions with the following script:


🏁 Script executed:

#!/bin/bash
# check the script file's permissions
ls -la compose/es/es-entrypoint.sh

Length of output: 112


Execute permission (+x) needs to be added to compose/es/es-entrypoint.sh

The file's permissions are currently -rw-r--r--, so the script cannot be executed. Grant execute permission to avoid failures in the CI/CD pipeline or when running it directly.

  • File: compose/es/es-entrypoint.sh
  • Current permissions:
    -rw-r--r-- 1 jailuser jailuser 359 Jun 28 06:37 compose/es/es-entrypoint.sh
  • Suggested command:
    chmod +x compose/es/es-entrypoint.sh
  • Permissions after the change:
    -rwxr-xr-x 1 jailuser jailuser 359 Jun 28 06:37 compose/es/es-entrypoint.sh
🤖 Prompt for AI Agents
In compose/es/es-entrypoint.sh at line 1, the script file lacks execute
permissions, currently set to -rw-r--r--, which prevents it from running. Fix
this by adding execute permissions using the command chmod +x
compose/es/es-entrypoint.sh so the file becomes executable with permissions like
-rwxr-xr-x.
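Applying and verifying the fix can be sketched on a scratch file (mktemp is used here only to keep the example self-contained). Note that Git tracks the executable bit, so the change must also be committed — `git update-index --chmod=+x compose/es/es-entrypoint.sh` records it in the index.

```shell
# Create a scratch file, add the execute bit, and verify it took effect.
f=$(mktemp)
chmod +x "$f"
test -x "$f" && echo "executable"   # prints "executable"
rm -f "$f"
```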


# install only if not already installed
if [ ! -d "/usr/share/elasticsearch/plugins/analysis-nori" ]; then
echo "Nori plugin not found, installing..."
elasticsearch-plugin install analysis-nori --batch
else
echo "Nori plugin already installed"
fi
Comment on lines +4 to +9

🛠️ Refactor suggestion

Error handling for the plugin installation needs improving

There is no error handling for the case where the nori plugin install fails. A network or permission problem could leave the container unable to start properly.

Adding error handling like the following is recommended:

 if [ ! -d "/usr/share/elasticsearch/plugins/analysis-nori" ]; then
   echo "Nori plugin not found, installing..."
-  elasticsearch-plugin install analysis-nori --batch
+  if ! elasticsearch-plugin install analysis-nori --batch; then
+    echo "Failed to install nori plugin" >&2
+    exit 1
+  fi
+  echo "Nori plugin installed successfully"
 else
   echo "Nori plugin already installed"
 fi
📝 Committable suggestion


Suggested change
if [ ! -d "/usr/share/elasticsearch/plugins/analysis-nori" ]; then
echo "Nori plugin not found, installing..."
elasticsearch-plugin install analysis-nori --batch
else
echo "Nori plugin already installed"
fi
if [ ! -d "/usr/share/elasticsearch/plugins/analysis-nori" ]; then
echo "Nori plugin not found, installing..."
if ! elasticsearch-plugin install analysis-nori --batch; then
echo "Failed to install nori plugin" >&2
exit 1
fi
echo "Nori plugin installed successfully"
else
echo "Nori plugin already installed"
fi
🤖 Prompt for AI Agents
In compose/es/es-entrypoint.sh around lines 4 to 9, the script installs the nori
plugin but lacks error handling for installation failures. Modify the script to
check the exit status of the elasticsearch-plugin install command, and if it
fails, output an error message and exit the script with a non-zero status to
prevent the container from starting improperly.


# run the original entrypoint
exec /bin/tini -- /usr/local/bin/docker-entrypoint.sh "$@"
24 changes: 24 additions & 0 deletions compose/logstash/logstash.conf
@@ -0,0 +1,24 @@
input {
file {
#https://www.elastic.co/guide/en/logstash/current/plugins-inputs-file.html
#default is TAIL which assumes more data will come into the file.
#change to mode => "read" if the file is a complete file. by default, the file will be removed once reading is complete -- backup your files if you need them.
mode => "tail"
path => "/usr/share/logstash/ingest_data/*"
}
}


filter {
}


output {
elasticsearch {
index => "logstash-%{+YYYY.MM.dd}"
hosts=> "${ELASTIC_HOSTS}"
user=> "${ELASTIC_USER}"
password=> "${ELASTIC_PASSWORD}"
cacert=> "certs/ca/ca.crt"
}
}
Comment on lines +16 to +24

🛠️ Refactor suggestion

Consider improvements to error handling and index management.

Consider the following improvements to the current configuration:

  1. Dead letter queue (DLQ) configuration
  2. Index lifecycle management
  3. Batch-processing optimization
 output {
   elasticsearch {
     index => "logstash-%{+YYYY.MM.dd}"
     hosts=> "${ELASTIC_HOSTS}"
     user=> "${ELASTIC_USER}"
     password=> "${ELASTIC_PASSWORD}"
     cacert=> "certs/ca/ca.crt"
+    retry_on_conflict => 3
+    retry_initial_interval => 2
+    retry_max_interval => 64
   }
+  # handle failed events
+  if "_grokparsefailure" in [tags] {
+    file {
+      path => "/usr/share/logstash/failed_events.log"
+    }
+  }
 }
📝 Committable suggestion


Suggested change
output {
elasticsearch {
index => "logstash-%{+YYYY.MM.dd}"
hosts=> "${ELASTIC_HOSTS}"
user=> "${ELASTIC_USER}"
password=> "${ELASTIC_PASSWORD}"
cacert=> "certs/ca/ca.crt"
}
}
output {
elasticsearch {
index => "logstash-%{+YYYY.MM.dd}"
hosts=> "${ELASTIC_HOSTS}"
user=> "${ELASTIC_USER}"
password=> "${ELASTIC_PASSWORD}"
cacert=> "certs/ca/ca.crt"
retry_on_conflict => 3
retry_initial_interval => 2
retry_max_interval => 64
}
# handle failed events
if "_grokparsefailure" in [tags] {
file {
path => "/usr/share/logstash/failed_events.log"
}
}
}
🤖 Prompt for AI Agents
In compose/logstash/logstash.conf around lines 16 to 24, improve error handling
and index management by adding a dead letter queue (DLQ) configuration to
capture failed events, enable index lifecycle management (ILM) to automate index
rollover and retention policies, and optimize batch processing settings such as
batch size and delay to enhance throughput and resource usage. Update the
elasticsearch output plugin configuration accordingly to include these features.
