
Conversation

@doup2001
Contributor

@doup2001 doup2001 commented Jun 28, 2025

📌 Work Done

Configured HTTPS (SSL) for Elasticsearch and its related services (ES, Kibana, Metricbeat, Filebeat, Logstash)
The setup service automatically generates and distributes the CA and server certificates
Mounted the certificate volume (certs) into each service and applied SSL-related environment variables and settings
Configured all services to communicate securely over HTTPS
Applied the SSL certificate and credentials when integrating external applications such as Spring (see the sketch after this list)
Added an HTTP setup for local development convenience
Removed the security-related service (setup) and all certificate volumes/environment variables/settings
Disabled xpack.security, SSL, and authentication-related options
Switched all services to communicate over HTTP
Cleaned up unnecessary settings, labels, memory limits, etc.
Volumes (esdata01, kibanadata, etc.) are kept for data persistence
Tidied up the structure of docker-compose.yml and the .env file
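
For reference, a minimal sketch of how a Spring Data Elasticsearch client can be switched between the HTTPS setup and the plain-HTTP local setup described above. The property names, profile names, and base-class import are assumptions for illustration (they depend on the Spring Data Elasticsearch version in use), not the exact code in this PR:

```java
// Hypothetical sketch only: profile-specific Elasticsearch client configuration.
// Property names, profile names, and the exact base-class package are assumptions.
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.data.elasticsearch.client.ClientConfiguration;
import org.springframework.data.elasticsearch.client.elc.ElasticsearchConfiguration;

@Configuration
@Profile("dev") // HTTPS + basic auth, matching the certificates produced by the setup service
class SecureElasticsearchClientConfig extends ElasticsearchConfiguration {

    @Value("${elasticsearch.host}")     // e.g. "es01:9200" (assumed property name)
    private String host;

    @Value("${elasticsearch.username}") // e.g. "elastic"
    private String username;

    @Value("${elasticsearch.password}")
    private String password;

    @Override
    public ClientConfiguration clientConfiguration() {
        return ClientConfiguration.builder()
                .connectedTo(host)
                .usingSsl() // trusts the JVM trust store; the generated ca.crt must be imported into it
                .withBasicAuth(username, password)
                .build();
    }
}

@Configuration
@Profile("local") // plain HTTP, security disabled, for local development only
class LocalElasticsearchClientConfig extends ElasticsearchConfiguration {

    @Override
    public ClientConfiguration clientConfiguration() {
        return ClientConfiguration.builder()
                .connectedTo("localhost:9200")
                .build();
    }
}
```

In the HTTP case the application connects without certificates or credentials, which is why that setup must stay local-only.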

🔍 Notes

The HTTPS/SSL setup is intended for production use or security testing
Take care when managing sensitive information such as certificates and passwords
Use the HTTP setup only for local development and testing
Security is disabled there, so it must never be exposed externally
Connected applications such as Spring need to switch how they connect depending on whether SSL is enabled

🖼️ Screenshots

Screenshot 2025-06-28 15:28:35, Screenshot 2025-06-28 15:28:26

🔗 Related Issues

✅ Checklist

  • Build and tests completed locally
  • Code review feedback addressed
  • Checked whether documentation is needed

Summary by CodeRabbit

New Features

  • Added Elasticsearch (ELK)-based keyword search for questions.
  • A new question search API endpoint (/api/v1/question) is provided.
  • The Elastic Stack environment (Kibana, Logstash, Metricbeat, Filebeat, etc.) is now supported via Docker Compose.

Bug Fixes

  • None

Chores

  • Added Elasticsearch and related dependencies, configuration files, mapping/setting JSON, and an entrypoint script.
  • The CI environment now includes an Elasticsearch service and installation of the Nori analyzer plugin.

@doup2001 doup2001 self-assigned this Jun 28, 2025
@doup2001 doup2001 added the 📬 API 서버 (API communication), ✨ Feature (feature development), 🌏 Deploy (deployment), and ⚙ Setting (dev environment setup) labels Jun 28, 2025
@coderabbitai

coderabbitai bot commented Jun 28, 2025

"""

Walkthrough

Elasticsearch and the ELK (Elasticsearch, Logstash, Kibana) stack have been integrated into the project. The Spring Boot Elasticsearch starter dependency was added, and the full Elastic Stack environment is provisioned through Docker Compose. On the Java side, an Elasticsearch document mapping, a repository, configuration classes, and keyword-based search in the service layer were added. A Logstash pipeline, Elasticsearch index mapping/settings files, and a custom entrypoint script were also introduced.

Changes

File/path group | Change summary
build.gradle | Add the Spring Boot Elasticsearch starter dependency
compose/docker-compose.yml | Add the ELK stack (Elasticsearch, Kibana, Logstash, Metricbeat, Filebeat) plus certificate/volume/network configuration
compose/es/es-entrypoint.sh | Add an entrypoint script that auto-installs the Nori plugin for Elasticsearch
compose/logstash/logstash.conf | Add the Logstash pipeline configuration (file input → no filter → Elasticsearch output)
src/main/resources/elasticsearch/question-mapping.json, .../question-setting.json | Add the Elasticsearch mapping and Nori-based Korean analyzer settings for the question index
src/main/java/com/zipte/platform/core/config/external/ElasticSearchConfig.java, ElasticSearchDevConfig.java | Add per-profile Spring Elasticsearch connection configuration classes
src/main/java/com/zipte/platform/server/adapter/out/external/elk/community/QuestionELKDocument.java, QuestionELKRepository.java | Add the Elasticsearch document/repository classes and query methods for the question domain (see the sketch after this table)
src/main/java/com/zipte/platform/server/adapter/out/QuestionPersistenceAdapter.java | Add simultaneous JPA + ELK save/delete and keyword-based search methods
src/main/java/com/zipte/platform/server/application/in/community/QuestionUseCase.java, .../out/community/QuestionPort.java | Add keyword-based question search methods (page/list) to the interfaces
src/main/java/com/zipte/platform/server/application/service/QuestionService.java | Add two keyword-based question search service methods (one implemented, one a stub)
src/main/java/com/zipte/platform/server/adapter/in/web/QaApi.java, .../swagger/QaApiSpec.java | Add the keyword-based question search API endpoint and Swagger spec
src/main/java/com/zipte/platform/server/adapter/in/web/dto/response/QuestionResponse.java | Add a static method for converting a domain list to a DTO list
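
To make the new search path concrete, below is a minimal sketch of what the document/repository pair could look like. The index name, fields, and derived query method are assumptions inferred from the file names and reviews in this PR, not the actual source:

```java
// Hypothetical sketch of the Elasticsearch document and repository described above.
// Index name, field list, and the derived query are illustrative assumptions.
import org.springframework.data.annotation.Id;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Mapping;
import org.springframework.data.elasticsearch.annotations.Setting;
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;

@Document(indexName = "question")
@Setting(settingPath = "elasticsearch/question-setting.json") // Nori-based "korean" analyzer
@Mapping(mappingPath = "elasticsearch/question-mapping.json")
class QuestionELKDocument {

    @Id
    private Long id;
    private String kaptCode; // apartment complex code
    private String title;    // analyzed text field used for keyword search
    private String content;
}

interface QuestionELKRepository extends ElasticsearchRepository<QuestionELKDocument, Long> {

    // Derived query: match a title keyword within one apartment complex
    // (the PR also adds a non-paged List variant)
    Page<QuestionELKDocument> findByTitleAndKaptCode(String title, String kaptCode, Pageable pageable);
}
```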

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant QaApi
    participant QuestionService
    participant QuestionPort
    participant QuestionPersistenceAdapter
    participant ELKRepository
    participant JPARepository

    Client->>QaApi: GET /api/v1/question?kaptCode&title
    QaApi->>QuestionService: loadQuestionsByKeyword(kaptCode, title)
    QuestionService->>QuestionPort: loadQuestionsByKeyword(kaptCode, title)
    QuestionPort->>QuestionPersistenceAdapter: loadQuestionsByKeyword(kaptCode, title)
    QuestionPersistenceAdapter->>ELKRepository: findByTitleAndKaptCode(title, kaptCode)
    ELKRepository-->>QuestionPersistenceAdapter: List<QuestionELKDocument>
    QuestionPersistenceAdapter-->>QuestionPort: List<Question>
    QuestionPort-->>QuestionService: List<Question>
    QuestionService-->>QaApi: List<Question>
    QaApi->>Client: ApiResponse<List<QuestionResponse>>
sequenceDiagram
    participant App
    participant QuestionPersistenceAdapter
    participant JPARepository
    participant ELKRepository

    App->>QuestionPersistenceAdapter: save(Question)
    QuestionPersistenceAdapter->>JPARepository: save(question)
    JPARepository-->>QuestionPersistenceAdapter: QuestionEntity
    QuestionPersistenceAdapter->>ELKRepository: save(QuestionELKDocument.from(question))
    ELKRepository-->>QuestionPersistenceAdapter: QuestionELKDocument
    QuestionPersistenceAdapter-->>App: Question

Poem

🐰
A fresh wind blows, the power of Elastic,
Finding questions, even in Korean, with ease!
Docker, Logstash, and Kibana dance along,
Plucking out questions keyword by keyword,
The rabbit perks up its ears again today.
With ELK at our side,
Even the forest of data holds no fear!

"""


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d197ea8 and 9c4c640.

📒 Files selected for processing (2)
  • .github/workflows/ci-test.yml (2 hunks)
  • .github/workflows/dev-ci-cd.yml (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • .github/workflows/dev-ci-cd.yml
🧰 Additional context used
🪛 YAMLlint (1.37.1)
.github/workflows/ci-test.yml

[error] 130-130: trailing spaces

(trailing-spaces)

⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: test


@github-actions

github-actions bot commented Jun 28, 2025

Test Results

 52 files  ±0   52 suites  ±0   6s ⏱️ -1s
122 tests ±0  122 ✅ ±0  0 💤 ±0  0 ❌ ±0 
123 runs  ±0  123 ✅ ±0  0 💤 ±0  0 ❌ ±0 

Results for commit 9c4c640. ± Comparison against base commit 17623ce.

♻️ This comment has been updated with latest results.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 17

🧹 Nitpick comments (9)
src/main/resources/elasticsearch/question-setting.json (1)

1-10: Korean analyzer settings approved, with a suggested improvement

The basic nori analyzer configuration is set up correctly and provides baseline Korean text analysis.

For more refined Korean analysis down the road, consider an improvement such as the following:

 {
   "analysis": {
     "analyzer": {
       "korean": {
-        "type": "nori"
+        "tokenizer": "nori_tokenizer",
+        "filter": ["lowercase", "nori_part_of_speech"]
+      }
+    },
+    "tokenizer": {
+      "nori_tokenizer": {
+        "type": "nori_tokenizer",
+        "decompound_mode": "mixed"
       }
     }
   }
 }
src/main/java/com/zipte/platform/server/adapter/in/web/dto/response/QuestionResponse.java (1)

6-6: Avoid wildcard imports

The java.util.* wildcard import can pull in unnecessary classes and pollute the namespace. It is better to import only the classes you actually need.

The following change is recommended:

-import java.util.*;
+import java.util.List;
src/main/java/com/zipte/platform/server/adapter/in/web/swagger/QaApiSpec.java (1)

47-59: Keyword search API spec approved; consider the method name

The keyword search API spec built on Elasticsearch is defined correctly, and the API documentation is structured appropriately.

However, it reuses the same name as the existing getQuestions method, which can cause confusion. Consider a clearer method name going forward:

e.g. a more specific name such as searchQuestionsByKeyword or findQuestionsByTitle (a minimal sketch follows)
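
A minimal sketch of what a more specific name could look like; the placeholder DTO and signature below are illustrative assumptions, not the project's actual types:

```java
// Hypothetical naming sketch only; the record is a stand-in so the example is self-contained.
import java.util.List;

interface QaApiSpecSketch {

    record QuestionResponse(Long id, String title) { } // placeholder for the project's DTO

    // The Elasticsearch-backed search gets an intention-revealing name instead of reusing getQuestions
    List<QuestionResponse> searchQuestionsByKeyword(String kaptCode, String title);
}
```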

src/main/resources/elasticsearch/question-mapping.json (1)

1-29: The mapping is generally appropriate, but consider a few improvements.

  1. The kaptCode field is mapped as text; if exact matching is needed, consider the keyword type or a multi-field mapping.
  2. Cross-check against the index settings file (question-setting.json) to verify that the korean analyzer is defined correctly.
    "kaptCode": {
-     "type": "text"
+     "type": "text",
+     "fields": {
+       "keyword": {
+         "type": "keyword",
+         "ignore_above": 256
+       }
+     }
    },
compose/logstash/logstash.conf (2)

1-9: Consider making the input file path pattern more specific.

The current wildcard pattern * may be too broad. For security and performance, it is better to restrict it to specific file extensions or patterns.

   file {
     mode => "tail"
-    path => "/usr/share/logstash/ingest_data/*"
+    path => "/usr/share/logstash/ingest_data/*.log"
+    exclude => "*.tmp"
   }

12-13: Add documentation for the empty filter section.

Make it clear that the filter section is intentionally empty, and add a comment to prepare for cases where data transformation is needed later.

 filter {
+  # No data transformation is needed at the moment
+  # Add filter plugins such as grok, mutate, or date here when required
 }
src/main/java/com/zipte/platform/core/config/external/ElasticSearchConfig.java (2)

18-24: Make the client configuration more robust.

Only the default configuration is used at the moment, so settings such as timeouts and connection pooling are missing.

 @Override
 public ClientConfiguration clientConfiguration() {
     return ClientConfiguration.builder()
             .connectedTo(host)
             .usingSsl(false)
+            .withConnectTimeout(Duration.ofSeconds(5))
+            .withSocketTimeout(Duration.ofSeconds(30))
+            .withConnectionRequestTimeout(Duration.ofSeconds(5))
+            .withMaxRetryTimeoutMillis(30000)
             .build();
 }

10-13: Add logging to make debugging easier.

Logging the Elasticsearch connection status and configuration helps with troubleshooting.

+import lombok.extern.slf4j.Slf4j;
+
+@Slf4j
 @Configuration
 @Profile("local")
 @EnableElasticsearchRepositories(basePackages = "com.zipte.platform.server.adapter.out.external.elk")
 public class ElasticSearchConfig extends ElasticsearchConfiguration {
+
+    @PostConstruct
+    public void logConfiguration() {
+        log.info("Elasticsearch local 환경 구성 완료 - 호스트: {}, SSL: 비활성화", host);
+    }
src/main/java/com/zipte/platform/server/application/out/community/QuestionPort.java (1)

7-7: Avoid wildcard imports.

Importing only the specific classes you need is better for readability and maintainability.

-import java.util.*;
+import java.util.Optional;
+import java.util.List;
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 17623ce and ae532f2.

📒 Files selected for processing (17)
  • build.gradle (1 hunks)
  • compose/docker-compose.yml (3 hunks)
  • compose/es/es-entrypoint.sh (1 hunks)
  • compose/logstash/logstash.conf (1 hunks)
  • src/main/java/com/zipte/platform/core/config/external/ElasticSearchConfig.java (1 hunks)
  • src/main/java/com/zipte/platform/core/config/external/ElasticSearchDevConfig.java (1 hunks)
  • src/main/java/com/zipte/platform/server/adapter/in/web/QaApi.java (2 hunks)
  • src/main/java/com/zipte/platform/server/adapter/in/web/dto/response/QuestionResponse.java (2 hunks)
  • src/main/java/com/zipte/platform/server/adapter/in/web/swagger/QaApiSpec.java (3 hunks)
  • src/main/java/com/zipte/platform/server/adapter/out/QuestionPersistenceAdapter.java (4 hunks)
  • src/main/java/com/zipte/platform/server/adapter/out/external/elk/community/QuestionELKDocument.java (1 hunks)
  • src/main/java/com/zipte/platform/server/adapter/out/external/elk/community/QuestionELKRepository.java (1 hunks)
  • src/main/java/com/zipte/platform/server/application/in/community/QuestionUseCase.java (2 hunks)
  • src/main/java/com/zipte/platform/server/application/out/community/QuestionPort.java (2 hunks)
  • src/main/java/com/zipte/platform/server/application/service/QuestionService.java (1 hunks)
  • src/main/resources/elasticsearch/question-mapping.json (1 hunks)
  • src/main/resources/elasticsearch/question-setting.json (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: test
🔇 Additional comments (19)
build.gradle (1)

155-157: Elasticsearch dependency addition approved

The Spring Data Elasticsearch starter dependency has been added correctly; it is the base dependency needed for ELK stack integration.

src/main/java/com/zipte/platform/server/adapter/in/web/dto/response/QuestionResponse.java (1)

22-28: List conversion method implementation approved

The static factory method for converting lists is implemented correctly and reuses the existing single-object conversion method, which keeps things consistent.

src/main/java/com/zipte/platform/server/adapter/in/web/swagger/QaApiSpec.java (1)

11-11: Added import statements approved

The import statements required for the new search API have been added correctly.

Also applies to: 22-22, 24-24

src/main/java/com/zipte/platform/server/application/out/community/QuestionPort.java (2)

18-18: The comment update is appropriate.

Changing the comment from "아파트 이름별" (by apartment name) to "아파트별" (by apartment) makes it clearer.


21-25: The keyword search methods are well designed.

Providing both a paginated and a non-paginated method covers a variety of use cases, and the method signatures are designed consistently.

src/main/java/com/zipte/platform/core/config/external/ElasticSearchDevConfig.java (3)

10-13: The profile and repository configuration is appropriate.

It is activated only in the dev environment via @Profile("dev"), and @EnableElasticsearchRepositories points at the correct package.


15-22: Injecting configuration values through environment variables is appropriate.

Managing connection details via external configuration is a good approach.


24-30: The client configuration is implemented correctly.

The Elasticsearch client configuration, including basic authentication, is appropriate.

src/main/java/com/zipte/platform/server/adapter/out/external/elk/community/QuestionELKRepository.java (2)

9-9: The repository interface definition is appropriate.

It extends ElasticsearchRepository, providing the default CRUD operations together with custom search methods.


11-16: There is a method signature duplication issue.

The findByTitleAndKaptCode method is declared twice (lines 11 and 15). Overloads that differ only in return type may not be supported by Spring Data.

-Page<QuestionELKDocument> findByTitleAndKaptCode(String title, String kaptCode, Pageable pageable);
-
 List<QuestionELKDocument> findByTitle(String title);

-List<QuestionELKDocument> findByTitleAndKaptCode(String title, String kaptCode);
+Page<QuestionELKDocument> findByTitleAndKaptCode(String title, String kaptCode, Pageable pageable);
+List<QuestionELKDocument> findByTitleAndKaptCodeOrderByCreatedAtDesc(String title, String kaptCode);

Likely an incorrect or invalid review comment.

src/main/java/com/zipte/platform/server/application/service/QuestionService.java (1)

107-111: The keyword search method is implemented appropriately.

Delegating to the port keeps it clean, and the comments are clear (a minimal delegation sketch follows).
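
For illustration, a minimal sketch of such a port-delegating service method under assumed interface shapes (only the names come from this review):

```java
// Hypothetical delegation sketch; Question and QuestionPort are trimmed placeholders.
import java.util.List;

class QuestionServiceSketch {

    interface Question { }

    interface QuestionPort {
        List<Question> loadQuestionsByKeyword(String kaptCode, String keyword);
    }

    private final QuestionPort questionPort;

    QuestionServiceSketch(QuestionPort questionPort) {
        this.questionPort = questionPort;
    }

    /** Keyword search simply delegates to the outbound persistence port. */
    public List<Question> loadQuestionsByKeyword(String kaptCode, String keyword) {
        return questionPort.loadQuestionsByKeyword(kaptCode, keyword);
    }
}
```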

src/main/java/com/zipte/platform/server/adapter/out/external/elk/community/QuestionELKDocument.java (2)

13-17: The Elasticsearch document annotations are well configured.

Managing the index settings and mapping files externally is a good approach.


20-40: The field mappings are appropriate.

Each field's type and the date format are set correctly.

src/main/java/com/zipte/platform/server/adapter/out/QuestionPersistenceAdapter.java (3)

3-4: New ELK-related import statements were added.

The import statements needed for the Elasticsearch integration have been added appropriately.

Also applies to: 14-14


23-24: ELK repository dependency injection was added.

The QuestionELKRepository dependency is wired in appropriately, enabling the dual persistence pattern.


55-66: The keyword search method is implemented appropriately.

Keyword search via ELK is implemented well, with pagination support. The method name and parameter order are also intuitive.

compose/docker-compose.yml (3)

1-18: The volume and network configuration is appropriate.

The volumes and network are well set up for ELK stack data persistence and inter-service communication.


190-222: The Kibana service is configured appropriately.

The SSL connection to Elasticsearch and the encryption key settings are correct.


273-294: Logstash configuration file path verified

  • Disabling monitoring and the environment variable settings are appropriate.
  • The configuration file actually exists at ./compose/logstash/logstash.conf, and the Docker Compose volume mount path is correct.

@@ -0,0 +1,12 @@
#!/bin/bash

💡 Verification agent

🧩 Analysis chain

The script's execute permission needs to be checked

It needs to be verified that the script file's execute permission is set correctly.

Please check the file permissions with the following script:


🏁 Script executed:

#!/bin/bash
# Check the script file's permissions
ls -la compose/es/es-entrypoint.sh

Length of output: 112


compose/es/es-entrypoint.sh needs execute permission (+x)

The compose/es/es-entrypoint.sh file currently has -rw-r--r-- permissions, so the script cannot be executed. Grant execute permission to avoid errors in the CI/CD pipeline or when running it directly.

  • File location: compose/es/es-entrypoint.sh
  • Current permissions:
    -rw-r--r-- 1 jailuser jailuser 359 Jun 28 06:37 compose/es/es-entrypoint.sh
  • Suggested command:
    chmod +x compose/es/es-entrypoint.sh
  • Permissions after the change (example):
    -rwxr-xr-x 1 jailuser jailuser 359 Jun 28 06:37 compose/es/es-entrypoint.sh
🤖 Prompt for AI Agents
In compose/es/es-entrypoint.sh at line 1, the script file lacks execute
permissions, currently set to -rw-r--r--, which prevents it from running. Fix
this by adding execute permissions using the command chmod +x
compose/es/es-entrypoint.sh so the file becomes executable with permissions like
-rwxr-xr-x.

Comment on lines +4 to +9
if [ ! -d "/usr/share/elasticsearch/plugins/analysis-nori" ]; then
  echo "Nori plugin not found, installing..."
  elasticsearch-plugin install analysis-nori --batch
else
  echo "Nori plugin already installed"
fi

🛠️ Refactor suggestion

Plugin installation error handling needs improvement

There is no error handling for the case where the nori plugin installation fails. If installation fails because of a network or permission issue, the container may not start properly.

Adding error handling like the following is recommended:

 if [ ! -d "/usr/share/elasticsearch/plugins/analysis-nori" ]; then
   echo "Nori plugin not found, installing..."
-  elasticsearch-plugin install analysis-nori --batch
+  if ! elasticsearch-plugin install analysis-nori --batch; then
+    echo "Failed to install nori plugin" >&2
+    exit 1
+  fi
+  echo "Nori plugin installed successfully"
 else
   echo "Nori plugin already installed"
 fi
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
if [ ! -d "/usr/share/elasticsearch/plugins/analysis-nori" ]; then
  echo "Nori plugin not found, installing..."
  elasticsearch-plugin install analysis-nori --batch
else
  echo "Nori plugin already installed"
fi
if [ ! -d "/usr/share/elasticsearch/plugins/analysis-nori" ]; then
  echo "Nori plugin not found, installing..."
  if ! elasticsearch-plugin install analysis-nori --batch; then
    echo "Failed to install nori plugin" >&2
    exit 1
  fi
  echo "Nori plugin installed successfully"
else
  echo "Nori plugin already installed"
fi
🤖 Prompt for AI Agents
In compose/es/es-entrypoint.sh around lines 4 to 9, the script installs the nori
plugin but lacks error handling for installation failures. Modify the script to
check the exit status of the elasticsearch-plugin install command, and if it
fails, output an error message and exit the script with a non-zero status to
prevent the container from starting improperly.

},
"createdAt": {
"type": "date",
"format": "yyyy-MM-dd'T'HH:mm:ss.SSS||epoch_millis"

💡 Verification agent

🧩 Analysis chain

Verify the date format pattern.

Check that the date field's format pattern matches the actual date format used by the application.

Run the following script to check the date formats used in the Java code:

Also applies to: 26-26


🏁 Script executed:

#!/bin/bash
# Check the date format patterns used in the Java code
rg -A 3 -B 3 "yyyy-MM-dd.*HH:mm:ss" --type java
rg -A 3 -B 3 "DateTimeFormatter|SimpleDateFormat" --type java

Length of output: 1413


Date format patterns need to be verified and unified

The format in the Elasticsearch mapping (src/main/resources/elasticsearch/question-mapping.json),

"format": "yyyy-MM-dd'T'HH:mm:ss.SSS||epoch_millis"

does not match the pattern used by the Java code (src/main/java/com/zipte/platform/core/util/DateFormatUtil.java). The Java side currently defines no ISO 8601-style pattern, so date parsing errors can occur when indexing or searching documents.

Locations that need changes:

  • src/main/resources/elasticsearch/question-mapping.json
    - lines 22 and 26: "format": "yyyy-MM-dd'T'HH:mm:ss.SSS||epoch_millis"
  • src/main/java/com/zipte/platform/core/util/DateFormatUtil.java
    - DateTimeFormatter.ofPattern("yyyy년 MM월 dd일", Locale.KOREA)

Suggestion:

  • Add a DateTimeFormatter to the Java code that handles the ISO 8601 format ("yyyy-MM-dd'T'HH:mm:ss.SSS"), as sketched below, or
  • Change the format in the Elasticsearch mapping to the format the Java code actually uses.
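
A minimal sketch of the first option, assuming a small utility next to the project's DateFormatUtil; the class and constant names here are illustrative:

```java
// Hypothetical sketch: handling the ISO 8601 pattern declared in question-mapping.json.
// Only the pattern string comes from the mapping; everything else is illustrative.
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

final class IsoDateFormatSketch {

    // Matches the "yyyy-MM-dd'T'HH:mm:ss.SSS" half of the mapping (epoch_millis is handled by Elasticsearch)
    static final DateTimeFormatter ES_ISO_FORMAT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSS");

    public static void main(String[] args) {
        LocalDateTime createdAt = LocalDateTime.of(2025, 6, 28, 15, 28, 35, 123_000_000);
        String indexed = createdAt.format(ES_ISO_FORMAT); // "2025-06-28T15:28:35.123"
        LocalDateTime parsed = LocalDateTime.parse(indexed, ES_ISO_FORMAT);
        System.out.println(indexed + " parses back to " + parsed);
    }
}
```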
🤖 Prompt for AI Agents
In src/main/resources/elasticsearch/question-mapping.json at lines 22 and 26 and
in src/main/java/com/zipte/platform/core/util/DateFormatUtil.java where the
DateTimeFormatter is defined, the date format patterns are inconsistent. To fix
this, either update the Java DateTimeFormatter to include the ISO 8601 pattern
"yyyy-MM-dd'T'HH:mm:ss.SSS" so it matches the Elasticsearch mapping, or modify
the Elasticsearch mapping's "format" field to use the same pattern as the Java
code. Ensure both sides use the same date format pattern to prevent parsing
errors.

Comment on lines +28 to +33
/// 특정 키워드 조회
Page<Question> loadQuestionsByKeyword(String kaptCode, String keyword, Pageable pageable);


List<Question> loadQuestionsByKeyword(String kaptCode, String keyword);


🛠️ Refactor suggestion

Improve the method documentation and consider parameter validation.

The newly added keyword search methods lack Javadoc. Please document the parameters, return values, and exception cases.

+   /**
+    * Search questions by keyword (paginated)
+    * @param kaptCode apartment complex code
+    * @param keyword search keyword
+    * @param pageable paging information
+    * @return page of matching questions
+    * @throws IllegalArgumentException if the parameters are invalid
+    */
    Page<Question> loadQuestionsByKeyword(String kaptCode, String keyword, Pageable pageable);

+   /**
+    * Search questions by keyword (full list)
+    * @param kaptCode apartment complex code
+    * @param keyword search keyword
+    * @return full list of matching questions
+    * @throws IllegalArgumentException if the parameters are invalid
+    */
    List<Question> loadQuestionsByKeyword(String kaptCode, String keyword);
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
/// 특정 키워드 조회
Page<Question> loadQuestionsByKeyword(String kaptCode, String keyword, Pageable pageable);
List<Question> loadQuestionsByKeyword(String kaptCode, String keyword);
/**
 * Search questions by keyword (paginated)
 * @param kaptCode apartment complex code
 * @param keyword search keyword
 * @param pageable paging information
 * @return page of matching questions
 * @throws IllegalArgumentException if the parameters are invalid
 */
Page<Question> loadQuestionsByKeyword(String kaptCode, String keyword, Pageable pageable);
/**
 * Search questions by keyword (full list)
 * @param kaptCode apartment complex code
 * @param keyword search keyword
 * @return full list of matching questions
 * @throws IllegalArgumentException if the parameters are invalid
 */
List<Question> loadQuestionsByKeyword(String kaptCode, String keyword);
🤖 Prompt for AI Agents
In
src/main/java/com/zipte/platform/server/application/in/community/QuestionUseCase.java
around lines 28 to 33, the newly added methods for loading questions by keyword
lack proper Javadoc comments. Add detailed Javadoc for each method including
descriptions for all parameters, the return type, and any exceptions that might
be thrown. Also, consider adding parameter validation inside the methods or
document expected behavior when invalid parameters are passed.

Comment on lines +16 to +24
output {
  elasticsearch {
    index => "logstash-%{+YYYY.MM.dd}"
    hosts=> "${ELASTIC_HOSTS}"
    user=> "${ELASTIC_USER}"
    password=> "${ELASTIC_PASSWORD}"
    cacert=> "certs/ca/ca.crt"
  }
}

🛠️ Refactor suggestion

Consider improvements to error handling and index management.

Consider the following improvements to the current configuration:

  1. Dead letter queue (DLQ) configuration
  2. Index lifecycle management
  3. Batch processing optimization
 output {
   elasticsearch {
     index => "logstash-%{+YYYY.MM.dd}"
     hosts=> "${ELASTIC_HOSTS}"
     user=> "${ELASTIC_USER}"
     password=> "${ELASTIC_PASSWORD}"
     cacert=> "certs/ca/ca.crt"
+    retry_on_conflict => 3
+    retry_initial_interval => 2
+    retry_max_interval => 64
   }
+  # Handle failed events
+  if "_grokparsefailure" in [tags] {
+    file {
+      path => "/usr/share/logstash/failed_events.log"
+    }
+  }
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
output {
  elasticsearch {
    index => "logstash-%{+YYYY.MM.dd}"
    hosts=> "${ELASTIC_HOSTS}"
    user=> "${ELASTIC_USER}"
    password=> "${ELASTIC_PASSWORD}"
    cacert=> "certs/ca/ca.crt"
  }
}
output {
  elasticsearch {
    index => "logstash-%{+YYYY.MM.dd}"
    hosts=> "${ELASTIC_HOSTS}"
    user=> "${ELASTIC_USER}"
    password=> "${ELASTIC_PASSWORD}"
    cacert=> "certs/ca/ca.crt"
    retry_on_conflict => 3
    retry_initial_interval => 2
    retry_max_interval => 64
  }
  # Handle failed events
  if "_grokparsefailure" in [tags] {
    file {
      path => "/usr/share/logstash/failed_events.log"
    }
  }
}
🤖 Prompt for AI Agents
In compose/logstash/logstash.conf around lines 16 to 24, improve error handling
and index management by adding a dead letter queue (DLQ) configuration to
capture failed events, enable index lifecycle management (ILM) to automate index
rollover and retention policies, and optimize batch processing settings such as
batch size and delay to enhance throughput and resource usage. Update the
elasticsearch output plugin configuration accordingly to include these features.

Comment on lines +69 to 77
/// 삭제
@Override
public void deleteQuestionById(Long id) {
/// DB에서 삭제
repository.deleteById(id);

/// ELK에서 삭제
elkRepository.deleteById(id);
}

⚠️ Potential issue

There is a data consistency issue in the delete method.

It deletes from JPA first and then from ELK, so if the ELK deletion fails, the data can end up inconsistent.

A change like the following is recommended:

 @Override
 public void deleteQuestionById(Long id) {
-    /// DB에서 삭제
-    repository.deleteById(id);
-
-    /// ELK에서 삭제
-    elkRepository.deleteById(id);
+    try {
+        /// ELK에서 먼저 삭제 시도
+        elkRepository.deleteById(id);
+        /// DB에서 삭제
+        repository.deleteById(id);
+    } catch (Exception e) {
+        log.error("Failed to delete question: {}", id, e);
+        throw new RuntimeException("질문 삭제 중 오류가 발생했습니다.", e);
+    }
 }

Alternatively, consider rolling the operation back within a transaction scope when an exception occurs.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
/// 삭제
@Override
public void deleteQuestionById(Long id) {
/// DB에서 삭제
repository.deleteById(id);
/// ELK에서 삭제
elkRepository.deleteById(id);
}
/// 삭제
@Override
public void deleteQuestionById(Long id) {
try {
/// ELK에서 먼저 삭제 시도
elkRepository.deleteById(id);
/// DB에서 삭제
repository.deleteById(id);
} catch (Exception e) {
log.error("Failed to delete question: {}", id, e);
throw new RuntimeException("질문 삭제 중 오류가 발생했습니다.", e);
}
}
🤖 Prompt for AI Agents
In
src/main/java/com/zipte/platform/server/adapter/out/QuestionPersistenceAdapter.java
around lines 69 to 77, the deleteQuestionById method deletes from the JPA
repository first and then from the ELK repository, which can cause data
inconsistency if ELK deletion fails. To fix this, either reverse the deletion
order by deleting from ELK first and then from JPA, or implement a transaction
management mechanism that rolls back both deletions if any exception occurs
during the process to ensure data consistency.

Comment on lines +153 to +154
- ./es-entrypoint.sh:/usr/local/bin/es-entrypoint.sh
entrypoint: [ "/usr/local/bin/es-entrypoint.sh" ]

💡 Verification agent

🧩 Analysis chain

Please verify the custom entrypoint script path.

The entrypoint script path is ./es-entrypoint.sh; it needs to be checked against the actual file location.

Check whether the file exists with the following script:


🏁 Script executed:

#!/bin/bash
# Check where es-entrypoint.sh is located
fd "es-entrypoint.sh" . --type f

Length of output: 62


The entrypoint script path needs to be fixed

Since the script is located at compose/es/es-entrypoint.sh in the file system, update the volumes entry in compose/docker-compose.yml as follows.

  • Target file: compose/docker-compose.yml
  • Target lines: 153-154

Before:

      - ./es-entrypoint.sh:/usr/local/bin/es-entrypoint.sh

After:

      - ./es/es-entrypoint.sh:/usr/local/bin/es-entrypoint.sh
🤖 Prompt for AI Agents
In compose/docker-compose.yml at lines 153 to 154, the volume mapping for the
entrypoint script is incorrect because the script is located in the compose/es
directory. Update the volume path from ./es-entrypoint.sh to
./es/es-entrypoint.sh to correctly map the script into the container.

Comment on lines +224 to +271
metricbeat01:
container_name: zipte-metricbeat
depends_on:
es01:
condition: service_healthy
kibana:
condition: service_healthy
image: docker.elastic.co/beats/metricbeat:${STACK_VERSION}
user: root
volumes:
- certs:/usr/share/metricbeat/certs
- metricbeatdata01:/usr/share/metricbeat/data
- "./metricbeat/metricbeat.yml:/usr/share/metricbeat/metricbeat.yml:ro"
- "/var/run/docker.sock:/var/run/docker.sock:ro"
- "/sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro"
- "/proc:/hostfs/proc:ro"
- "/:/hostfs:ro"
environment:
- ELASTIC_USER=elastic
- ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
- ELASTIC_HOSTS=https://es01:9200
- KIBANA_HOSTS=http://kibana:5601
- LOGSTASH_HOSTS=http://logstash01:9600
networks:
- backend-bridge

filebeat01:
container_name: zipte-filebeat
depends_on:
es01:
condition: service_healthy
image: docker.elastic.co/beats/filebeat:${STACK_VERSION}
user: root
volumes:
- certs:/usr/share/filebeat/certs
- filebeatdata01:/usr/share/filebeat/data
- "./filebeat/filebeat_ingest_data/:/usr/share/filebeat/ingest_data/"
- "./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro"
- "/var/lib/docker/containers:/var/lib/docker/containers:ro"
- "/var/run/docker.sock:/var/run/docker.sock:ro"
environment:
- ELASTIC_USER=elastic
- ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
- ELASTIC_HOSTS=https://es01:9200
- KIBANA_HOSTS=http://kibana:5601
- LOGSTASH_HOSTS=http://logstash01:9600
networks:
- backend-bridge

🛠️ Refactor suggestion

Please review the permission settings of the Beats services.

Metricbeat and Filebeat run as root. From a security standpoint, applying the principle of least privilege is preferable.

   metricbeat01:
     container_name: zipte-metricbeat
     depends_on:
       es01:
         condition: service_healthy
       kibana:
         condition: service_healthy
     image: docker.elastic.co/beats/metricbeat:${STACK_VERSION}
-    user: root
+    user: "1000:1000"

If some functionality genuinely requires root, consider restricting elevated permissions to just that part.

🤖 Prompt for AI Agents
In compose/docker-compose.yml between lines 224 and 271, Metricbeat and Filebeat
services are configured to run as root user, which violates the principle of
least privilege. To fix this, remove the 'user: root' line from both service
definitions and configure the containers to run under a non-root user with only
the necessary permissions. If root privileges are absolutely required for
specific operations, isolate those operations and limit root access only to
those parts, possibly by adjusting file permissions or using capabilities
instead of running the entire container as root.

Comment on lines +20 to +76
setup:
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
container_name: zipte-setup
volumes:
- certs:/usr/share/elasticsearch/config/certs
user: "0"
command: >
bash -c '
if [ x${ELASTIC_PASSWORD} == x ]; then
echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
exit 1;
elif [ x${KIBANA_PASSWORD} == x ]; then
echo "Set the KIBANA_PASSWORD environment variable in the .env file";
exit 1;
fi;
if [ ! -f config/certs/ca.zip ]; then
echo "Creating CA";
bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
unzip config/certs/ca.zip -d config/certs;
fi;
if [ ! -f config/certs/certs.zip ]; then
echo "Creating certs";
echo -ne \
"instances:\n"\
" - name: es01\n"\
" dns:\n"\
" - es01\n"\
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
" - name: kibana\n"\
" dns:\n"\
" - kibana\n"\
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
> config/certs/instances.yml;
bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
unzip config/certs/certs.zip -d config/certs;
fi;
echo "Setting file permissions"
chown -R root:root config/certs;
find . -type d -exec chmod 750 \{\} \;;
find . -type f -exec chmod 640 \{\} \;;
echo "Waiting for Elasticsearch availability";
until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
echo "Setting kibana_system password";
until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
echo "All done!";
'
healthcheck:
test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
interval: 1s
timeout: 5s
retries: 120
networks:
- backend-bridge

🛠️ Refactor suggestion

The setup service's security configuration is solid but could be improved.

The SSL certificate generation and user password setup logic is well implemented, but there are a few security and operational points to improve.

  1. Certificate renewal: there is no handling for certificate expiry.
  2. Stronger password validation: only empty values are checked at the moment; strength validation is needed.
  3. Timeout settings: a 120-second retry window may be too long.
     healthcheck:
       test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
-      interval: 1s
+      interval: 5s
       timeout: 5s
-      retries: 120
+      retries: 60
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
setup:
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
container_name: zipte-setup
volumes:
- certs:/usr/share/elasticsearch/config/certs
user: "0"
command: >
bash -c '
if [ x${ELASTIC_PASSWORD} == x ]; then
echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
exit 1;
elif [ x${KIBANA_PASSWORD} == x ]; then
echo "Set the KIBANA_PASSWORD environment variable in the .env file";
exit 1;
fi;
if [ ! -f config/certs/ca.zip ]; then
echo "Creating CA";
bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
unzip config/certs/ca.zip -d config/certs;
fi;
if [ ! -f config/certs/certs.zip ]; then
echo "Creating certs";
echo -ne \
"instances:\n"\
" - name: es01\n"\
" dns:\n"\
" - es01\n"\
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
" - name: kibana\n"\
" dns:\n"\
" - kibana\n"\
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
> config/certs/instances.yml;
bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
unzip config/certs/certs.zip -d config/certs;
fi;
echo "Setting file permissions"
chown -R root:root config/certs;
find . -type d -exec chmod 750 \{\} \;;
find . -type f -exec chmod 640 \{\} \;;
echo "Waiting for Elasticsearch availability";
until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
echo "Setting kibana_system password";
until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
echo "All done!";
'
healthcheck:
test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
interval: 1s
timeout: 5s
retries: 120
networks:
- backend-bridge
setup:
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
container_name: zipte-setup
volumes:
- certs:/usr/share/elasticsearch/config/certs
user: "0"
command: >
# … (omitted for brevity)
healthcheck:
test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
interval: 5s
timeout: 5s
retries: 60
networks:
- backend-bridge
🤖 Prompt for AI Agents
In compose/docker-compose.yml from lines 20 to 76, improve the setup service by
adding logic to handle SSL certificate renewal before expiration, enhance
password validation to check for strength and complexity rather than just
non-empty values, and reduce the healthcheck retry timeout from 120 seconds to a
shorter, more reasonable duration to avoid long waits during failures.

Comment on lines +142 to +188
es01:
container_name: zipte-es
depends_on:
setup:
condition: service_healthy
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
labels:
co.elastic.logs/module: elasticsearch
volumes:
- certs:/usr/share/elasticsearch/config/certs
- esdata01:/usr/share/elasticsearch/data
- ./es-entrypoint.sh:/usr/local/bin/es-entrypoint.sh
entrypoint: [ "/usr/local/bin/es-entrypoint.sh" ]
ports:
- ${ES_PORT}:9200
environment:
- node.name=es01
- cluster.name=${CLUSTER_NAME}
- discovery.type=single-node
- ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
- bootstrap.memory_lock=true
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=certs/es01/es01.key
- xpack.security.http.ssl.certificate=certs/es01/es01.crt
- xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.key=certs/es01/es01.key
- xpack.security.transport.ssl.certificate=certs/es01/es01.crt
- xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.license.self_generated.type=${LICENSE}
ulimits:
memlock:
soft: -1
hard: -1
healthcheck:
test:
[
"CMD-SHELL",
"curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
]
interval: 10s
timeout: 10s
retries: 120
networks:
- backend-bridge

🛠️ Refactor suggestion

The Elasticsearch service configuration needs security hardening.

SSL and security are configured well, but additional measures are needed for a production environment.

Consider the following improvements:

     environment:
       - node.name=es01
       - cluster.name=${CLUSTER_NAME}
       - discovery.type=single-node
       - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
       - bootstrap.memory_lock=true
       - xpack.security.enabled=true
+      - xpack.security.audit.enabled=true
+      - xpack.security.audit.logfile.events.include=access_denied,access_granted,anonymous_access_denied,authentication_failed,connection_denied,tampered_request
       - xpack.security.http.ssl.enabled=true

Adding memory limits is also recommended:

     ulimits:
       memlock:
         soft: -1
         hard: -1
+    deploy:
+      resources:
+        limits:
+          memory: 2g
+        reservations:
+          memory: 1g
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
es01:
container_name: zipte-es
depends_on:
setup:
condition: service_healthy
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
labels:
co.elastic.logs/module: elasticsearch
volumes:
- certs:/usr/share/elasticsearch/config/certs
- esdata01:/usr/share/elasticsearch/data
- ./es-entrypoint.sh:/usr/local/bin/es-entrypoint.sh
entrypoint: [ "/usr/local/bin/es-entrypoint.sh" ]
ports:
- ${ES_PORT}:9200
environment:
- node.name=es01
- cluster.name=${CLUSTER_NAME}
- discovery.type=single-node
- ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
- bootstrap.memory_lock=true
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=certs/es01/es01.key
- xpack.security.http.ssl.certificate=certs/es01/es01.crt
- xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.key=certs/es01/es01.key
- xpack.security.transport.ssl.certificate=certs/es01/es01.crt
- xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.license.self_generated.type=${LICENSE}
ulimits:
memlock:
soft: -1
hard: -1
healthcheck:
test:
[
"CMD-SHELL",
"curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
]
interval: 10s
timeout: 10s
retries: 120
networks:
- backend-bridge
es01:
container_name: zipte-es
depends_on:
setup:
condition: service_healthy
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
labels:
co.elastic.logs/module: elasticsearch
volumes:
- certs:/usr/share/elasticsearch/config/certs
- esdata01:/usr/share/elasticsearch/data
- ./es-entrypoint.sh:/usr/local/bin/es-entrypoint.sh
entrypoint: [ "/usr/local/bin/es-entrypoint.sh" ]
ports:
- ${ES_PORT}:9200
environment:
- node.name=es01
- cluster.name=${CLUSTER_NAME}
- discovery.type=single-node
- ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
- bootstrap.memory_lock=true
- xpack.security.enabled=true
- xpack.security.audit.enabled=true
- xpack.security.audit.logfile.events.include=access_denied,access_granted,anonymous_access_denied,authentication_failed,connection_denied,tampered_request
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=certs/es01/es01.key
- xpack.security.http.ssl.certificate=certs/es01/es01.crt
- xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.key=certs/es01/es01.key
- xpack.security.transport.ssl.certificate=certs/es01/es01.crt
- xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.license.self_generated.type=${LICENSE}
ulimits:
memlock:
soft: -1
hard: -1
deploy:
resources:
limits:
memory: 2g
reservations:
memory: 1g
healthcheck:
test:
[
"CMD-SHELL",
"curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
]
interval: 10s
timeout: 10s
retries: 120
networks:
- backend-bridge
🤖 Prompt for AI Agents
In compose/docker-compose.yml between lines 142 and 188, enhance Elasticsearch
service security by adding memory limit settings under resources to prevent
excessive resource usage in production. Review and tighten SSL configurations if
needed, and consider adding environment variables or Docker secrets for
sensitive data instead of plain environment variables. Also, ensure ulimits and
healthcheck settings are appropriate for production stability and security.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (1)
.github/workflows/ci-test.yml (1)

132-140: A fixed delay after installing the Nori plugin is not enough

If the workflow does not wait until the cluster is yellow or better again after the restart, the Gradle build step that follows can hit connection errors. Please add the same health-check loop right after the plugin installation as well.

 docker restart $CONTAINER_ID
-sleep 30
+
+# Wait for the cluster to come back up after the restart
+until curl -sf http://localhost:9200/_cluster/health?wait_for_status=yellow\&timeout=1s; do
+  echo "Waiting for Elasticsearch after plugin install..."
+  sleep 3
+done

GitHub Actions also exposes the service container ID as ${{ job.services.elasticsearch.id }}, so using that instead of a docker ps filter is more explicit and faster.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ae532f2 and 48ab6d4.

📒 Files selected for processing (2)
  • .github/workflows/ci-test.yml (2 hunks)
  • src/main/java/com/zipte/platform/core/config/external/ElasticSearchConfig.java (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/main/java/com/zipte/platform/core/config/external/ElasticSearchConfig.java
🧰 Additional context used
🪛 YAMLlint (1.37.1)
.github/workflows/ci-test.yml

[error] 130-130: trailing spaces

(trailing-spaces)

⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: test

Comment on lines +124 to +131
- name: Elasticsearch 체크
  run: |
    until curl -sSf http://localhost:9200/_cluster/health; do
      echo "Waiting for Elasticsearch..."
      sleep 5
    done


🛠️ Refactor suggestion

Strengthen the Elasticsearch readiness check and fix the YAMLlint warning

  1. Without waiting on wait_for_status=yellow here as well, the cluster can still be red right before the plugin installation.
  2. Line 130 has trailing whitespace, which triggers a YAMLlint error.
-          until curl -sSf http://localhost:9200/_cluster/health; do  
-            echo "Waiting for Elasticsearch..."  
-            sleep 5  
-          done  
+          until curl -sf http://localhost:9200/_cluster/health?wait_for_status=yellow\&timeout=1s; do
+            echo "Waiting for Elasticsearch (cluster not yet YELLOW)..."
+            sleep 3
+          done

The change above addresses both the stability issue and the static-analysis warning.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
- name: Elasticsearch 체크
  run: |
    until curl -sSf http://localhost:9200/_cluster/health; do
      echo "Waiting for Elasticsearch..."
      sleep 5
    done
- name: Elasticsearch 체크
  run: |
    until curl -sf http://localhost:9200/_cluster/health?wait_for_status=yellow\&timeout=1s; do
      echo "Waiting for Elasticsearch (cluster not yet YELLOW)..."
      sleep 3
    done
🧰 Tools
🪛 YAMLlint (1.37.1)

[error] 130-130: trailing spaces

(trailing-spaces)

🤖 Prompt for AI Agents
In .github/workflows/ci-test.yml around lines 124 to 131, update the
Elasticsearch readiness check script to wait until the cluster health status is
at least yellow by adding a query parameter like wait_for_status=yellow to the
curl command. Also, remove any trailing whitespace on line 130 to fix the
YAMLlint warning. This will improve the stability of the check and resolve the
linting error.

Comment on lines +43 to +55
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
  ports:
    - 9200:9200
  options: >-
    --health-cmd "curl -f http://localhost:9200/_cluster/health || exit 1"
    --health-interval 10s
    --health-timeout 5s
    --health-retries 10
  env:
    discovery.type: single-node
    ES_JAVA_OPTS: "-Xms512m -Xmx512m"


🛠️ Refactor suggestion

Elasticsearch service definition: the health-check condition is too lenient

The current --health-cmd only checks for an HTTP 200 response, so it can pass even when the cluster is red, which is likely to cause instability in the later steps (plugin installation, tests). Please use the wait_for_status=yellow option so the container only counts as healthy once the cluster is at least yellow.

-          --health-cmd "curl -f http://localhost:9200/_cluster/health || exit 1"
+          --health-cmd "curl -f http://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=50s || exit 1"
+          --health-start-period 10s   # allow for initial startup time

This change alone can significantly reduce CI flakiness.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
  ports:
    - 9200:9200
  options: >-
    --health-cmd "curl -f http://localhost:9200/_cluster/health || exit 1"
    --health-interval 10s
    --health-timeout 5s
    --health-retries 10
  env:
    discovery.type: single-node
    ES_JAVA_OPTS: "-Xms512m -Xmx512m"
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
  ports:
    - 9200:9200
  options: >-
    --health-cmd "curl -f http://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=50s || exit 1"
    --health-start-period 10s # allow for initial startup time
    --health-interval 10s
    --health-timeout 5s
    --health-retries 10
  env:
    discovery.type: single-node
    ES_JAVA_OPTS: "-Xms512m -Xmx512m"
🤖 Prompt for AI Agents
In .github/workflows/ci-test.yml between lines 43 and 55, the Elasticsearch
health check command only verifies an HTTP 200 response, allowing the cluster to
be considered healthy even if its status is red. Update the --health-cmd to use
curl with the parameter wait_for_status=yellow to ensure the cluster is at least
yellow status before marking it healthy. This involves modifying the health
check URL to http://localhost:9200/_cluster/health?wait_for_status=yellow and
keeping the existing failure handling.

@github-actions

Test Coverage Report

File Coverage [62.69%] 🍏
QuestionELKDocument.java 66.18% 🍏
QuestionPersistenceAdapter.java 63.51% 🍏
QaApi.java 62% 🍏
QuestionService.java 61.25% 🍏
Total Project Coverage 33.76% 🍏

@doup2001 doup2001 force-pushed the feat/ZIP-84-Review-ELK branch from 48ab6d4 to d197ea8 on June 28, 2025 06:59
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

♻️ Duplicate comments (2)
.github/workflows/ci-test.yml (2)

48-51: The wait_for_status=yellow parameter is still missing
The health check can pass even when the cluster is red, so CI flakes are likely. The same point was raised in the previous review.

-          --health-cmd "curl -f http://localhost:9200/_cluster/health || exit 1"
+          --health-cmd "curl -f http://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=50s || exit 1"
+          --health-start-period 10s

124-130: The Elasticsearch wait loop also needs wait_for_status=yellow, plus trailing-whitespace removal

  1. The readiness check only verifies a 200 OK, so it passes even in a red state.
  2. Trailing whitespace at the end of line 130 causes a YAMLlint error.
-until curl -sSf http://localhost:9200/_cluster/health; do
-  echo "Waiting for Elasticsearch..."
-  sleep 5
-done 
+until curl -sf http://localhost:9200/_cluster/health?wait_for_status=yellow\&timeout=1s; do
+  echo "Waiting for Elasticsearch (cluster not yet YELLOW)..."
+  sleep 3
+done
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 48ab6d4 and d197ea8.

📒 Files selected for processing (3)
  • .github/workflows/ci-test.yml (2 hunks)
  • .github/workflows/dev-ci-cd.yml (3 hunks)
  • src/main/java/com/zipte/platform/core/config/external/ElasticSearchConfig.java (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/main/java/com/zipte/platform/core/config/external/ElasticSearchConfig.java
🧰 Additional context used
🪛 YAMLlint (1.37.1)
.github/workflows/ci-test.yml

[error] 130-130: trailing spaces

(trailing-spaces)

🔇 Additional comments (2)
.github/workflows/ci-test.yml (1)

43-55: Check whether the STACK_VERSION environment variable is defined
The workflow uses image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}, but the variable is not defined at the top of the workflow file or in an env: block. If it is undefined, the container fails to start with an invalid reference format error and the whole job fails.

  • Please check whether it is set in repository/org-level secrets or variables, or in the workflow env: block.
  • If not, an explicit declaration like the example below is recommended.
env:
  STACK_VERSION: "7.17.0"
.github/workflows/dev-ci-cd.yml (1)

103-109: The health check may be missing authentication

If security is enabled (the default on 8.x), the following command returns 401 and the loop never exits.

until curl -sSf http://localhost:9200/_cluster/health; do

If security has not been disabled, Basic Auth needs to be added.

-until curl -sSf http://localhost:9200/_cluster/health; do
+until curl -sSf -u "elastic:${ELASTIC_PASSWORD}" http://localhost:9200/_cluster/health; do

If you set xpack.security.enabled: "false" as suggested in the earlier comment, this passes without any change. Please confirm which approach you are taking and apply it consistently.

Comment on lines +135 to +139
CONTAINER_ID=$(docker ps -q -f ancestor=docker.elastic.co/elasticsearch/elasticsearch:7.17.0)
echo "Elasticsearch container ID: $CONTAINER_ID"
docker exec $CONTAINER_ID bin/elasticsearch-plugin install analysis-nori --batch
docker restart $CONTAINER_ID
sleep 30

🛠️ Refactor suggestion

The Nori plugin installation logic is pinned to a version and may break

The container is looked up with docker ps -q -f ancestor=docker.elastic.co/elasticsearch/elasticsearch:7.17.0,
so if a version different from STACK_VERSION is used, the ID lookup fails and the plugin ends up not installed.

-CONTAINER_ID=$(docker ps -q -f ancestor=docker.elastic.co/elasticsearch/elasticsearch:7.17.0)
+CONTAINER_ID=$(docker ps -q -f ancestor=docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION})

Adding error handling for the case where the resolved ID is empty (set -euo pipefail plus an if check) would also improve robustness.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
CONTAINER_ID=$(docker ps -q -f ancestor=docker.elastic.co/elasticsearch/elasticsearch:7.17.0)
echo "Elasticsearch container ID: $CONTAINER_ID"
docker exec $CONTAINER_ID bin/elasticsearch-plugin install analysis-nori --batch
docker restart $CONTAINER_ID
sleep 30
CONTAINER_ID=$(docker ps -q -f ancestor=docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION})
echo "Elasticsearch container ID: $CONTAINER_ID"
docker exec $CONTAINER_ID bin/elasticsearch-plugin install analysis-nori --batch
docker restart $CONTAINER_ID
sleep 30
🤖 Prompt for AI Agents
In .github/workflows/ci-test.yml around lines 135 to 139, the Elasticsearch
container ID is fetched using a hardcoded version tag which can cause failures
if STACK_VERSION differs. Replace the fixed version with the dynamic
STACK_VERSION variable to ensure the correct container is targeted.
Additionally, add error handling to check if the container ID is empty before
proceeding, and enable strict shell options like set -euo pipefail to improve
script robustness.

Comment on lines 39 to 51
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
ports:
- 9200:9200
options: >-
--health-cmd "curl -f http://localhost:9200/_cluster/health || exit 1"
--health-interval 10s
--health-timeout 5s
--health-retries 10
env:
discovery.type: single-node
ES_JAVA_OPTS: "-Xms512m -Xmx512m"


⚠️ Potential issue

STACK_VERSION is undefined, and the 8.x security defaults are not handled

  1. ${STACK_VERSION} is not defined anywhere in the workflow. Without a default, the image pull fails.
  2. From Elasticsearch 8.x onward, xpack.security.enabled defaults to true, so an unauthenticated call to /_cluster/health is rejected with a 401 and the health check fails.

Pin the version and disable security as in the example below, or set the credentials together with ELASTIC_PASSWORD.

       elasticsearch:
-        image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
+        image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0   # e.g. a pinned version
         ports:
           - 9200:9200
         options: >-
           --health-cmd "curl -f http://localhost:9200/_cluster/health || exit 1"
           --health-interval 10s
           --health-timeout 5s
           --health-retries 10
         env:
           discovery.type: single-node
           ES_JAVA_OPTS: "-Xms512m -Xmx512m"
+          xpack.security.enabled: "false"          # required on 8.x, optional on 7.x

Alternatively, you can declare the following at the workflow level:

env:
  STACK_VERSION: 7.17.0
  ELASTIC_PASSWORD: ${{ secrets.ELASTIC_PASSWORD }}

and add Basic Auth to the --health-cmd and curl commands.

If this is merged as is, the Elasticsearch container is likely to fail to start or to pass its health check during CI.

🤖 Prompt for AI Agents
In .github/workflows/dev-ci-cd.yml around lines 39 to 51, the STACK_VERSION
variable is not defined, causing the Elasticsearch image pull to fail, and the
health check fails on Elasticsearch 8.x due to default security settings
requiring authentication. Define STACK_VERSION with a default value like 7.17.0
in the env section, and either disable security by setting
xpack.security.enabled to false in the environment variables or configure
ELASTIC_PASSWORD and update the health check curl command to include basic
authentication headers using the password. This ensures the Elasticsearch
container starts and passes health checks in the CI pipeline.

Comment on lines +110 to 118
# 3-2. Nori 플러그인 설치
- name: Install Nori plugin
run: |
CONTAINER_ID=$(docker ps -q -f ancestor=docker.elastic.co/elasticsearch/elasticsearch:7.17.0)
echo "Elasticsearch container ID: $CONTAINER_ID"
docker exec $CONTAINER_ID bin/elasticsearch-plugin install analysis-nori --batch
docker restart $CONTAINER_ID
sleep 30

🛠️ Refactor suggestion

Nori plugin installation step: version mismatch and container lookup improvements

The container is found with docker ps -q -f ancestor=...:7.17.0, so the plugin installation step fails if this differs from the image version used above.

-CONTAINER_ID=$(docker ps -q -f ancestor=docker.elastic.co/elasticsearch/elasticsearch:7.17.0)
+CONTAINER_ID=$(docker ps -q -f name=elasticsearch)

Or reuse the same ${STACK_VERSION} variable.

The install-plugin-then-restart-container flow also increases CI time and instability.
Consider a custom Dockerfile that bakes the plugin in, or automating installation via the ELASTICSEARCH_PLUGINS environment variable (supported on 8.x).

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
# 3-2. Nori 플러그인 설치
- name: Install Nori plugin
run: |
CONTAINER_ID=$(docker ps -q -f ancestor=docker.elastic.co/elasticsearch/elasticsearch:7.17.0)
echo "Elasticsearch container ID: $CONTAINER_ID"
docker exec $CONTAINER_ID bin/elasticsearch-plugin install analysis-nori --batch
docker restart $CONTAINER_ID
sleep 30
# 3-2. Nori 플러그인 설치
- name: Install Nori plugin
run: |
CONTAINER_ID=$(docker ps -q -f name=elasticsearch)
echo "Elasticsearch container ID: $CONTAINER_ID"
docker exec $CONTAINER_ID bin/elasticsearch-plugin install analysis-nori --batch
docker restart $CONTAINER_ID
sleep 30
🤖 Prompt for AI Agents
In .github/workflows/dev-ci-cd.yml around lines 110 to 118, the Nori plugin
installation step uses a hardcoded Elasticsearch image version which may not
match the version used elsewhere, causing failures. Replace the hardcoded
version with the shared ${STACK_VERSION} variable to ensure consistency.
Additionally, avoid installing the plugin at runtime and restarting the
container as it increases CI time and instability; instead, modify the
Dockerfile to include the plugin during build or use the ELASTICSEARCH_PLUGINS
environment variable for automatic plugin installation if using Elasticsearch
8.x.

@github-actions

Test Coverage Report

File Coverage [62.69%] 🍏
QuestionELKDocument.java 66.18% 🍏
QuestionPersistenceAdapter.java 63.51% 🍏
QaApi.java 62% 🍏
QuestionService.java 61.25% 🍏
Total Project Coverage 33.76% 🍏

@doup2001 doup2001 merged commit 26ff6c6 into develop Jun 28, 2025
3 checks passed