Conversation
# Conflicts: # src/main/java/cc/backend/admin/amateurShow/service/AdminAmateurShowService.java
📝 Walkthrough
Migrates in-process Spring ApplicationEvents to Kafka: adds Kafka infra, domain event types, producers/consumers, commit-phase Spring listeners to publish Kafka events AFTER_COMMIT, updates services/repositories/entities to emit commit events, and removes legacy in-process event DTOs and listeners.
Changes
Sequence Diagram(s)
```mermaid
sequenceDiagram
    rect rgba(200,200,255,0.5)
    participant Admin as Admin Service
    participant AdminSvc as AdminApprovalService
    participant DB as Database/Transaction
    end
    rect rgba(200,255,200,0.5)
    participant SpringEvt as Spring ApplicationEventBus
    participant CommitListener as ApproveCommitEventListener
    end
    rect rgba(255,200,200,0.5)
    participant Producer as ApprovalShowProducer
    participant Kafka as Kafka Broker
    participant ApprovalCons as ApprovalConsumer
    participant LikerCons as LikerConsumer
    participant RecommendCons as RecommendConsumer
    participant NoticeSvc as NoticeService
    end
    Admin->>AdminSvc: approveShow()
    AdminSvc->>DB: persist approval
    DB-->>SpringEvt: publish ApproveCommitEvent (after commit)
    SpringEvt->>CommitListener: onApproveCommit(event) [AFTER_COMMIT]
    CommitListener->>Producer: publish(ApprovalShowEvent)
    Producer->>Kafka: send to approval-show-topic
    Kafka->>ApprovalCons: deliver ApprovalShowEvent
    Kafka->>LikerCons: deliver ApprovalShowEvent
    Kafka->>RecommendCons: deliver ApprovalShowEvent
    ApprovalCons->>NoticeSvc: notifyApproval()
    LikerCons->>NoticeSvc: notifyLikers()
    RecommendCons->>NoticeSvc: notifyRecommendation()
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
Possibly related PRs
🚥 Pre-merge checks: ✅ 3 passed | ❌ 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (3 passed)
Actionable comments posted: 14
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/main/java/cc/backend/board/service/BoardService.java (1)
348-369: ⚠️ Potential issue | 🟠 Major: Kafka message published inside an active transaction — risks inconsistency on rollback.
`promoteToHotBoard()` is called from `toggleLike()`, which is `@Transactional`. The `hotBoardProducer.publish()` on line 368 sends a Kafka message before the transaction commits. If the transaction rolls back after the Kafka send (e.g., due to a subsequent DB error), the message will still be in Kafka, leading to phantom notifications.
The rest of the codebase correctly uses a two-phase pattern: `ApplicationEventPublisher` → `@TransactionalEventListener(AFTER_COMMIT)` → Kafka producer. This file should follow the same pattern for consistency and correctness.
🐛 Suggested fix: use the same two-phase event pattern
- Create a `HotBoardCommitEvent` record (similar to `TicketReservationCommitEvent`).
- Publish it via `ApplicationEventPublisher` in `promoteToHotBoard()`.
- Create a `HotBoardCommitEventListener` with `@Async` + `@TransactionalEventListener(AFTER_COMMIT)` that delegates to `HotBoardProducer`.
```diff
- private final HotBoardProducer hotBoardProducer;
+ private final ApplicationEventPublisher eventPublisher;

  // in promoteToHotBoard():
- hotBoardProducer.publish(new HotBoardEvent(board.getId(), board.getMember().getId()));
+ eventPublisher.publishEvent(new HotBoardCommitEvent(board.getId(), board.getMember().getId()));
```
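For illustration, a minimal sketch of the commit-event listener described above; class names, packages, and wiring mirror the suggestion but are assumptions, not code from the repository:
```java
// Sketch only. Assumes a companion record, e.g.:
//   public record HotBoardCommitEvent(Long boardId, Long memberId) {}
import lombok.RequiredArgsConstructor;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Component;
import org.springframework.transaction.event.TransactionPhase;
import org.springframework.transaction.event.TransactionalEventListener;

@Component
@RequiredArgsConstructor
public class HotBoardCommitEventListener {

    private final HotBoardProducer hotBoardProducer;

    @Async
    @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
    public void onHotBoardCommit(HotBoardCommitEvent event) {
        // Only runs after the surrounding transaction has committed,
        // so a rollback can no longer leave a phantom Kafka message behind.
        hotBoardProducer.publish(new HotBoardEvent(event.boardId(), event.memberId()));
    }
}
```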
🤖 Fix all issues with AI agents
In `@docker-compose.yml`:
- Around line 55-82: Add a Docker healthcheck to the kafka service so Compose
can wait for Kafka to be ready (not just started) and update the dependents to
wait on service health; specifically, under the kafka service (the one with
KAFKA_LISTENERS/KAFKA_ADVERTISED_LISTENERS env vars) add a healthcheck that
probes the broker (e.g., attempts a TCP connect to the PLAINTEXT listener or
runs a lightweight Kafka client check) with sensible interval, timeout, retries
and start_period settings, and then modify the depends_on entries for app-blue
and app-green to use condition: service_healthy so they only start when kafka is
healthy.
In `@src/main/java/cc/backend/amateurShow/repository/AmateurShowRepository.java`:
- Around line 88-89: The query method findHashtagsByMemberId can return null
elements because AmateurShow.hashtag is nullable; update the query to exclude
nulls (e.g., add "AND s.hashtag IS NOT NULL") or modify the service path by
filtering nulls before splitting in NoticeServiceImpl (where it currently calls
.split("#") on each element) so no null value is passed to split; locate
findHashtagsByMemberId and the splitting logic in NoticeServiceImpl and apply
one of these fixes to prevent NullPointerException.
In `@src/main/java/cc/backend/image/service/ImageService.java`:
- Around line 41-43: The saveImage method currently returns null when
requestDTO.getKeyName() is missing, which is fragile; instead validate using
isBlank() (consistent with updateShowImage) and throw a clear exception
(IllegalArgumentException or your GeneralException) when keyName is blank; keep
saveImages as the sole caller that tolerates missing key names by relying on its
existing catch (GeneralException ex) handling or by explicitly pre-checking, and
update references to requestDTO.getKeyName() in saveImage to use isBlank() and
to throw rather than return null.
- Around line 86-90: The current catch for GeneralException in the batch loop
(around saveImage and downstream calls like imageRepository.save) is too narrow
and will let unexpected runtime exceptions abort the whole batch; change the
handler to catch Exception (or at least RuntimeException) instead of
GeneralException so failures are logged and the loop continues, keeping the
existing log.warn call (including dto.getKeyName(), memberId and
ex.getMessage()) and any DLQ/notification handling; ensure you update the catch
block that currently references GeneralException to reference Exception (or
RuntimeException) and preserve existing logging behavior.
In `@src/main/java/cc/backend/kafka/event/common/DomainEvent.java`:
- Around line 1-10: Remove the unnecessary Java serialization coupling from the
DomainEvent interface: delete the "extends Serializable" clause from the
DomainEvent interface declaration so it no longer implements
java.io.Serializable; keep the existing getEventType() method signature as-is
(DomainEvent and DomainEventType) and, if desired later, add an optional default
metadata method such as Instant occurredAt() to the DomainEvent interface for
observability, but do not reintroduce Serializable.
In
`@src/main/java/cc/backend/kafka/event/reservationCompletedEvent/ReservationCompletedConsumer.java`:
- Line 3: Remove the unused import RejectShowEvent from
ReservationCompletedConsumer; locate the import statement "import
cc.backend.kafka.event.rejectShowEvent.RejectShowEvent;" at the top of the
ReservationCompletedConsumer class and delete it so only actually used imports
remain.
In `@src/main/java/cc/backend/kafka/KafkaConfig.java`:
- Around line 88-95: The highThroughputFactory bean (method
highThroughputFactory) is unused; either remove this
ConcurrentKafkaListenerContainerFactory<String, DomainEvent> bean entirely or
document its intended purpose and target topics and update any `@KafkaListener` to
reference containerFactory = "highThroughputFactory"; if you keep it, add a
clear comment above highThroughputFactory explaining which topic(s)/listeners
should use it and why (e.g., high-partition topics, concurrency=3) and ensure
its dependencies (consumerFactory() and errorHandler(kafkaTemplate)) are correct
and referenced elsewhere.
- Around line 49-63: The JsonDeserializer in consumerFactory currently calls
addTrustedPackages("*") which trusts all packages; change this to whitelist only
your domain event package(s) (e.g., use DomainEvent.class.getPackageName() or
the explicit package string for your domain events) so only your domain classes
are trusted during deserialization; update the call on the JsonDeserializer
instance (the one named deserializer) to use the specific package name(s)
instead of "*" and leave the rest of the consumerFactory configuration intact.
In `@src/main/java/cc/backend/kafka/TopicConfig.java`:
- Around line 9-17: TopicConfig currently only defines approvalShowTopic
(approvalShowTopic()) but the application uses five other topics
(hot-board-topic, comment-created-topic, reply-created-topic,
reservation-completed-topic, reject-show-topic) which must be declared as
NewTopic beans to avoid runtime failures when auto.create.topics.enable is
false; add NewTopic `@Bean` methods for each missing topic (e.g., hotBoardTopic(),
commentCreatedTopic(), replyCreatedTopic(), reservationCompletedTopic(),
rejectShowTopic()) using
TopicBuilder.name(...).partitions(1).replicas(1).build(), and replace hard-coded
names by extracting topic name constants or binding them to application.yml
properties so producers/consumers and TopicConfig reference the same
identifiers.
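For reference, a sketch of the extra topic beans this prompt describes (topic names are the ones listed above; partition and replica counts simply mirror the existing single-broker setup):
```java
// Sketch: declare the remaining topics next to approvalShowTopic() in TopicConfig.
@Bean
public NewTopic hotBoardTopic() {
    return TopicBuilder.name("hot-board-topic").partitions(1).replicas(1).build();
}

@Bean
public NewTopic commentCreatedTopic() {
    return TopicBuilder.name("comment-created-topic").partitions(1).replicas(1).build();
}

@Bean
public NewTopic replyCreatedTopic() {
    return TopicBuilder.name("reply-created-topic").partitions(1).replicas(1).build();
}

@Bean
public NewTopic reservationCompletedTopic() {
    return TopicBuilder.name("reservation-completed-topic").partitions(1).replicas(1).build();
}

@Bean
public NewTopic rejectShowTopic() {
    return TopicBuilder.name("reject-show-topic").partitions(1).replicas(1).build();
}
```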
In `@src/main/java/cc/backend/memberLike/repository/MemberLikeRepository.java`:
- Around line 45-47: The current unbounded method findAllDistinctMembers() loads
every distinct liker into memory; change its signature to a paginated or
cursor-based variant (e.g., Page<Member> or Slice<Member>
findAllDistinctMembers(Pageable pageable)) and keep the same JPQL ("SELECT
DISTINCT ml.liker FROM MemberLike ml") so the DB returns a page/slice instead of
a full List; update callers (recommendation batch/process code) to iterate pages
(or use a streaming/batch cursor approach) and tune page size to limit memory
usage per batch.
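One possible shape for the paginated variant and a caller loop (the handler name is hypothetical and the page size is only a starting point):
```java
// Repository: same JPQL, but returning a Slice so the database pages the result.
@Query("SELECT DISTINCT ml.liker FROM MemberLike ml")
Slice<Member> findAllDistinctMembers(Pageable pageable);

// Caller sketch: iterate page by page instead of loading every liker at once.
int page = 0;
Slice<Member> slice;
do {
    slice = memberLikeRepository.findAllDistinctMembers(PageRequest.of(page++, 500));
    slice.getContent().forEach(this::processRecommendationFor); // hypothetical per-member handler
} while (slice.hasNext());
```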
In
`@src/main/java/cc/backend/notice/service/eventListener/RejectCommitEventListener.java`:
- Around line 1-27: The project is missing `@EnableAsync` on the Spring
configuration so `@Async` on listeners like
RejectCommitEventListener.onRejectCommit is ignored; open the main configuration
class BackendApplication (the SpringBootApplication class), add the `@EnableAsync`
annotation (and its import
org.springframework.scheduling.annotation.EnableAsync) alongside the existing
`@SpringBootApplication`/`@EnableJpaAuditing`/`@EnableScheduling` so that `@Async` on
methods such as RejectCommitEventListener.onRejectCommit is actually enabled and
listeners run asynchronously after commit.
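A sketch of the application-class change (the surrounding annotations are taken from the prompt and may not match the file exactly):
```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.jpa.repository.config.EnableJpaAuditing;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.annotation.EnableScheduling;

@EnableAsync          // makes @Async on listeners like RejectCommitEventListener effective
@EnableScheduling
@EnableJpaAuditing
@SpringBootApplication
public class BackendApplication {
    public static void main(String[] args) {
        SpringApplication.run(BackendApplication.class, args);
    }
}
```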
In `@src/main/java/cc/backend/notice/service/NoticeServiceImpl.java`:
- Around line 275-278: In NoticeServiceImpl there is inconsistent hashtag
parsing between notifyRecommendation and shouldRecommendToMember—create a
private helper method parseHashtags(String) that uses the same split regex as
the first instance (e.g. split on "[#,\\s]+"), trims tokens, filters out empty
strings, and returns a Set<String>; then replace the inline parsing in both
notifyRecommendation and shouldRecommendToMember with calls to
parseHashtags(show.getHashtag()) so both use identical logic and produce
comparable sets for intersection.
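A sketch of the shared helper (the regex comes from the prompt above; the null handling is an added assumption):
```java
// Sketch: single hashtag-parsing helper reused by notifyRecommendation and shouldRecommendToMember.
private Set<String> parseHashtags(String hashtag) {
    if (hashtag == null || hashtag.isBlank()) {
        return Set.of();
    }
    return Arrays.stream(hashtag.split("[#,\\s]+"))
            .map(String::trim)
            .filter(token -> !token.isEmpty())
            .collect(Collectors.toSet());
}
```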
- Around line 263-268: The class NoticeServiceImpl is annotated with
`@Transactional`(readOnly = true) so methods that perform writes are running in
read-only transactions; add method-level `@Transactional` (no readOnly) to
notifyRecommendation, notifyApproval, and notifyLikers to enable save/saveAll
operations to commit, and add the missing `@Override` annotations to each method
to reflect they implement NoticeService; locate the methods
notifyRecommendation, notifyApproval, and notifyLikers in NoticeServiceImpl and
annotate them with `@Transactional` and `@Override`.
In `@src/main/resources/application.yml`:
- Around line 49-50: The configuration currently sets
spring.json.trusted.packages to "*" which allows JsonDeserializer to instantiate
any class and opens deserialization attacks; change the property value to a
comma-separated list of only your application's event/domain packages (e.g.,
com.example.events, com.example.domain) so JsonDeserializer only trusts those
packages, and verify the consumer configuration/class that uses JsonDeserializer
is loading the updated spring.json.trusted.packages value.
🧹 Nitpick comments (20)
src/main/java/cc/backend/memberLike/repository/MemberLikeRepository.java (1)
41-43: Explicit `@Query` is redundant here.
Spring Data JPA can derive this query automatically from the method name `findByLikerId` (resolves to `liker.id`). The `@Query` annotation and `@Param` can be removed, leaving just: `List<MemberLike> findByLikerId(Long memberId);`
src/main/java/cc/backend/board/service/CommentService.java (1)
14-22: Unused Kafka imports remain after refactoring to commit-event pattern.
`CommentEvent`, `CommentProducer`, `ReplyEvent`, `ReplyProducer` (lines 14–17), and `TicketReservationCommitEvent` (line 22) are imported but never referenced in this file. These appear to be leftovers from development. Removing them keeps the file clean and avoids confusion about whether Kafka producers are used directly here.
🧹 Proposed cleanup
```diff
-import cc.backend.kafka.event.commentEvent.CommentEvent;
-import cc.backend.kafka.event.commentEvent.CommentProducer;
-import cc.backend.kafka.event.replyEvent.ReplyEvent;
-import cc.backend.kafka.event.replyEvent.ReplyProducer;
 import cc.backend.member.entity.Member;
 import cc.backend.member.repository.MemberRepository;
 import cc.backend.notice.event.CommentCommitEvent;
 import cc.backend.notice.event.ReplyCommitEvent;
-import cc.backend.notice.event.TicketReservationCommitEvent;
```
build.gradle (1)
74-76: Consider adding `spring-kafka-test` for testing Kafka components.
The Kafka dependency addition is clean and correctly leverages the Spring BOM for version management. However, the project currently has no Kafka integration tests despite implementing multiple producers and consumers. Consider adding the test dependency for future integration tests:
```diff
+ testImplementation 'org.springframework.kafka:spring-kafka-test'
```
This will enable you to write integration tests using `@EmbeddedKafka` and other Kafka testing utilities.
src/main/java/cc/backend/admin/amateurShow/service/AdminAmateurShowService.java (1)
12-12: Unused `ApplicationEventPublisher` import remains.
`ApplicationEventPublisher` is imported but no longer used in this class — the event-publishing field was removed. This will trigger an unused-import warning.
Proposed fix
```diff
-import org.springframework.context.ApplicationEventPublisher;
```
src/main/java/cc/backend/ticket/service/TempTicketServiceImpl.java (1)
11-12: Unused imports: `TicketReservationCommitEvent` and `ReservationCompletedProducer`.
Both are imported but neither is referenced in any method body. Once the event publishing logic is restored (per the comment above), these may become used — but currently they are dead code.
src/main/java/cc/backend/kafka/event/rejectShowEvent/RejectShowProducer.java (1)
17-26: Consider handling the `KafkaTemplate.send()` result for observability.
All producers in this PR fire-and-forget the `CompletableFuture` returned by `kafkaTemplate.send()`. If the broker is unreachable or the send fails, the error is silently swallowed. At minimum, attaching a callback to log failures would improve debuggability in production. This applies equally to all other producers (`ApprovalShowProducer`, `HotBoardProducer`, `CommentProducer`, `ReplyProducer`, `ReservationCompletedProducer`).
Example: add error logging
```diff
         kafkaTemplate.send(
                 TOPIC,
                 event.amateurShowId().toString(),
                 event
-        );
+        ).whenComplete((result, ex) -> {
+            if (ex != null) {
+                log.error("Failed to publish RejectShowEvent for showId={}", event.amateurShowId(), ex);
+            }
+        });
```
docker-compose.yml (2)
56-56: Pin Kafka and Kafka UI image versions to avoid unexpected breakage.
Using `:latest` for `confluentinc/cp-kafka` and `provectuslabs/kafka-ui` means any upstream release could introduce breaking changes or incompatibilities. Pin to a specific version (e.g., `confluentinc/cp-kafka:7.6.0`).
97-100: Minor inconsistency: `kafka_data` volume missing explicit `driver: local`.
`redis_data` specifies `driver: local` while `kafka_data` does not. While `local` is the default, being explicit keeps the config consistent and self-documenting.
Suggested fix
```diff
   kafka_data:
+    driver: local
```
src/main/java/cc/backend/notice/entity/MemberNotice.java (1)
29-31: Consider adding `@Column` constraints on `personalMsg`.
The `personalMsg` field defaults to a nullable `VARCHAR(255)`. If there's a maximum length requirement or if it should be non-nullable in certain flows, adding an explicit `@Column(length = ..., nullable = ...)` annotation would make the schema more intentional.
src/main/resources/application.yml (1)
45-51: Consider adding `isolation.level: read_committed` for transactional consistency.
The PR mentions adding transactional settings to Kafka consumers (and consumers are annotated with `@Transactional`). If producers ever use Kafka transactions, consumers without `read_committed` isolation will see uncommitted messages. Adding this proactively avoids subtle ordering/visibility bugs.
src/main/java/cc/backend/kafka/event/replyEvent/ReplyConsumer.java (1)
15-26: Consider adding error handling for poison-pill messages.
If `notifyNewReply` throws a persistent exception (e.g., referenced entity not found), the default retry behavior will re-deliver the same message indefinitely. Consider configuring a `DefaultErrorHandler` with a `DeadLetterPublishingRecoverer` at the container factory level, or adding a try-catch with logging here to prevent infinite retry loops. This applies to all consumers in this PR (CommentConsumer, ApprovalConsumer, etc.).
src/main/java/cc/backend/notice/repository/NoticeRepository.java (1)
10-11: Use primitive `boolean` return type for the `existsBy` query.
Spring Data JPA `existsBy...` derived queries always return a non-null boolean. Using the `Boolean` wrapper is unnecessary and inconsistent with the `Boolean` → `boolean` change applied to `MemberNotice.isRead` in this same PR.
Suggested fix
```diff
-    Boolean existsByContentIdAndType(Long contentId, NoticeType type);
+    boolean existsByContentIdAndType(Long contentId, NoticeType type);
```
src/main/java/cc/backend/kafka/event/hotBoardEvent/HotBoardProducer.java (1)
14-23: `KafkaTemplate.send()` result is silently discarded — message loss will go unnoticed.
`kafkaTemplate.send()` returns a `CompletableFuture<SendResult>`. Ignoring it means send failures (broker down, serialization error, buffer full) are silently swallowed. At minimum, attach a callback to log failures. This applies to all producers in this PR (ApprovalShowProducer, CommentProducer, ReplyProducer, RejectShowProducer, ReservationCompletedProducer).
Example fix
```diff
-        kafkaTemplate.send(
-                TOPIC,
-                event.boardId().toString(),
-                event
-        );
+        kafkaTemplate.send(
+                TOPIC,
+                event.boardId().toString(),
+                event
+        ).whenComplete((result, ex) -> {
+            if (ex != null) {
+                log.error("Failed to publish HotBoardEvent for boardId={}", event.boardId(), ex);
+            }
+        });
```
src/main/java/cc/backend/kafka/event/commentEvent/CommentProducer.java (1)
15-24: Consider handling the `KafkaTemplate.send()` result for observability.
`kafkaTemplate.send()` returns a `CompletableFuture`. If the send fails (e.g., broker unavailable, serialization error), the failure is silently swallowed. This applies to all producers in this PR (CommentProducer, HotBoardProducer, ReplyProducer, ApprovalShowProducer, RejectShowProducer, ReservationCompletedProducer). At minimum, attach a callback to log failures so you have observability into lost messages:
♻️ Example
```diff
         kafkaTemplate.send(
                 TOPIC,
                 event.boardId().toString(),
                 event
-        );
+        ).whenComplete((result, ex) -> {
+            if (ex != null) {
+                log.error("Failed to publish to {}: {}", TOPIC, ex.getMessage(), ex);
+            }
+        });
```
(Requires adding `@Slf4j` to the class.)
src/main/java/cc/backend/kafka/event/reservationCompletedEvent/ReservationCompletedConsumer.java (1)
12-12: Class visibility is package-private, unlike all other consumers, which are `public`.
While Spring can still instantiate package-private `@Component` beans, this is inconsistent with every other consumer in the PR (`HotBoardConsumer`, `CommentConsumer`, `ReplyConsumer`, `ApprovalConsumer`, `RejectShowConsumer` are all `public class`).
```diff
-class ReservationCompletedConsumer {
+public class ReservationCompletedConsumer {
```
src/main/java/cc/backend/kafka/event/commentEvent/CommentConsumer.java (1)
3-15: Extensive unused imports.
Lines 3–15 import `ErrorStatus`, `GeneralException`, `Board`, `Comment`, `BoardRepository`, `CommentRepository`, `Member`, `MemberRepository`, `MemberNotice`, `Notice`, `NoticeType`, `MemberNoticeRepository`, and `NoticeRepository` — none of which are referenced in this class. These appear to be remnants of an earlier implementation. Cleaning them up would reduce noise.
♻️ Remove unused imports
```diff
 package cc.backend.kafka.event.commentEvent;

-import cc.backend.apiPayLoad.code.status.ErrorStatus;
-import cc.backend.apiPayLoad.exception.GeneralException;
-import cc.backend.board.entity.Board;
-import cc.backend.board.entity.Comment;
-import cc.backend.board.repository.BoardRepository;
-import cc.backend.board.repository.CommentRepository;
-import cc.backend.member.entity.Member;
-import cc.backend.member.repository.MemberRepository;
-import cc.backend.notice.entity.MemberNotice;
-import cc.backend.notice.entity.Notice;
-import cc.backend.notice.entity.enums.NoticeType;
-import cc.backend.notice.repository.MemberNoticeRepository;
-import cc.backend.notice.repository.NoticeRepository;
 import cc.backend.notice.service.NoticeService;
 import lombok.RequiredArgsConstructor;
 import org.springframework.kafka.annotation.KafkaListener;
```
src/main/java/cc/backend/kafka/event/reservationCompletedEvent/ReservationCompletedProducer.java (1)
17-26: `kafkaTemplate.send()` result is ignored — silent event loss on failure.
`KafkaTemplate.send()` returns a `CompletableFuture<SendResult>`. If the broker is unreachable or serialization fails, the error is silently swallowed because the returned future is never observed. This applies to all producers in this PR (Comment, Reply, HotBoard, ApprovalShow, RejectShow, ReservationCompleted). At minimum, attach a callback to log failures so you have observability into lost events:
♻️ Add error logging on send failure
```diff
     public void publish(ReservationCompletedEvent event) {
         if (event == null) return;

         // memberId 기준 파티션
-        kafkaTemplate.send(
+        kafkaTemplate.send(
                 TOPIC,
                 event.memberId().toString(),
                 event
-        );
+        ).whenComplete((result, ex) -> {
+            if (ex != null) {
+                log.error("Failed to publish ReservationCompletedEvent to {}: {}", TOPIC, ex.getMessage(), ex);
+            }
+        });
     }
```
src/main/java/cc/backend/kafka/KafkaConfig.java (1)
65-76: DLQ + retry setup is reasonable for a starter configuration.
One thing to be aware of: the DLQ topic (e.g., `comment-created-topic-dlq`) is auto-created by the `DeadLetterPublishingRecoverer` if `auto.create.topics.enable` is true on the broker. Ensure that's the intended behavior, or pre-create the DLQ topics in `TopicConfig` alongside the source topics for explicit partition/replica control.
src/main/java/cc/backend/notice/service/NoticeService.java (1)
12-13: `@Service` on an interface is unconventional.
`@Service` is typically placed on the concrete implementation (`NoticeServiceImpl`), not the interface. While Spring tolerates this, it can confuse readers and some tooling. If the implementation already carries `@Service`, this annotation here is redundant.
src/main/java/cc/backend/notice/service/NoticeServiceImpl.java (1)
294-320: N+1 query problem in recommendation loop.
`findAllDistinctMembers()` returns every member who has ever liked anything. For each member, `shouldRecommendToMember` issues at least two additional queries (`findByLikerId` + `findHashtagsByMemberId` per liked performer). With a growing user base this becomes O(members × likes × performers) in DB round-trips. Consider pushing the tag-matching logic into a single repository query (e.g., find members who liked performers whose shows share any of the given hashtags), or at minimum pre-fetch the performer→hashtags mapping once before the loop.
Also applies to: 336-356
```yaml
  kafka:
    image: confluentinc/cp-kafka:latest
    container_name: kafka
    ports:
      - "29092:29092"  # 로컬호스트 접속용
    environment:
      # 필수 KRaft 설정
      CLUSTER_ID: "hjeeg3q1SoCw7IKoRw-rMQ"
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: "broker,controller"        # 브로커와 컨트롤러 역할
      KAFKA_CONTROLLER_QUORUM_VOTERS: "1@kafka:9093"  # 컨트롤러 지정

      # 리스너 설정 (CONTROLLER 추가 필수)
      KAFKA_LISTENERS: 'PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093,PLAINTEXT_HOST://0.0.0.0:29092'
      KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT'
      KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'

      # Mac(ARM) 호환 설정 및 성능 최적화
      _JAVA_OPTIONS: "-XX:UseSVE=0"
      KAFKA_HEAP_OPTS: "-Xms256M -Xmx256M"  # JVM 힙 메모리
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
    volumes:
      - kafka_data:/var/lib/kafka/data  # <- Docker 내부 볼륨만 사용
    restart: unless-stopped
```
Kafka service lacks a healthcheck, making depends_on unreliable.
The app containers (app-blue, app-green) depend on kafka, but without a healthcheck defined on the kafka service, Docker Compose only waits for the container to start, not for Kafka to be ready. This can cause transient connection failures on application startup.
Consider adding a healthcheck and using depends_on with condition: service_healthy:
🛠️ Suggested healthcheck for kafka
```diff
   volumes:
     - kafka_data:/var/lib/kafka/data
   restart: unless-stopped
+  healthcheck:
+    test: ["CMD-SHELL", "kafka-broker-api-versions --bootstrap-server localhost:9092 || exit 1"]
+    interval: 15s
+    timeout: 10s
+    retries: 5
+    start_period: 30s
```
Then update the dependents:
```diff
   depends_on:
-    - redis
-    - kafka
+    redis:
+      condition: service_started
+    kafka:
+      condition: service_healthy
```
🤖 Prompt for AI Agents
In `@docker-compose.yml` around lines 55 - 82, Add a Docker healthcheck to the
kafka service so Compose can wait for Kafka to be ready (not just started) and
update the dependents to wait on service health; specifically, under the kafka
service (the one with KAFKA_LISTENERS/KAFKA_ADVERTISED_LISTENERS env vars) add a
healthcheck that probes the broker (e.g., attempts a TCP connect to the
PLAINTEXT listener or runs a lightweight Kafka client check) with sensible
interval, timeout, retries and start_period settings, and then modify the
depends_on entries for app-blue and app-green to use condition: service_healthy
so they only start when kafka is healthy.
| @Query("SELECT s.hashtag FROM AmateurShow s WHERE s.member.id = :memberId") | ||
| List<String> findHashtagsByMemberId(@Param("memberId") Long memberId); |
🧩 Analysis chain
🏁 Scripts executed (against SeeATheater/CC_Backend):
```bash
#!/bin/bash
# Find and examine the AmateurShow entity class
fd -e java 'AmateurShow\.java' --exclude '*Repository*' --exclude '*test*'

# Search for hashtag field in AmateurShow entity
rg -n 'private.*hashtag|@Column.*hashtag|@NotNull.*hashtag' --type=java -B 2 -A 2

# Check how findHashtagsByMemberId is used
rg -n 'findHashtagsByMemberId' --type=java -B 3 -A 5

# Get the full AmateurShow entity class to check field annotations
head -60 src/main/java/cc/backend/amateurShow/entity/AmateurShow.java | tail -20

# Check if there are other usages of findHashtagsByMemberId that might also be affected
rg -n 'findHashtagsByMemberId' --type=java -C 10
```
Add null check to prevent NullPointerException in downstream recommendation logic.
The hashtag field in the AmateurShow entity is nullable, so this query can return null elements in the list. At line 346 in NoticeServiceImpl, the code calls .split("#") directly on each element without null checking, which will throw NullPointerException if any AmateurShow record has a null hashtag.
Suggested fix
```diff
-@Query("SELECT s.hashtag FROM AmateurShow s WHERE s.member.id = :memberId")
+@Query("SELECT s.hashtag FROM AmateurShow s WHERE s.member.id = :memberId AND s.hashtag IS NOT NULL")
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```java
@Query("SELECT s.hashtag FROM AmateurShow s WHERE s.member.id = :memberId AND s.hashtag IS NOT NULL")
List<String> findHashtagsByMemberId(@Param("memberId") Long memberId);
```
🤖 Prompt for AI Agents
In `@src/main/java/cc/backend/amateurShow/repository/AmateurShowRepository.java`
around lines 88 - 89, The query method findHashtagsByMemberId can return null
elements because AmateurShow.hashtag is nullable; update the query to exclude
nulls (e.g., add "AND s.hashtag IS NOT NULL") or modify the service path by
filtering nulls before splitting in NoticeServiceImpl (where it currently calls
.split("#") on each element) so no null value is passed to split; locate
findHashtagsByMemberId and the splitting logic in NoticeServiceImpl and apply
one of these fixes to prevent NullPointerException.
```java
if (requestDTO.getKeyName() == null || requestDTO.getKeyName().isEmpty()) {
    return null; // 빈 DTO 무시
}
```
Returning null from a public @Transactional method is fragile.
Any caller of saveImage other than saveImages will receive null without warning, likely causing an NPE downstream. Consider throwing an IllegalArgumentException (or your GeneralException) instead, and let saveImages be the only place that tolerates missing key names via its own pre-check or catch block.
Also, isEmpty() misses whitespace-only strings. Line 198 (updateShowImage) already uses isBlank() for the same kind of guard — prefer consistency.
Suggested approach
```diff
-        if (requestDTO.getKeyName() == null || requestDTO.getKeyName().isEmpty()) {
-            return null; // 빈 DTO 무시
-        }
+        if (requestDTO.getKeyName() == null || requestDTO.getKeyName().isBlank()) {
+            throw new GeneralException(ErrorStatus.IMAGE_NOT_FOUND); // or a more specific status
+        }
```
Then in saveImages, the existing catch (GeneralException ex) will handle it naturally.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```java
if (requestDTO.getKeyName() == null || requestDTO.getKeyName().isBlank()) {
    throw new GeneralException(ErrorStatus.IMAGE_NOT_FOUND); // or a more specific status
}
```
🤖 Prompt for AI Agents
In `@src/main/java/cc/backend/image/service/ImageService.java` around lines 41 -
43, The saveImage method currently returns null when requestDTO.getKeyName() is
missing, which is fragile; instead validate using isBlank() (consistent with
updateShowImage) and throw a clear exception (IllegalArgumentException or your
GeneralException) when keyName is blank; keep saveImages as the sole caller that
tolerates missing key names by relying on its existing catch (GeneralException
ex) handling or by explicitly pre-checking, and update references to
requestDTO.getKeyName() in saveImage to use isBlank() and to throw rather than
return null.
```java
} catch (GeneralException ex) {
    // 실패한 이미지는 로깅, 알림, DLQ 등 처리 가능
    log.warn("이미지 저장 실패: keyName={}, memberId={}, reason={}",
            dto.getKeyName(), memberId, ex.getMessage());
}
```
Catching only GeneralException leaves the batch vulnerable to unexpected failures.
If saveImage (or any downstream call like imageRepository.save) throws a non-GeneralException runtime exception (e.g., DataAccessException, AmazonServiceException), it will propagate uncaught, aborting the entire batch and rolling back all previously saved images in this transaction.
Widen the catch to Exception (or at least RuntimeException) to match the stated intent of "log and continue."
Suggested fix
```diff
-        } catch (GeneralException ex) {
-            // 실패한 이미지는 로깅, 알림, DLQ 등 처리 가능
+        } catch (Exception ex) {
             log.warn("이미지 저장 실패: keyName={}, memberId={}, reason={}",
                     dto.getKeyName(), memberId, ex.getMessage());
         }
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```java
} catch (Exception ex) {
    log.warn("이미지 저장 실패: keyName={}, memberId={}, reason={}",
            dto.getKeyName(), memberId, ex.getMessage());
}
```
🤖 Prompt for AI Agents
In `@src/main/java/cc/backend/image/service/ImageService.java` around lines 86 -
90, The current catch for GeneralException in the batch loop (around saveImage
and downstream calls like imageRepository.save) is too narrow and will let
unexpected runtime exceptions abort the whole batch; change the handler to catch
Exception (or at least RuntimeException) instead of GeneralException so failures
are logged and the loop continues, keeping the existing log.warn call (including
dto.getKeyName(), memberId and ex.getMessage()) and any DLQ/notification
handling; ensure you update the catch block that currently references
GeneralException to reference Exception (or RuntimeException) and preserve
existing logging behavior.
```java
package cc.backend.kafka.event.common;

import cc.backend.kafka.event.common.enums.DomainEventType;

import java.io.Serializable;

public interface DomainEvent extends Serializable {
    DomainEventType getEventType();
}
```
🧩 Analysis chain
🏁 Scripts executed (against SeeATheater/CC_Backend):
```bash
# Find Kafka configuration files and serializer/deserializer setup
find . -type f \( -name "*.java" -o -name "*.yml" -o -name "*.yaml" -o -name "*.properties" \) | head -30

# Search for Kafka configuration and serializer/deserializer patterns
rg -i "serializer|deserializer|JsonSerializer|JsonDeserializer|Serdes" --type java --type yaml -n | head -50

# Check for KafkaConfig or similar configuration classes
fd -e java -type f | xargs rg -l "KafkaConfig|@Configuration.*Kafka" 2>/dev/null | head -10

# Read KafkaConfig to understand the full setup
cat -n src/main/java/cc/backend/kafka/KafkaConfig.java

# Find all domain event implementations
find . -path "*/event/*" -name "*.java" -type f | xargs rg -l "implements DomainEvent|extends.*Event" | head -20

# Check if there are any domain events with metadata
rg -A 5 "class.*Event.*implements DomainEvent" --type java | head -40

# Check one of the domain events to see its structure
cat -n src/main/java/cc/backend/kafka/event/commentEvent/CommentEvent.java

# Check another event to see if there's any metadata pattern
cat -n src/main/java/cc/backend/kafka/event/reservationCompletedEvent/ReservationCompletedEvent.java

# Look for any usage of java.io.Serializable or serialVersionUID in events
rg -n "Serializable|serialVersionUID" src/main/java/cc/backend/kafka/event/ --type java
```
Remove the Serializable interface — it's not needed for JSON serialization.
Your Kafka configuration uses JsonSerializer/JsonDeserializer (confirmed in KafkaConfig.java and application.yml), which doesn't require Serializable. Since you're not using Java serialization, extending Serializable adds unnecessary surface area without benefit. Records are serializable by default for JSON operations.
Optionally, consider adding event metadata like Instant occurredAt() as a default method for production observability and debugging. This is non-blocking and can be added later.
🤖 Prompt for AI Agents
In `@src/main/java/cc/backend/kafka/event/common/DomainEvent.java` around lines 1
- 10, Remove the unnecessary Java serialization coupling from the DomainEvent
interface: delete the "extends Serializable" clause from the DomainEvent
interface declaration so it no longer implements java.io.Serializable; keep the
existing getEventType() method signature as-is (DomainEvent and DomainEventType)
and, if desired later, add an optional default metadata method such as Instant
occurredAt() to the DomainEvent interface for observability, but do not
reintroduce Serializable.
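A sketch of the trimmed interface with the optional metadata hook mentioned above (the `occurredAt()` default is purely illustrative; a concrete event would normally carry its own stored timestamp):
```java
package cc.backend.kafka.event.common;

import cc.backend.kafka.event.common.enums.DomainEventType;

import java.time.Instant;

// Sketch: no Serializable; JSON (de)serialization does not need it.
public interface DomainEvent {

    DomainEventType getEventType();

    // Optional observability hook; real events would override this with an event-time field.
    default Instant occurredAt() {
        return Instant.now();
    }
}
```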
```java
// 모든 회원을 대상으로, 추천 검사를 위해 distinct Member만 가져오기
@Query("SELECT DISTINCT ml.liker FROM MemberLike ml")
List<Member> findAllDistinctMembers();
```
Unbounded query loads all distinct likers into memory.
findAllDistinctMembers() returns every distinct member who has ever liked someone. As the user base grows, this becomes a significant memory and performance concern — especially if called periodically for recommendation checks (as the PR objectives suggest).
Consider using pagination (Slice<Member> / Page<Member> with Pageable) or a cursor/batch approach to process members incrementally.
🤖 Prompt for AI Agents
In `@src/main/java/cc/backend/memberLike/repository/MemberLikeRepository.java`
around lines 45 - 47, The current unbounded method findAllDistinctMembers()
loads every distinct liker into memory; change its signature to a paginated or
cursor-based variant (e.g., Page<Member> or Slice<Member>
findAllDistinctMembers(Pageable pageable)) and keep the same JPQL ("SELECT
DISTINCT ml.liker FROM MemberLike ml") so the DB returns a page/slice instead of
a full List; update callers (recommendation batch/process code) to iterate pages
(or use a streaming/batch cursor approach) and tune page size to limit memory
usage per batch.
src/main/java/cc/backend/notice/service/eventListener/RejectCommitEventListener.java
```yaml
      properties:
        spring.json.trusted.packages: "*"
```
Restrict spring.json.trusted.packages to your application's event packages.
Setting trusted packages to "*" allows the JsonDeserializer to instantiate any Java class specified in the message type header. If an attacker can produce a message to any topic this consumer subscribes to, this opens the door to deserialization-based attacks.
Restrict it to your domain event packages:
🔒 Suggested fix
```diff
       properties:
-        spring.json.trusted.packages: "*"
+        spring.json.trusted.packages: "cc.backend.kafka.event"
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
      properties:
        spring.json.trusted.packages: "cc.backend.kafka.event"
```
🤖 Prompt for AI Agents
In `@src/main/resources/application.yml` around lines 49 - 50, The configuration
currently sets spring.json.trusted.packages to "*" which allows JsonDeserializer
to instantiate any class and opens deserialization attacks; change the property
value to a comma-separated list of only your application's event/domain packages
(e.g., com.example.events, com.example.domain) so JsonDeserializer only trusts
those packages, and verify the consumer configuration/class that uses
JsonDeserializer is loading the updated spring.json.trusted.packages value.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@src/main/java/cc/backend/config/AsyncConfig.java`:
- Around line 15-23: The ThreadPoolTaskExecutor is configured with
executor.setCorePoolSize(0) which causes most tasks to be enqueued and run
effectively single-threaded until the queue fills; change the core pool size to
a small positive number (e.g., 2) so background work runs with modest
concurrency, and ensure executor.setMaxPoolSize remains >= that value and
queueCapacity is tuned accordingly; update the configuration where
ThreadPoolTaskExecutor is instantiated (look for ThreadPoolTaskExecutor,
executor.setCorePoolSize, executor.setMaxPoolSize, executor.setQueueCapacity) to
set corePoolSize to 2 (or another small number) and validate
maxPoolSize/queueCapacity balance.
```java
ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();

executor.setCorePoolSize(0);    // 최소 스레드 없음, 필요할 때만 생성
executor.setMaxPoolSize(5);     // 동시에 실행할 최대 스레드 수
executor.setQueueCapacity(50);  // 큐에 대기할 수 있는 작업 수

executor.setThreadNamePrefix("Async-");
executor.initialize();
return executor;
```
corePoolSize(0) effectively serializes async work behind a single thread.
With corePoolSize=0 and queueCapacity=50, ThreadPoolExecutor semantics are: tasks are enqueued first, and only one worker is lazily created (as a safety net when workerCount == 0 after enqueue). Additional threads (up to maxPoolSize=5) are spawned only when the queue is full (50 pending tasks). In practice, this means async Kafka publishes will run mostly single-threaded, adding latency under moderate load.
A small corePoolSize (e.g., 2) would allow reasonable concurrency while remaining conservative for small AWS instances.
Suggested fix
```diff
-        executor.setCorePoolSize(0); // 최소 스레드 없음, 필요할 때만 생성
+        executor.setCorePoolSize(2); // 최소 스레드 수 (경량 인스턴스 고려)
```
🤖 Prompt for AI Agents
In `@src/main/java/cc/backend/config/AsyncConfig.java` around lines 15 - 23, The
ThreadPoolTaskExecutor is configured with executor.setCorePoolSize(0) which
causes most tasks to be enqueued and run effectively single-threaded until the
queue fills; change the core pool size to a small positive number (e.g., 2) so
background work runs with modest concurrency, and ensure executor.setMaxPoolSize
remains >= that value and queueCapacity is tuned accordingly; update the
configuration where ThreadPoolTaskExecutor is instantiated (look for
ThreadPoolTaskExecutor, executor.setCorePoolSize, executor.setMaxPoolSize,
executor.setQueueCapacity) to set corePoolSize to 2 (or another small number)
and validate maxPoolSize/queueCapacity balance.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@src/main/java/cc/backend/notice/service/NoticeServiceImpl.java`:
- Around line 265-268: The existsByContentIdAndType check in NoticeServiceImpl
(used when handling ApprovalShowEvent and NoticeType.RECOMMEND) has a TOCTOU
race: two concurrent consumers can both pass the check and then call
noticeRepository.save, creating duplicates. Add a unique DB constraint on
(contentId, type) for the Notice entity and remove reliance on the pre-check;
instead attempt to save and catch the data-integrity/constraint-violation
exception (or translate it into your domain exception) to ignore duplicate
inserts, or alternatively acquire a distributed lock keyed by contentId before
checking/saving. Ensure the save call in NoticeServiceImpl is protected by one
of these approaches and handle the constraint exception path to return
null/ignore duplicates.
- Around line 290-317: The loop over members using
memberLikeRepository.findAllDistinctMembers combined with per-member calls in
shouldRecommendToMember (which calls findByLikerId and findHashtagsByMemberId)
causes an N+1; refactor by either adding a repository method that returns only
members matching the show/hashtag criteria in one query, or prefetch the needed
mappings before the loop: load all liker→performer relationships
(findByLikerIds) and performer→hashtags (findHashtagsByPerformerIds) into
in-memory maps, then replace shouldRecommendToMember to consult those maps and
build MemberNotice batches as before (using memberNoticeRepository.saveAll and
BATCH_SIZE) to eliminate per-member DB calls.
- Around line 100-102: The Member entity currently uses reference equality so
comparisons in NoticeServiceImpl (e.g., the boardWriter vs commentWriter checks
and the other comparison around lines 193–195) fail; fix this by adding Lombok's
`@EqualsAndHashCode`(of = "id") to the Member class so equals() and hashCode() are
based on the id field, ensuring DB-fetched Member instances with the same id
compare equal and the self-notification guards work correctly.
🧹 Nitpick comments (2)
src/main/java/cc/backend/notice/service/NoticeServiceImpl.java (2)
399-427: `notifyLikers` doesn't batch, unlike `notifyRecommendation`.
`notifyRecommendation` carefully batches `MemberNotice` saves in groups of `BATCH_SIZE`, but `notifyLikers` calls `saveAll` on the entire list at once (line 427). If a performer has many followers, this creates a single large flush. Consider applying the same batching pattern for consistency.
82-137: Duplicate preview-truncation logic — extract a helper.
The "truncate to 15 chars + ..." pattern appears at lines 105-107, 197-199, and 201-203. A small private helper like `truncate(String text, int maxLen)` would reduce duplication.
Proposed helper
```java
private String truncatePreview(String content, int maxLen) {
    return content.length() > maxLen ? content.substring(0, maxLen) + "..." : content;
}
```
Also applies to: 182-227
```java
if (boardWriter.equals(commentWriter)) {
    return null;
}
```
🧩 Analysis chain
🏁 Scripts executed (against SeeATheater/CC_Backend):
```bash
# Find Member entity and check for equals/hashCode
fd "Member.java" --type f | xargs grep -l "class Member" | head -5 | while read f; do
  echo "=== $f ==="
  grep -n "equals\|hashCode" "$f"
done

# Also check the NoticeServiceImpl file to see the actual code at lines 100-102 and 193-195
cat -n src/main/java/cc/backend/notice/service/NoticeServiceImpl.java | sed -n '95,105p'

# Check lines around 193-195
cat -n src/main/java/cc/backend/notice/service/NoticeServiceImpl.java | sed -n '188,200p'

# Read the Member entity file
cat -n src/main/java/cc/backend/member/entity/Member.java

# Also count lines to understand file size
wc -l src/main/java/cc/backend/member/entity/Member.java

# Find and check BaseEntity
find . -name "BaseEntity.java" -type f | xargs cat -n
```
Add @EqualsAndHashCode to Member entity — equals comparison is broken.
Member does not override equals()/hashCode(), so it uses reference equality. Lines 100 and 193 compare Member instances fetched from the database, but different object instances (even with the same ID) will never be equal. The self-notification guards silently fail, allowing a user to receive notifications for their own actions.
Add @EqualsAndHashCode(of = "id") to the Member class to fix this.
Also applies to: lines 193–195
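A sketch of the entity-level change (only the equality-relevant parts are shown; the other fields and annotations on `Member` are unchanged and omitted here):
```java
// Sketch: id-based equality so two Member instances loaded separately
// but sharing the same database id compare equal.
@Entity
@Getter
@EqualsAndHashCode(of = "id")
public class Member {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    // ... existing fields and associations unchanged
}
```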
🤖 Prompt for AI Agents
In `@src/main/java/cc/backend/notice/service/NoticeServiceImpl.java` around lines
100 - 102, The Member entity currently uses reference equality so comparisons in
NoticeServiceImpl (e.g., the boardWriter vs commentWriter checks and the other
comparison around lines 193–195) fail; fix this by adding Lombok's
`@EqualsAndHashCode`(of = "id") to the Member class so equals() and hashCode() are
based on the id field, ensuring DB-fetched Member instances with the same id
compare equal and the self-notification guards work correctly.
```java
if (noticeRepository.existsByContentIdAndType(
        event.amateurShowId(), NoticeType.RECOMMEND)) {
    return null;
}
```
Race condition on duplicate-recommendation check.
existsByContentIdAndType (line 265) followed by noticeRepository.save (line 281) is a classic TOCTOU issue. If two Kafka consumer threads process the same ApprovalShowEvent concurrently, both may pass the existence check before either saves, resulting in duplicate RECOMMEND notices. Consider adding a unique constraint on (contentId, type) at the DB level and catching the constraint-violation exception, or using a distributed lock.
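A sketch of the constraint-plus-catch approach (the column names are assumptions; `saveAndFlush` is used so a violation surfaces inside the try block rather than at commit time):
```java
// Sketch: let the database enforce uniqueness and treat a violation as "already notified".
@Entity
@Table(uniqueConstraints = @UniqueConstraint(columnNames = {"content_id", "type"}))
public class Notice { /* ... existing fields ... */ }

// In the consumer/service path:
try {
    noticeRepository.saveAndFlush(notice);
} catch (DataIntegrityViolationException e) {
    // A concurrent consumer already created this RECOMMEND notice; safe to ignore.
    return null;
}
```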
🤖 Prompt for AI Agents
In `@src/main/java/cc/backend/notice/service/NoticeServiceImpl.java` around lines
265 - 268, The existsByContentIdAndType check in NoticeServiceImpl (used when
handling ApprovalShowEvent and NoticeType.RECOMMEND) has a TOCTOU race: two
concurrent consumers can both pass the check and then call
noticeRepository.save, creating duplicates. Add a unique DB constraint on
(contentId, type) for the Notice entity and remove reliance on the pre-check;
instead attempt to save and catch the data-integrity/constraint-violation
exception (or translate it into your domain exception) to ignore duplicate
inserts, or alternatively acquire a distributed lock keyed by contentId before
checking/saving. Ensure the save call in NoticeServiceImpl is protected by one
of these approaches and handle the constraint exception path to return
null/ignore duplicates.
```java
// 4. 추천 대상 회원 조회 (좋아요한 회원 기준)
List<Member> allMembers = memberLikeRepository.findAllDistinctMembers();
List<MemberNotice> batch = new ArrayList<>();

for (Member member : allMembers) {

    // 추천 대상이 아닌 멤버는 패스
    if (!shouldRecommendToMember(member, newTagsSet)) continue;

    /* 5. 개인화 메시지 생성 */
    String personalMsg =
            "새로운 공연 '" + show.getName() + "' 어떠세요? "
                    + show.getHashtag() + " "
                    + member.getName() + "님 취향에 딱!";

    batch.add(MemberNotice.builder()
            .member(member)
            .notice(notice)
            .personalMsg(personalMsg)
            .isRead(false)
            .build()
    );

    if (batch.size() >= BATCH_SIZE) {
        memberNoticeRepository.saveAll(batch);
        batch.clear();
    }
}
```
Severe N+1 query problem in recommendation loop.
findAllDistinctMembers() can return an unbounded number of members. For each member, shouldRecommendToMember issues a query for liked performers (findByLikerId), and for each liked performer, another query for hashtags (findHashtagsByMemberId). This is O(M × P) database queries where M = total distinct members and P = average liked performers per member.
Consider pushing the tag-matching logic into a single query (e.g., a repository method that returns members whose liked performers have shows with matching hashtags), or at minimum pre-fetch the performer→hashtags mapping once before the loop:
Sketch: pre-fetch to eliminate N+1
- List<Member> allMembers = memberLikeRepository.findAllDistinctMembers();
- List<MemberNotice> batch = new ArrayList<>();
-
- for (Member member : allMembers) {
- if (!shouldRecommendToMember(member, newTagsSet)) continue;
+ // Pre-fetch all performer hashtags in one query
+ Map<Long, Set<String>> performerTagsMap = amateurShowRepository.findAllPerformerHashtags()
+ .stream()
+ .collect(Collectors.groupingBy(
+ row -> row.getPerformerId(),
+ Collectors.flatMapping(row -> parseHashtags(row.getHashtag()).stream(), Collectors.toSet())
+ ));
+
+ // Single query: all member-likes with member eagerly fetched
+ List<MemberLike> allLikes = memberLikeRepository.findAllWithMembers();
+ Map<Member, List<MemberLike>> likesByMember = allLikes.stream()
+ .collect(Collectors.groupingBy(MemberLike::getLiker));
+
+ List<MemberNotice> batch = new ArrayList<>();
+ for (Map.Entry<Member, List<MemberLike>> entry : likesByMember.entrySet()) {
+ Member member = entry.getKey();
+ boolean match = entry.getValue().stream()
+ .map(like -> performerTagsMap.getOrDefault(like.getPerformer().getId(), Set.of()))
+ .anyMatch(tags -> !Collections.disjoint(tags, newTagsSet));
+ if (!match) continue;This would reduce hundreds/thousands of queries down to 2-3.
🤖 Prompt for AI Agents
In `@src/main/java/cc/backend/notice/service/NoticeServiceImpl.java` around lines
290 - 317, The loop over members using
memberLikeRepository.findAllDistinctMembers combined with per-member calls in
shouldRecommendToMember (which calls findByLikerId and findHashtagsByMemberId)
causes an N+1; refactor by either adding a repository method that returns only
members matching the show/hashtag criteria in one query, or prefetch the needed
mappings before the loop: load all liker→performer relationships
(findByLikerIds) and performer→hashtags (findHashtagsByPerformerIds) into
in-memory maps, then replace shouldRecommendToMember to consult those maps and
build MemberNotice batches as before (using memberNoticeRepository.saveAll and
BATCH_SIZE) to eliminate per-member DB calls.
#이슈번호, #이슈번호
Summary by CodeRabbit
New Features
Bug Fixes
Chores