Compare commits

..

172 commits

Author SHA1 Message Date
77f881a7c1 Merge pull request 'release: 2026-03-27.3 (7 commits)' (#121) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 9m13s
2026-03-27 08:14:56 +09:00
ce55cdd115 docs: tidy release notes (2026-03-27) 2026-03-27 08:12:44 +09:00
83cd865363 Merge pull request 'feat(batch): add flag to also persist abnormal tracks — for RL data collection' (#120) from feature/include-abnormal-tracks-flag-v2 into develop 2026-03-27 08:07:32 +09:00
44cd532d52 docs: update release notes 2026-03-27 08:06:53 +09:00
8f784de358 feat(batch): add flag to also persist abnormal tracks — for RL data collection
To improve the reinforcement-learning classifier for abnormal navigation patterns such as GPS spoofing, add a config flag that also stores abnormal tracks in the normal tables (5min/hourly/daily) and the caches (L1/L2).

- vessel.batch.track.include-abnormal-in-tracks flag (default false)
- 5min: include in filteredTracks even when isAbnormal (flag true)
- Hourly/Daily: include originalTrack even when correctedTrack is null (flag true)
- Abnormal detection + t_abnormal_tracks recording is always kept, regardless of the flag
- Set to true in prod (RL data collection)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 08:06:29 +09:00
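The flag in the commit above would live in Spring configuration; a hypothetical `application-prod.yml` fragment (the property key is taken from the commit message, the surrounding structure is assumed):

```yaml
vessel:
  batch:
    track:
      # default is false; prod enables it to collect RL training data
      include-abnormal-in-tracks: true
```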
6a597958e1 Merge pull request 'fix(metrics): fix missing client_id collection on the REST API path' (#119) from fix/client-id-rest-api into develop 2026-03-27 08:06:20 +09:00
74aace919b docs: update release notes 2026-03-27 08:05:47 +09:00
df957be2fe fix(metrics): fix missing client_id collection on the REST API path
- Extract the JWT cookie parsing into the static WebSocketStompConfig.extractClientIdFromRequest() method
- Pass clientId through GisControllerV2 → GisServiceV2 → enqueueRestMetric
- WebSocket path already correct (htlee@gcsc.co.kr collection verified)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 08:01:43 +09:00
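The commit above routes a clientId extracted from a JWT session cookie into the REST metrics path. As an illustration of the decoding step only: a minimal sketch that pulls an `email` claim out of a JWT payload segment without signature verification. The class and method names here are hypothetical, not the project's `WebSocketStompConfig` code.

```java
import java.util.Base64;

public class JwtClientId {
    /** Extracts the "email" claim from a JWT payload segment (no signature check). */
    public static String extractEmail(String jwt) {
        String[] parts = jwt.split("\\.");
        if (parts.length < 2) return null;
        // JWT payloads are base64url-encoded JSON; the URL decoder accepts unpadded input
        String payload = new String(Base64.getUrlDecoder().decode(parts[1]));
        // Naive claim lookup, assuming a flat payload without escaped quotes
        String key = "\"email\":\"";
        int i = payload.indexOf(key);
        if (i < 0) return null;
        int start = i + key.length();
        int end = payload.indexOf('"', start);
        return end < 0 ? null : payload.substring(start, end);
    }

    public static void main(String[] args) {
        String payload = Base64.getUrlEncoder().withoutPadding()
                .encodeToString("{\"email\":\"htlee@gcsc.co.kr\"}".getBytes());
        System.out.println(extractEmail("header." + payload + ".sig")); // htlee@gcsc.co.kr
    }
}
```

A production version would verify the signature and use a real JSON parser; this only shows where the claim sits.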
573f4ff70d Merge pull request 'release: 2026-03-27.2 (4 commits)' (#118) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 9m14s
2026-03-27 07:17:51 +09:00
da6db06dcc docs: tidy release notes (2026-03-27) 2026-03-27 07:16:56 +09:00
4d4ab5a6bc Merge pull request 'fix(dashboard): fix Top client IP/ID toggle and metric display errors' (#117) from fix/dashboard-top-client-display into develop 2026-03-27 07:13:59 +09:00
13c263e649 docs: update release notes 2026-03-27 07:12:50 +09:00
d19a33b233 fix(dashboard): fix Top client IP/ID toggle and metric display errors
- Strengthen the visual distinction of the active toggle state (bg-secondary + font-medium)
- Fix "-" shown in IP mode — correct the backend client field-name mapping
- Show a notice instead of hiding the section when there is no ID data
- Add a client_id column to the query history (ApiMetrics)
- Include the client_id column in the history SQL

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 07:04:36 +09:00
99a7f607f7 Merge pull request 'release: 2026-03-27 (5 commits)' (#116) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 9m4s
2026-03-27 06:40:38 +09:00
d6ab622480 docs: tidy release notes (2026-03-27) 2026-03-27 06:38:22 +09:00
d31eeef193 Merge pull request 'feat: integrate WebSocket replay cache + collect user ID in query metrics' (#115) from feature/ws-cache-integration-and-client-metrics into develop 2026-03-27 06:36:30 +09:00
296b89327b docs: update release notes 2026-03-27 06:34:25 +09:00
3333b2cec1 feat(metrics): collect user ID in query metrics + dashboard IP/ID toggle
Extract the authenticated user's email from the GC_SESSION JWT cookie and record it in query metrics.
The dashboard Top clients section can be switched between IP-based and user-ID-based grouping.

Backend:
- Extract GC_SESSION cookie JWT payload → email during the WebSocket handshake
- Add clientId field to QueryMetric; auto-create client_id column in t_query_metrics
- Add groupBy=ip|id parameter to the timeseries API

Frontend:
- Add an IP/ID segment toggle to the Dashboard Top clients section
- Re-query immediately when the toggle switches

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 06:27:43 +09:00
8a97321a90 feat(websocket): integrate L1/L2 caches into replay queries — remove DB dependency
Fixes WebSocket replay queries taking the 100% DB path even for lookups within cache range.

- Apply direct L1 (FiveMinTrackCache) / L2 (HourlyTrackCache) cache lookups in the HOURLY/FIVE_MINUTE strategies
- Auto-route between L1/L2 on currentHourStart (at or after the current hour boundary → L1, before it → L2)
- Perform viewport filtering directly on cached data (lightweight WKT parsing, no JTS needed)
- Fix vessel info SQL column name (ship_nm → name)
- Add cacheHourlyRanges/cacheFiveMinRanges to QueryBenchmark; reflect the 3-level cache in determinePath
- Remove the HOURLY/5MIN DB queries from collectViewportVesselIds (served from cache)

Same-day 3-hour query: DB 100% → CACHE 100%; 14-day query: CACHE 100% (within L3 range)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 06:27:22 +09:00
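The L1/L2 routing rule described above (at or after the current hour boundary → L1, before it → L2) can be sketched as follows; the names `CacheRouter` and `Tier` are illustrative, not from the project:

```java
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;

public class CacheRouter {
    public enum Tier { L1_FIVE_MIN, L2_HOURLY }

    /** Route a track timestamp to L1 (current hour) or L2 (earlier hours). */
    public static Tier route(LocalDateTime ts, LocalDateTime now) {
        LocalDateTime currentHourStart = now.truncatedTo(ChronoUnit.HOURS);
        return ts.isBefore(currentHourStart) ? Tier.L2_HOURLY : Tier.L1_FIVE_MIN;
    }
}
```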
22710068d8 Merge pull request 'release: 2026-03-19 (9 commits)' (#114) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 9m8s
2026-03-19 07:48:19 +09:00
aae05be18f docs: tidy release notes (2026-03-19)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-19 07:48:01 +09:00
2dab7bf0dc Merge pull request 'chore: change AIS API access account' (#113) from feature/update-ais-api-credentials into develop 2026-03-19 07:47:38 +09:00
99b9391967 docs: update release notes
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-19 07:47:19 +09:00
971f7bae11 chore: change AIS API access account
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-19 07:46:56 +09:00
3b0c09575e fix(ci): extend deployment health-check wait 90s→180s — handle startup timeout failures
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-19 05:17:23 +09:00
444430d272 Merge pull request 'release: 2026-03-18 (4 commits)' (#112) from develop into main
Some checks failed
Build & Deploy / build-and-deploy (push) Failing after 28m38s
2026-03-18 17:09:13 +09:00
3a89354e88 docs: tidy release notes (2026-03-18)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-18 17:08:36 +09:00
31ca5b10c3 Merge pull request 'fix: move AIS Import Job schedule :15s→:45s — prevent empty responses' (#111) from feature/fix-ais-import-timing into develop 2026-03-18 17:07:08 +09:00
0f2dae72ad docs: update release notes
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-18 17:05:56 +09:00
5d537a9c8a fix: move AIS Import Job schedule :15s→:45s — prevent empty responses
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-18 17:05:24 +09:00
796bd09f29 Merge pull request 'release: 2026-03-17.3 (2 commits)' (#110) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 9m44s
2026-03-17 10:02:39 +09:00
5a8159b2cf docs: tidy release notes (2026-03-17) 2026-03-17 10:02:15 +09:00
0f14991345 feat: recent-positions-detail API + expand AIS WebClient buffer (#109) 2026-03-17 10:01:53 +09:00
75d3919410 Merge pull request 'release: 2026-03-17.2 (2 commits)' (#108) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 9m32s
2026-03-17 09:29:58 +09:00
6751c84a0b docs: tidy release notes (2026-03-17) 2026-03-17 09:29:36 +09:00
7d320b24a8 fix: roll back AIS API account — new account not responding (#107) 2026-03-17 09:29:18 +09:00
e571a571df Merge pull request 'release: 2026-03-17 (2 commits)' (#106) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 9m38s
2026-03-17 08:41:57 +09:00
d023626eb0 docs: tidy release notes (2026-03-17) 2026-03-17 08:41:37 +09:00
27515e6452 chore: change prod AIS API access account (#105) 2026-03-17 08:41:16 +09:00
fa03c7d80d Merge pull request 'release: 2026-03-13 (4 commits)' (#104) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 8m46s
2026-03-13 10:20:40 +09:00
345b9c8051 docs: tidy release notes (2026-03-13) 2026-03-13 10:19:41 +09:00
f405149340 Merge pull request 'feat: optimize multi-zone/STS APIs + ChnPrmShip-only filter' (#103) from feature/multi-zone-optimization into develop 2026-03-13 10:18:06 +09:00
60131481f3 docs: update release notes 2026-03-13 10:13:05 +09:00
c58aaca2ad feat: optimize multi-zone/STS APIs + ChnPrmShip-only filter
- Unify AreaSearch/VesselContact concurrency and memory management (ActiveQueryManager + MemoryBudget)
- Extend the sequential-transit SQL to a dynamic number of zones (2–10)
- Performance: preallocate ArrayLists, reuse Coordinate objects, equirectangular approximation
- Add a chnPrmShipOnly parameter to 3 APIs (filters ~1,400 MMSIs)
- Improve the dashboard DataPipeline charts
2026-03-13 10:12:22 +09:00
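One of the optimizations listed above is the equirectangular approximation: a cheap small-distance alternative to haversine that treats the area as flat after scaling longitude by cos(mean latitude). A standalone sketch, not the project's code:

```java
public class Equirectangular {
    static final double EARTH_RADIUS_M = 6_371_000.0;

    /** Flat-earth distance with longitude scaled by cos(mean latitude); accurate for short distances. */
    public static double distanceMeters(double lat1, double lon1, double lat2, double lon2) {
        double meanLat = Math.toRadians((lat1 + lat2) / 2.0);
        double x = Math.toRadians(lon2 - lon1) * Math.cos(meanLat);
        double y = Math.toRadians(lat2 - lat1);
        return Math.sqrt(x * x + y * y) * EARTH_RADIUS_M;
    }
}
```

It avoids the trigonometry-heavy haversine formula entirely, which matters when computing millions of pairwise distances.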
9bd2135337 Merge pull request 'release: 2026-03-10.2 (4 commits)' (#102) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 8m30s
2026-03-10 11:20:19 +09:00
29566facb3 docs: tidy release notes (2026-03-10) 2026-03-10 11:19:10 +09:00
3d1f9631eb Merge pull request 'feat: expand query-metric collection + add dashboard performance charts' (#101) from feature/dashboard-metrics-charts into develop 2026-03-10 11:17:33 +09:00
bfaf190b8c docs: update release notes 2026-03-10 11:16:45 +09:00
7852f840e4 feat: expand query-metric collection + add dashboard performance charts
- Collect client IP (REST: X-Forwarded-For chain, WS: session attributes)
- Estimate response size (uniqueVessels*200 + points*40)
- timeseries API (/api/monitoring/query-metrics/timeseries)
- 5 Dashboard query-performance charts (response time, volume, cache path, response size, Top clients)
2026-03-10 11:15:00 +09:00
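The response-size estimate above is a simple linear heuristic; written out as a helper (the constants 200 and 40 are from the commit message, the names are assumed):

```java
public class ResponseSizeEstimator {
    /** Rough payload size: ~200 B per vessel record plus ~40 B per track point. */
    public static long estimateBytes(long uniqueVessels, long points) {
        return uniqueVessels * 200 + points * 40;
    }
}
```

For example, 100 vessels with 50,000 points estimates to about 2 MB, cheap enough to record on every query without serializing the actual payload.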
7539441d95 Merge pull request 'release: 2026-03-10 (4 commits)' (#100) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 8m33s
2026-03-10 09:21:05 +09:00
02cc4a95b4 docs: tidy release notes (2026-03-10) 2026-03-10 08:56:57 +09:00
b578879c6a Merge pull request 'feat: implement API/WS query-metric history lookup' (#99) from feature/query-metrics-history into develop 2026-03-10 08:49:37 +09:00
1a0d52911f docs: update release notes 2026-03-10 08:46:03 +09:00
a0f24d5757 feat: implement API/WS query-metric history lookup
- QueryMetricsBufferService: ConcurrentLinkedQueue + 10-second batch flush
- GisServiceV2: add REST API metric collection
- ChunkedTrackStreamingService: switch saveAsync → buffer.enqueue
- QueryMetricsController: /history (pagination + filters), /summary (incl. P95)
- ApiMetrics.tsx: summary cards + button-group filters + server-side DataTable + 30s polling
- DataTable: extend server-side pagination props (backward compatible)
2026-03-10 08:41:56 +09:00
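The QueryMetricsBufferService bullet above describes an enqueue-then-batch-flush pattern: producers push into a lock-free queue, and a scheduled task drains it periodically and writes one batch. A generic sketch under those assumptions (all names illustrative, not the project's code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

public class MetricsBuffer<T> {
    private final ConcurrentLinkedQueue<T> queue = new ConcurrentLinkedQueue<>();

    public MetricsBuffer(Consumer<List<T>> flushBatch, long periodSeconds) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r);
            t.setDaemon(true); // don't keep the JVM alive just for the flusher
            return t;
        });
        scheduler.scheduleAtFixedRate(() -> {
            List<T> batch = drain();
            if (!batch.isEmpty()) flushBatch.accept(batch); // e.g. one multi-row INSERT
        }, periodSeconds, periodSeconds, TimeUnit.SECONDS);
    }

    public void enqueue(T metric) { queue.offer(metric); } // lock-free on the hot path

    public List<T> drain() {
        List<T> batch = new ArrayList<>();
        for (T m; (m = queue.poll()) != null; ) batch.add(m);
        return batch;
    }
}
```

The point of the pattern is that request threads only pay for a queue offer; the DB write cost is amortized into one batch every flush interval (10 seconds in the commit).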
fb1076ac11 Merge pull request 'release: 2026-03-09.2 (4 commits)' (#98) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 8m24s
2026-03-09 11:27:08 +09:00
b16ceddf10 docs: tidy release notes (2026-03-09)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 11:26:12 +09:00
171b35042b Merge pull request 'fix: queryWithCache single-source response loss bug' (#97) from fix/queryWithCache-clear-bug into develop 2026-03-09 11:22:50 +09:00
2d525ab75a docs: update release notes
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 11:20:49 +09:00
104f65ad06 fix: queryWithCache single-source response loss bug
When mergeTracksByVessel() returns the input list as-is, allTracks.clear() also empties the returned result

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 11:18:36 +09:00
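The bug described above is a classic aliasing error: a "merge" that returns its input unchanged means the result and the source are the same object, so clearing one clears the other. A minimal reproduction (names mirror the commit message, but the code is illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class AliasingBugDemo {
    // Single-source case: returns the input list as-is instead of a copy
    public static List<String> mergeTracksByVessel(List<String> tracks) {
        return tracks;
    }

    public static void main(String[] args) {
        List<String> allTracks = new ArrayList<>(List.of("t1", "t2"));
        List<String> merged = mergeTracksByVessel(allTracks);
        allTracks.clear();                 // bug: also empties `merged` (same object)
        System.out.println(merged.size()); // prints 0, not 2
        // fix: return new ArrayList<>(tracks), or skip the clear() for single-source results
    }
}
```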
9d6e5ca408 Merge pull request 'release: 2026-03-09 (119 commits)' (#96) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 8m12s
2026-03-09 06:51:51 +09:00
0a115e4981 docs: tidy release notes (2026-03-09) 2026-03-09 06:48:51 +09:00
5cf528fa72 Merge pull request 'chore: tidy production log levels + retain daily partitions permanently' (#95) from feature/logging-and-partition-tuning into develop 2026-03-09 06:46:46 +09:00
d5ba32b308 docs: update release notes 2026-03-09 06:46:00 +09:00
9ffaf35aeb chore: tidy production log levels + retain daily partitions permanently
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 06:05:10 +09:00
882c07a7c6 Merge pull request 'release: 2026-03-08 (115 commits)' (#94) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 8m12s
2026-03-08 09:33:03 +09:00
fab931c128 docs: tidy release notes (2026-03-08)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-08 09:32:09 +09:00
66707e93cb Merge pull request 'feat: pre-simplify L3 Daily cache with DP + extend to 14 days' (#93) from feature/cache-dp-simplification into develop 2026-03-08 09:30:20 +09:00
f628d381bb docs: update release notes
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-08 09:29:26 +09:00
0a0109fa7e feat: pre-simplify L3 Daily cache with DP + extend to 14 days
- CacheTrackSimplifier: add simplifyDpOnly() (DP-only simplification) and recalculateSpeeds() (Haversine-based speed recalculation)
- DailyTrackCacheManager: apply DP pre-simplification in loadDay() (tolerance=0.001, ~100m)
- Daily cache retention 7→14 days, maxMemory 6→10GB
- Query/Batch DataSource: session tuning with work_mem 256MB, synchronous_commit off

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-08 09:21:00 +09:00
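The recalculateSpeeds() helper above recomputes speeds from Haversine distances between consecutive simplified points. The underlying primitive, as a standalone sketch rather than the project's implementation:

```java
public class Haversine {
    static final double EARTH_RADIUS_M = 6_371_000.0;

    /** Great-circle distance in meters between two lat/lon points (spherical earth). */
    public static double distanceMeters(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_M * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
    }

    /** Speed in knots between two fixes taken `seconds` apart. */
    public static double speedKnots(double lat1, double lon1, double lat2, double lon2, double seconds) {
        return distanceMeters(lat1, lon1, lat2, lon2) / seconds * 1.943844; // m/s → kn
    }
}
```

Recomputing speed is necessary after DP simplification because dropping intermediate points invalidates the per-point speeds stored with the original track.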
c3a2ac3dea chore: sync team workflow v1.6.1
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-08 09:20:41 +09:00
ff6b8e6073 chore: refresh CLAUDE_BOT_TOKEN 2026-03-06 08:00:11 +09:00
2434b3ddb2 Merge pull request 'release: 2026-03-02.3 (109 commits)' (#92) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 3m42s
2026-03-02 16:52:33 +09:00
cb41337e22 docs: tidy release notes (2026-03-02) 2026-03-02 16:48:06 +09:00
2436751434 Merge pull request 'fix(websocket): make cancelQuery idempotent — cancelling a completed query returns success instead of an error' (#91) from feature/fix-cancel-query-and-quality into develop 2026-03-02 16:46:02 +09:00
bfed21dcb4 docs: update release notes 2026-03-02 16:44:46 +09:00
4fbf130326 fix(websocket): make cancelQuery idempotent — cancelling a completed query returns success instead of an error
- Add logging for parseTimestamp failures (AreaSearchService)
- Simplify the isNightTimeContact night-time determination logic (VesselContactService)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 16:44:06 +09:00
7a21d5b8b0 Merge pull request 'release: 2026-03-02.2 (105 commits)' (#90) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 2m29s
2026-03-02 15:40:24 +09:00
69b8ce8adc docs: tidy release notes (2026-03-02)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 15:39:18 +09:00
95b1ba6913 Merge pull request 'refactor(websocket): full optimization of ChunkedTrackStreamingService' (#89) from feature/websocket-replay-optimization into develop 2026-03-02 15:34:41 +09:00
242c2d13f5 docs: update release notes
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 15:28:45 +09:00
076cb6f8fe refactor(websocket): full optimization of ChunkedTrackStreamingService — bug fixes + metrics DB + dead-code removal
Phase A: bug fixes
- isQueryCancelled: full scan of queryCancelFlags → direct queryId lookup (O(n)→O(1))
  Fixes a critical bug where cancelling one query terminated every query on the server early
- Extract a QueryContext inner class: move 5 singleton instance variables to per-query locals
  Eliminates cross-contamination of state between concurrent queries at the source
- Queue timeout: hardcoded 120s → use the ActiveQueryManager setting

Phase B: persist query metrics to the DB
- QueryMetricsService: async INSERT into signal.t_query_metrics
- QueryMetricsController: GET /api/monitoring/query-metrics, /stats
- Auto-save in the streamChunkedTracks finally block (linked to QueryBenchmark data)

Phase C: fix N+1 + remove dead code
- Batch-preload VesselInfo: collect viewportVesselIds, then one batch query
- Delete ~400 lines of unused code: simplificationStrategy, executorService, processQueryInChunks,
  batchGetVesselInfo, processChunk, selectTableByTimeRange, groupRangesByDate, etc.

Phase D: code quality
- WKTReader: shared singleton → ThreadLocal (thread safety)
- avgSpeed calculation: 4 duplicated copies → extract a calculateAvgSpeed() helper

2,984 lines → 2,575 lines (409 lines removed)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 15:10:12 +09:00
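The Phase A fix above replaces an O(n) scan of all cancel flags with a direct per-queryId lookup, so one query's cancellation can no longer be observed by another. A minimal sketch of that shape (field and method names are illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CancelFlags {
    private final Map<String, Boolean> queryCancelFlags = new ConcurrentHashMap<>();

    public void cancel(String queryId) {
        queryCancelFlags.put(queryId, Boolean.TRUE);
    }

    // Before the fix, an O(n) iteration here could observe *another* query's
    // flag and terminate unrelated queries early.
    public boolean isQueryCancelled(String queryId) {
        return Boolean.TRUE.equals(queryCancelFlags.get(queryId)); // O(1), this query only
    }

    public void clear(String queryId) {
        queryCancelFlags.remove(queryId); // avoid leaking entries for finished queries
    }
}
```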
119e8e5238 Merge pull request 'refactor: improve SignalKindCode mapping rules — shipName BUOY detection + single substitution' (#88) from feature/signal-kind-code-refactor into develop 2026-03-02 13:45:19 +09:00
e0fc760754 docs: update release notes 2026-03-02 13:44:23 +09:00
5e035f0362 refactor: improve SignalKindCode mapping rules — shipName BUOY detection + single substitution + response-path optimization
- SignalKindCode mapping changes: aton→DEFAULT, tug/tender→DEFAULT,
  Vessel+towing/dredging/diving→DEFAULT, Vessel+leisure→DEFAULT
- shipName-based BUOY detection: 2 or more '.' '_' characters → BUOY
- Substitute once when writing to the cache; API responses use the DB/cache value directly
- Remove resolve() recomputation from 6 response paths

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 13:42:36 +09:00
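The shipName heuristic above can be stated in a few lines: count '.' and '_' characters and classify the target as BUOY at two or more. A sketch (the threshold and characters come from the commit message; the class is illustrative):

```java
public class BuoyDetector {
    /** Buoy transponders often carry coded names like "N.W_1" rather than vessel names. */
    public static boolean looksLikeBuoy(String shipName) {
        if (shipName == null) return false;
        int marks = 0;
        for (char c : shipName.toCharArray()) {
            if (c == '.' || c == '_') marks++;
        }
        return marks >= 2;
    }
}
```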
bbd14fab8c Merge pull request 'release: 2026-03-02 (98 commits)' (#87) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 2m22s
2026-03-02 00:21:39 +09:00
007af70166 docs: tidy release notes (2026-03-02) 2026-03-02 00:20:41 +09:00
047117033b Merge pull request 'feat: O(1) cache lookup + memory budget management + port L2 block simplification' (#86) from feature/perf-cache-optimization into develop 2026-03-02 00:13:55 +09:00
b95e0f1d1c docs: update release notes 2026-03-02 00:12:41 +09:00
322b04b309 feat: O(1) cache lookup + memory budget management + port L2 block simplification
Phase 1: direct key-based O(1) lookup for the L1/L2/L3 caches (replaces full scans)
Phase 2: logical partitioning of the 64GB JVM memory budget (cache 35GB / queries 20GB)
Phase 3: Nth-point simplification of L2 HourlyTrackCache entries older than 6 hours

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 00:07:31 +09:00
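The Phase 2 memory budget above is a logical split, not a physical one: components reserve against a per-partition cap before admitting work and reject when the cap would be exceeded. A minimal sketch under that assumption (all names illustrative; the 35GB/20GB split is from the commit message):

```java
public class MemoryBudget {
    public static final long GB = 1L << 30;
    private final long capBytes;
    private long usedBytes;

    public MemoryBudget(long capBytes) { this.capBytes = capBytes; }

    /** Reserve before admitting work; reject instead of risking heap exhaustion. */
    public synchronized boolean tryReserve(long bytes) {
        if (usedBytes + bytes > capBytes) return false;
        usedBytes += bytes;
        return true;
    }

    public synchronized void release(long bytes) {
        usedBytes = Math.max(0, usedBytes - bytes);
    }
}
```

Usage would be one instance per partition, e.g. `new MemoryBudget(35 * MemoryBudget.GB)` for the cache side and a 20GB instance for queries, each checked on admission.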
5d1db92857 Merge pull request 'chore: add CLAUDE_BOT_TOKEN environment variable to settings.json' (#85) from feature/dashboard-phase-1 into develop 2026-03-01 23:12:39 +09:00
77613af4be Merge pull request 'perf: optimize API response size + bring Swagger up to date' (#84) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 2m16s
2026-03-01 22:55:52 +09:00
82ae2651e1 Merge pull request 'perf: optimize API response size + bring Swagger up to date' (#83) from feature/dashboard-phase-1 into develop 2026-03-01 22:55:26 +09:00
57226ef2a9 Merge pull request 'feat: improve DataPipeline daily chart visualization — Stacked Bar + Duration Bar' (#82) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 2m49s
2026-02-21 12:29:12 +09:00
1e0656632a Merge pull request 'feat: improve DataPipeline daily chart visualization — Stacked Bar + Duration Bar' (#81) from feature/dashboard-phase-1 into develop 2026-02-21 12:28:53 +09:00
8c1cfdd6b5 Merge pull request 'fix: daily merge dropping all rows due to ST_AsText WKT whitespace mismatch' (#80) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 4m12s
2026-02-21 01:17:02 +09:00
9fcd374f4b Merge pull request 'fix: daily merge dropping all rows due to ST_AsText WKT whitespace mismatch' (#79) from feature/dashboard-phase-1 into develop 2026-02-21 01:16:40 +09:00
51780b80bb Merge pull request 'fix: extend L2 warm-up range — include yesterday's data when starting before the Daily Job' (#78) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 3m8s
2026-02-21 00:43:44 +09:00
9f16fee8b2 Merge pull request 'fix: extend L2 warm-up range — include yesterday's data when starting before the Daily Job' (#77) from feature/dashboard-phase-1 into develop 2026-02-21 00:43:24 +09:00
e16aa2d645 Merge pull request 'chore: raise L2 HourlyTrackCache maxSize 3.5M→7M' (#76) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 4m33s
2026-02-21 00:36:48 +09:00
397164e6cd Merge pull request 'chore: raise L2 HourlyTrackCache maxSize 3.5M→7M' (#75) from feature/dashboard-phase-1 into develop 2026-02-21 00:36:18 +09:00
893a54ec8e Merge pull request 'chore: raise L2 HourlyTrackCache maxSize 3.5M→5M' (#74) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 3m25s
2026-02-21 00:28:28 +09:00
e28d49958a Merge pull request 'chore: raise L2 HourlyTrackCache maxSize 3.5M→5M' (#73) from feature/dashboard-phase-1 into develop 2026-02-21 00:28:03 +09:00
a0963a1332 Merge pull request 'fix: html2canvas oklch/oklab color-parsing error' (#72) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 3m8s
2026-02-20 20:50:42 +09:00
b7f473b0da Merge pull request 'fix: html2canvas oklch/oklab color-parsing error' (#71) from feature/dashboard-phase-1 into develop 2026-02-20 20:50:17 +09:00
3f55f80d2b Merge pull request 'feat: UI layout fixes + zone-analysis/STS report modals + image export' (#70) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 5m10s
2026-02-20 18:46:55 +09:00
fa2610c9a1 Merge pull request 'feat: UI layout fixes + zone-analysis/STS report modals + image export' (#69) from feature/dashboard-phase-1 into develop 2026-02-20 18:46:30 +09:00
d3520a6fd8 Merge pull request 'feat: port multi-zone movement track analysis + STS contact analysis to the frontend' (#68) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 3m47s
2026-02-20 17:08:13 +09:00
efe6073cc7 Merge pull request 'feat: port multi-zone movement track analysis + STS contact analysis to the frontend' (#67) from feature/dashboard-phase-1 into develop 2026-02-20 17:07:45 +09:00
a7b9e76d51 Merge pull request 'feat: ship-type icons for tracks/replay + Raw Data panel' (#66) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 2m41s
2026-02-20 16:00:18 +09:00
b5bb22ea14 Merge pull request 'feat: ship-type icons for tracks/replay + Raw Data panel' (#65) from feature/dashboard-phase-1 into develop 2026-02-20 15:59:56 +09:00
9ae56f5517 Merge pull request 'fix: track-query 500 error + unresponsive replay queries' (#64) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 3m35s
2026-02-20 15:37:39 +09:00
33acf86277 Merge pull request 'fix: track-query 500 error + unresponsive replay queries' (#63) from feature/dashboard-phase-1 into develop 2026-02-20 15:37:05 +09:00
137a22a411 Merge pull request 'feat: port Ship-GIS features — recent positions/vessel tracks/viewport replay' (#62) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 4m0s
2026-02-20 15:22:11 +09:00
e0dc0b855a Merge pull request 'feat: port Ship-GIS features — recent positions/vessel tracks/viewport replay' (#61) from feature/dashboard-phase-1 into develop 2026-02-20 15:21:41 +09:00
14e61e6bd0 Merge pull request 'perf: Daily Job in-memory-cache optimization — remove N+1 SQL' (#60) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 2m52s
2026-02-20 11:39:54 +09:00
fc7beac9f7 Merge pull request 'fix: shipimg path conflict' (#59) from feature/dashboard-phase-1 into develop 2026-02-20 11:39:32 +09:00
9a1d4b7b2e Merge pull request 'feat: recent-positions IMO + ship-photo enrichment' (#57) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 2m46s
2026-02-20 02:18:23 +09:00
60f24a61a5 Merge pull request 'fix: shipimg path conflict' (#58) from feature/dashboard-phase-1 into develop 2026-02-20 02:17:35 +09:00
a2ae39a232 Merge pull request 'feat: recent-positions ship-photo enrichment' (#56) from feature/dashboard-phase-1 into develop 2026-02-20 02:12:07 +09:00
c2a0c43fd6 Merge pull request 'feat: recent-positions IMO field + ship-photo availability list API' (#54) from feature/dashboard-phase-1 into develop 2026-02-20 02:05:52 +09:00
0caea9c766 Merge pull request 'fix: UTC timezone conversion + add partial Daily cache fallback' (#53) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 3m20s
2026-02-20 00:47:45 +09:00
f5738978ed Merge pull request 'fix: UTC timezone conversion + add partial Daily cache fallback' (#52) from feature/dashboard-phase-1 into develop 2026-02-20 00:47:10 +09:00
e9aa0302cd Merge pull request 'feat: latest-position lookup API for China-permitted vessels' (#51) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 2m34s
2026-02-19 23:04:28 +09:00
c8369e193f Merge pull request 'feat: latest-position lookup API for China-permitted vessels' (#50) from feature/dashboard-phase-1 into develop 2026-02-19 23:04:22 +09:00
9f98682347 Merge pull request 'fix: add DB fallback for MMSIs missing from V2 cache lookups' (#49) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 2m48s
2026-02-19 22:53:19 +09:00
b7dc6eacf8 Merge pull request 'fix: add DB fallback for MMSIs missing from V2 cache lookups' (#48) from feature/dashboard-phase-1 into develop 2026-02-19 22:52:42 +09:00
e6cb152d29 Merge pull request 'feat: ChnPrmShip-only DB history + API enrichment + ShipImage V2' (#47) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 2m35s
2026-02-19 22:38:17 +09:00
3669b837f1 Merge pull request 'feat: ChnPrmShip-only DB history + API enrichment + ShipImage V2' (#46) from feature/dashboard-phase-1 into develop 2026-02-19 22:37:58 +09:00
85c8b5b28e Merge pull request 'docs: bring Swagger UI up to date' (#45) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 2m34s
2026-02-19 21:30:31 +09:00
198afc1fdc Merge pull request 'docs: bring Swagger UI up to date — server URL, @Schema, @Parameter' (#44) from feature/dashboard-phase-1 into develop 2026-02-19 21:30:06 +09:00
b581233240 Merge pull request 'fix: correct the cache maxSize config path' (#43) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 2m36s
2026-02-19 21:00:42 +09:00
f34d4921b7 Merge pull request 'fix: correct the cache maxSize config path' (#42) from feature/dashboard-phase-1 into develop 2026-02-19 20:53:18 +09:00
a237648fe7 Merge pull request 'fix: raise L1/L2 cache maxSize + fix AisTarget hitRate type' (#41) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 3m10s
2026-02-19 20:47:46 +09:00
10e99b6cee Merge pull request 'fix: raise L1/L2 cache maxSize + fix AisTarget hitRate type' (#40) from feature/dashboard-phase-1 into develop 2026-02-19 20:47:45 +09:00
5cf6e32d71 Merge pull request 'perf: API response optimization + progressive rendering + sea-grid choropleth map' (#39) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 2m35s
2026-02-19 20:26:23 +09:00
8e7e8ff2de Merge pull request 'perf: API response optimization + progressive rendering + sea-grid choropleth map' (#38) from feature/dashboard-phase-1 into develop 2026-02-19 20:25:52 +09:00
c3d0b15f97 Merge pull request 'feat: Phase 4 — abnormal tracks + system metrics (7/7 complete)' (#37) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 2m40s
2026-02-19 19:20:31 +09:00
97b71b16e1 Merge pull request 'feat: Phase 4 — abnormal tracks + system metrics (7/7 complete)' (#36) from feature/dashboard-phase-1 into develop 2026-02-19 19:20:21 +09:00
6f17006811 Merge pull request 'feat: Phase 3 — API Explorer map scaffolding' (#35) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 2m35s
2026-02-19 19:14:40 +09:00
20cb4b3337 Merge pull request 'feat: Phase 3 — API Explorer map scaffolding' (#34) from feature/dashboard-phase-1 into develop 2026-02-19 19:14:30 +09:00
f49f1ac4e4 Merge pull request 'perf: raise L1/L2 cache maxSize (based on production measurements)' (#33) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 2m45s
2026-02-19 19:10:05 +09:00
c44075d72b Merge pull request 'perf: raise L1/L2 cache maxSize (based on production measurements)' (#32) from feature/dashboard-phase-1 into develop 2026-02-19 19:09:55 +09:00
eabeee1bb7 Merge pull request 'fix: sea-grid statistics ROUND function type-casting error' (#31) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 2m31s
2026-02-19 19:01:28 +09:00
9ca3057abd Merge pull request 'fix: sea-grid statistics ROUND function type-casting error' (#30) from feature/dashboard-phase-1 into develop 2026-02-19 19:01:17 +09:00
ec20dc8e76 Merge pull request 'fix: simplify the sea-grid lookup bounding box' (#29) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 2m36s
2026-02-19 18:54:44 +09:00
908f047f55 Merge pull request 'fix: simplify the sea-grid lookup bounding box' (#28) from feature/dashboard-phase-1 into develop 2026-02-19 18:54:35 +09:00
e2a692a5e2 Merge pull request 'fix: sea-grid statistics debug logging' (#27) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 2m31s
2026-02-19 18:50:25 +09:00
c72e3cad36 Merge pull request 'debug: sea-grid lookup error logging' (#24) from feature/dashboard-phase-1 into develop 2026-02-19 18:50:13 +09:00
86f0c457e3 Merge pull request 'fix: harden toLocalDateTime conversion' (#23) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 2m29s
2026-02-19 18:39:54 +09:00
b46b9335a0 Merge pull request 'fix: harden toLocalDateTime conversion' (#22) from feature/dashboard-phase-1 into develop 2026-02-19 18:39:50 +09:00
1c963b3d75 Merge pull request 'fix: Timestamp casting error' (#21) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 2m31s
2026-02-19 18:36:19 +09:00
8911259f29 Merge pull request 'fix: Timestamp casting error' (#20) from feature/dashboard-phase-1 into develop 2026-02-19 18:35:35 +09:00
065f14ede4 Merge pull request 'fix: switch MonitoringController legacy tile queries + fix sea-grid statistics' (#19) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 2m45s
2026-02-19 18:30:55 +09:00
1f209954bf Merge pull request 'fix: switch MonitoringController legacy tile queries → AIS position/track based' (#18) from feature/dashboard-phase-1 into develop 2026-02-19 18:30:39 +09:00
e70def6611 Merge pull request 'chore: add AIS API credentials (prod)' (#17) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 2m32s
2026-02-19 18:08:56 +09:00
e8f17dbd9a Merge pull request 'chore: add AIS API credentials' (#16) from feature/dashboard-phase-1 into develop 2026-02-19 18:08:53 +09:00
318d2aefbb Merge pull request 'Release: Phase 2 — DataPipeline + AreaStats' (#15) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 2m31s
2026-02-19 17:56:58 +09:00
53e8c2eb02 Merge pull request 'feat: Phase 2 — data pipeline + sea-grid statistics page' (#14) from feature/dashboard-phase-1 into develop 2026-02-19 17:56:20 +09:00
478aa44e59 Merge pull request 'Release: fix Dashboard API integration errors + Phase 1 stabilization' (#13) from develop into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 2m31s
2026-02-19 17:35:31 +09:00
dca887b292 Merge pull request 'fix: fix Dashboard API integration errors — cache monitoring + rendering safety' (#12) from feature/dashboard-phase-1 into develop 2026-02-19 17:34:23 +09:00
997dad8117 Merge pull request 'feat: React 19 SPA Dashboard Phase 1 + safe deployment system' (#11) from feature/dashboard-phase-1 into main
All checks were successful
Build & Deploy / build-and-deploy (push) Successful in 2m30s
Reviewed-on: #11
2026-02-19 17:09:18 +09:00
76f71fb374 Merge pull request 'feat: React 19 SPA Dashboard Phase 1 + safe deployment system' (#10) from feature/dashboard-phase-1 into develop
Reviewed-on: #10
2026-02-19 17:09:04 +09:00
941fb3cf4a Merge pull request 'release: Gitea Actions CI/CD pipeline + systemd service setup' (#9) from develop into main
Some checks failed
Build & Deploy / build-and-deploy (push) Failing after 3m6s
Reviewed-on: #9
2026-02-19 14:32:05 +09:00
fb1a9f21f2 Merge pull request 'ci: Gitea Actions CI/CD pipeline + systemd service setup' (#8) from feature/multilevel-track-cache into develop 2026-02-19 14:30:37 +09:00
b9ace1681c Merge pull request 'release: SNP API migration + in-memory cache optimization + unified multi-level cache lookup' (#7) from develop into main
Reviewed-on: #7
2026-02-19 14:26:30 +09:00
29bf116246 Merge pull request 'feat: unified multi-level in-memory cache (L1/L2/L3) lookup + CACHE-MONITOR logs' (#6) from feature/multilevel-track-cache into develop 2026-02-19 13:34:22 +09:00
bfc4614ce7 Merge pull request 'feat: convert stale data to abnormal tracks + remove vesselStatic N+1 queries' (#5) from feature/stale-data-abnormal-track into develop 2026-02-19 13:34:12 +09:00
5f0ef5e812 Merge pull request 'perf: switch Hourly Job to in-memory merge — remove N+1 SQL' (#4) from feature/hourly-inmemory-merge into develop 2026-02-19 13:33:30 +09:00
daab14f0ad Merge pull request 'chore: initial team workflow setup + SNP API migration and legacy cleanup' (#3) from chore/team-workflow-init into develop 2026-02-19 13:32:37 +09:00
e3816e6ecb Merge pull request 'fix(rules): add SLF4J logging guideline' (#2) from fix/logging-guideline into develop 2026-02-19 07:29:27 +09:00
636760dd1d fix(rules): add SLF4J logging guideline (no printf-style formats) 2026-02-19 07:29:13 +09:00
b749417fc5 Merge pull request 'chore: initial team workflow v1.2.0 setup (java-maven)' (#1) from chore/team-workflow-init into develop
Reviewed-on: #1
2026-02-18 20:56:04 +09:00
68 changed files with 4,381 additions and 1,443 deletions

View file

@ -44,21 +44,7 @@
 - `@Builder` allowed
 - `@Data` forbidden (only explicitly required annotations)
 - `@AllArgsConstructor` forbidden on its own (use together with `@Builder`)
-- Use the `@Slf4j` logger
+## Logging
+- Use the `@Slf4j` (Lombok) logger
+- Never use printf-style formats in SLF4J `{}` placeholders (`{:.1f}`, `{:d}`, `{%s}`, etc.)
+- When numeric formatting is needed, convert with `String.format()` first and pass the result
+```java
+// Wrong
+log.info("Rate: {:.1f}%", rate);
+// Correct
+log.info("Rate: {}%", String.format("%.1f", rate));
+```
+- When logging an exception, pass the exception object as the last argument (no placeholder needed)
+```java
+log.error("Processing failed: {}", id, exception);
+```
 ## Exception handling
 - Define custom Exception classes for business exceptions

View file

@ -1,7 +1,7 @@
 {
   "$schema": "https://json.schemastore.org/claude-code-settings.json",
   "env": {
-    "CLAUDE_BOT_TOKEN": "4804f9f63e799e25d9a8b381e89c8bff11471b7a"
+    "CLAUDE_BOT_TOKEN": "ac15488ad66463bd5c4e3be1fa6dd5b2743813c5"
   },
   "permissions": {
     "allow": [

View file

@@ -46,72 +46,94 @@ curl -sf "${GITEA_URL}/gc/template-react-ts/raw/branch/develop/.editorconfig"
### 3. .claude/ 디렉토리 구성
이미 팀 표준 파일이 존재하면 건너뜀. 없는 경우 위의 URL 패턴으로 Gitea에서 다운로드:
- `.claude/settings.json` — 프로젝트 타입별 표준 권한 설정 + env(CLAUDE_BOT_TOKEN 등) + hooks 섹션 (4단계 참조)
-- `.claude/rules/` — 팀 규칙 파일 (team-policy, git-workflow, code-style, naming, testing)
-- `.claude/skills/` — 팀 스킬 (create-mr, fix-issue, sync-team-workflow, init-project)
-### 4. Hook 스크립트 생성
-`.claude/scripts/` 디렉토리를 생성하고 다음 스크립트 파일 생성 (chmod +x):
-- `.claude/scripts/on-pre-compact.sh`:
-```bash
-#!/bin/bash
-# PreCompact hook: systemMessage만 지원 (hookSpecificOutput 사용 불가)
-INPUT=$(cat)
-cat <<RESP
-{
-"systemMessage": "컨텍스트 압축이 시작됩니다. 반드시 다음을 수행하세요:\n\n1. memory/MEMORY.md - 핵심 작업 상태 갱신 (200줄 이내)\n2. memory/project-snapshot.md - 변경된 패키지/타입 정보 업데이트\n3. memory/project-history.md - 이번 세션 변경사항 추가\n4. memory/api-types.md - API 인터페이스 변경이 있었다면 갱신\n5. 미완료 작업이 있다면 TodoWrite에 남기고 memory에도 기록"
-}
-RESP
-```
-- `.claude/scripts/on-post-compact.sh`:
-```bash
-#!/bin/bash
-INPUT=$(cat)
-CWD=$(echo "$INPUT" | python3 -c "import sys,json;print(json.load(sys.stdin).get('cwd',''))" 2>/dev/null || echo "")
-if [ -z "$CWD" ]; then
-CWD=$(pwd)
-fi
-PROJECT_HASH=$(echo "$CWD" | sed 's|/|-|g')
-MEMORY_DIR="$HOME/.claude/projects/$PROJECT_HASH/memory"
-CONTEXT=""
-if [ -f "$MEMORY_DIR/MEMORY.md" ]; then
-SUMMARY=$(head -100 "$MEMORY_DIR/MEMORY.md" | python3 -c "import sys;print(sys.stdin.read().replace('\\\\','\\\\\\\\').replace('\"','\\\\\"').replace('\n','\\\\n'))" 2>/dev/null)
-CONTEXT="컨텍스트가 압축되었습니다.\\n\\n[세션 요약]\\n${SUMMARY}"
-fi
-if [ -f "$MEMORY_DIR/project-snapshot.md" ]; then
-SNAP=$(head -50 "$MEMORY_DIR/project-snapshot.md" | python3 -c "import sys;print(sys.stdin.read().replace('\\\\','\\\\\\\\').replace('\"','\\\\\"').replace('\n','\\\\n'))" 2>/dev/null)
-CONTEXT="${CONTEXT}\\n\\n[프로젝트 최신 상태]\\n${SNAP}"
-fi
-if [ -n "$CONTEXT" ]; then
-CONTEXT="${CONTEXT}\\n\\n위 내용을 참고하여 작업을 이어가세요. 상세 내용은 memory/ 디렉토리의 각 파일을 참조하세요."
-echo "{\"hookSpecificOutput\":{\"additionalContext\":\"${CONTEXT}\"}}"
-else
-echo "{\"hookSpecificOutput\":{\"additionalContext\":\"컨텍스트가 압축되었습니다. memory 파일이 없으므로 사용자에게 이전 작업 내용을 확인하세요.\"}}"
-fi
-```
-- `.claude/scripts/on-commit.sh`:
-```bash
-#!/bin/bash
-INPUT=$(cat)
-COMMAND=$(echo "$INPUT" | python3 -c "import sys,json;print(json.load(sys.stdin).get('tool_input',{}).get('command',''))" 2>/dev/null || echo "")
-if echo "$COMMAND" | grep -qE 'git commit'; then
-cat <<RESP
-{
-"hookSpecificOutput": {
-"additionalContext": "커밋이 감지되었습니다. 다음을 수행하세요:\n1. docs/CHANGELOG.md에 변경 내역 추가\n2. memory/project-snapshot.md에서 변경된 부분 업데이트\n3. memory/project-history.md에 이번 변경사항 추가\n4. API 인터페이스 변경 시 memory/api-types.md 갱신\n5. 프로젝트에 lint 설정이 있다면 lint 결과를 확인하고 문제를 수정"
-}
-}
-RESP
-else
-echo '{}'
-fi
-```
⚠️ 팀 규칙(.claude/rules/), 에이전트(.claude/agents/), 스킬 6종, 스크립트는 12단계(sync-team-workflow)에서 자동 다운로드된다. 여기서는 settings.json만 설정한다.
### 3.5. Gitea 토큰 설정
**CLAUDE_BOT_TOKEN** (팀 공용): `settings.json`의 `env` 필드에 이미 포함되어 있음 (3단계에서 설정됨). 별도 조치 불필요.
**GITEA_TOKEN** (개인): `/push`, `/mr`, `/release` 등 Git 스킬에 필요한 개인 토큰.
```bash
# 현재 GITEA_TOKEN 설정 여부 확인
if [ -z "$GITEA_TOKEN" ]; then
echo "GITEA_TOKEN 미설정"
fi
```
**GITEA_TOKEN이 없는 경우**, 다음 안내를 **AskUserQuestion**으로 표시:
**질문**: "GITEA_TOKEN이 설정되지 않았습니다. Gitea 개인 토큰을 생성하시겠습니까?"
- 옵션 1: 토큰 생성 안내 보기 (추천)
- 옵션 2: 이미 있음 (토큰 입력)
- 옵션 3: 나중에 하기
**토큰 생성 안내 선택 시**, 다음 내용을 표시:
```
📋 Gitea 토큰 생성 방법:
1. 브라우저에서 접속:
https://gitea.gc-si.dev/user/settings/applications
2. "Manage Access Tokens" 섹션에서 "Generate New Token" 클릭
3. 입력:
- Token Name: "claude-code" (자유롭게 지정)
- Repository and Organization Access: ✅ All (public, private, and limited)
4. Select permissions (아래 4개만 설정, 나머지는 No Access 유지):
┌─────────────────┬──────────────────┬──────────────────────────────┐
│ 항목 │ 권한 │ 용도 │
├─────────────────┼──────────────────┼──────────────────────────────┤
│ issue │ Read and Write │ /fix-issue 이슈 조회/코멘트 │
│ organization │ Read │ gc 조직 리포 접근 │
│ repository │ Read and Write │ /push, /mr, /release API 호출 │
│ user │ Read │ API 사용자 인증 확인 │
└─────────────────┴──────────────────┴──────────────────────────────┘
5. "Generate Token" 클릭 → ⚠️ 토큰이 한 번만 표시됩니다! 반드시 복사하세요.
```
표시 후 **AskUserQuestion**: "생성한 토큰을 입력하세요"
- 옵션 1: 토큰 입력 (Other로 입력)
- 옵션 2: 나중에 하기
**토큰 입력 시**:
1. Gitea API로 유효성 검증:
```bash
curl -sf "https://gitea.gc-si.dev/api/v1/user" \
-H "Authorization: token <입력된 토큰>"
```
- 성공: `✅ <login> (<full_name>) 인증 확인` 출력
- 실패: `❌ 토큰이 유효하지 않습니다. 다시 확인해주세요.` 출력 → 재입력 요청
2. `.claude/settings.local.json`에 저장 (이 파일은 .gitignore에 포함, 리포 커밋 안됨):
```json
{
"env": {
"GITEA_TOKEN": "<입력된 토큰>"
}
}
```
기존 `settings.local.json`이 있으면 `env.GITEA_TOKEN`만 추가/갱신.
**나중에 하기 선택 시**: 경고 표시 후 다음 단계로 진행:
```
⚠️ GITEA_TOKEN 없이는 /push, /mr, /release 스킬을 사용할 수 없습니다.
나중에 토큰을 생성하면 .claude/settings.local.json에 다음을 추가하세요:
{ "env": { "GITEA_TOKEN": "your-token-here" } }
```
### 4. Hook 스크립트 설정
⚠️ `.claude/scripts/` 스크립트 파일은 12단계(sync-team-workflow)에서 서버로부터 자동 다운로드된다.
여기서는 `settings.json`에 hooks 섹션만 설정한다.
`.claude/settings.json`에 hooks 섹션이 없으면 추가 (기존 settings.json의 내용에 병합):
```json
@@ -199,6 +221,20 @@ chmod +x .githooks/*
*.local
```
**팀 워크플로우 관리 경로** (sync로 생성/관리되는 파일, 리포에 커밋하지 않음):
```
# Team workflow (managed by /sync-team-workflow)
.claude/rules/
.claude/agents/
.claude/skills/push/
.claude/skills/mr/
.claude/skills/create-mr/
.claude/skills/release/
.claude/skills/version/
.claude/skills/fix-issue/
.claude/scripts/
```
### 8. Git exclude 설정
`.git/info/exclude` 파일을 읽고, 기존 내용을 보존하면서 하단에 추가:
@@ -242,7 +278,14 @@ curl -sf --max-time 5 "https://gitea.gc-si.dev/gc/template-common/raw/branch/dev
}
```
-### 12. 검증 및 요약
### 12. 팀 워크플로우 최신화
`/sync-team-workflow`를 자동으로 1회 실행하여 최신 팀 파일(rules, agents, skills 6종, scripts, hooks)을 서버에서 다운로드하고 로컬에 적용한다.
이 단계에서 `.claude/rules/`, `.claude/agents/`, `.claude/skills/push/` 등 팀 관리 파일이 생성된다.
(이 파일들은 7단계에서 .gitignore에 추가되었으므로 리포에 커밋되지 않음)
### 13. 검증 및 요약
- 생성/수정된 파일 목록 출력
- `git config core.hooksPath` 확인
- 빌드 명령 실행 가능 확인


@@ -30,6 +30,43 @@ CAN_PUSH=$(echo "$PERMISSIONS" | python3 -c "import sys,json; print(json.load(sy
- `CAN_PUSH`가 `False`이면: "MR 생성 권한이 필요합니다. 프로젝트 관리자에게 요청하세요." 안내 후 종료
### 0.5. 팀 워크플로우 최신화 확인
`.claude/workflow-version.json`이 존재하지 않으면 이 단계를 건너뛴다 (팀 프로젝트가 아닌 경우).
```bash
# 로컬 설정 읽기
GITEA_URL=$(python3 -c "import json; print(json.load(open('.claude/workflow-version.json')).get('gitea_url', 'https://gitea.gc-si.dev'))" 2>/dev/null)
PROJECT_TYPE=$(python3 -c "import json; print(json.load(open('.claude/workflow-version.json')).get('project_type', ''))" 2>/dev/null)
CUSTOM_PRECOMMIT=$(python3 -c "import json; print(json.load(open('.claude/workflow-version.json')).get('custom_pre_commit', False))" 2>/dev/null)
# 서버 해시 조회 (custom_pre_commit이면 pre-commit 제외 해시 사용)
SERVER_VER=$(curl -sf --max-time 5 "${GITEA_URL}/gc/template-common/raw/branch/develop/workflow-version.json")
if [ "$CUSTOM_PRECOMMIT" = "True" ]; then
SERVER_HASH=$(echo "$SERVER_VER" | python3 -c "import sys,json; print(json.load(sys.stdin).get('content_hashes_custom_precommit',{}).get('${PROJECT_TYPE}',''))" 2>/dev/null)
else
SERVER_HASH=$(echo "$SERVER_VER" | python3 -c "import sys,json; print(json.load(sys.stdin).get('content_hashes',{}).get('${PROJECT_TYPE}',''))" 2>/dev/null)
fi
# 로컬 해시 계산 (custom_pre_commit이면 .githooks/pre-commit 제외)
if [ "$CUSTOM_PRECOMMIT" = "True" ]; then
LOCAL_HASH=$(find .claude/rules .claude/agents .claude/scripts .githooks \
.claude/skills/push .claude/skills/mr .claude/skills/create-mr \
.claude/skills/release .claude/skills/version .claude/skills/fix-issue \
-type f ! -path '.githooks/pre-commit' 2>/dev/null | sort | xargs cat 2>/dev/null | shasum -a 256 | cut -d' ' -f1)
else
LOCAL_HASH=$(find .claude/rules .claude/agents .claude/scripts .githooks \
.claude/skills/push .claude/skills/mr .claude/skills/create-mr \
.claude/skills/release .claude/skills/version .claude/skills/fix-issue \
-type f 2>/dev/null | sort | xargs cat 2>/dev/null | shasum -a 256 | cut -d' ' -f1)
fi
```
**비교 결과 처리**:
- **서버 조회 실패** (`SERVER_HASH` 비어있음): "⚠️ 서버 연결 불가, 워크플로우 체크를 건너뜁니다" 경고 후 다음 단계 진행
- **일치** (`LOCAL_HASH == SERVER_HASH`): 다음 단계 진행
- **불일치**: "⚠️ 팀 워크플로우가 최신이 아닙니다. 동기화를 실행합니다..." 출력 → **sync-team-workflow 절차를 자동 실행** → 완료 후 원래 작업 계속
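위 셸 절차의 로컬 해시 계산(`find | sort | xargs cat | shasum -a 256`)을 파이썬으로 옮기면 대략 다음과 같다. 셸 `sort`의 로캘별 정렬 순서와 파이썬 `sorted()`가 다를 수 있으므로 정렬 기준이 일치하는지 확인이 필요하며, 함수명과 인자 구성은 설명용 가정이다.

```python
import hashlib
from pathlib import Path

def workflow_content_hash(roots, exclude=frozenset()):
    """roots 아래의 모든 파일을 경로순으로 정렬해 내용을 이어붙인 뒤
    SHA-256 해시를 계산한다 (custom_pre_commit이면 exclude로 pre-commit 제외)."""
    files = []
    for root in roots:
        p = Path(root)
        if p.is_dir():
            files.extend(f for f in p.rglob("*") if f.is_file())
    h = hashlib.sha256()
    for f in sorted(files, key=str):
        if str(f) in exclude:
            continue  # 예: .githooks/pre-commit 제외
        h.update(f.read_bytes())
    return h.hexdigest()
```

이 값을 서버 `workflow-version.json`의 `content_hashes`(또는 `content_hashes_custom_precompact` 계열 필드)와 비교하는 용도다.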
### 1. 사전 검증
```bash


@@ -30,6 +30,43 @@ CAN_PUSH=$(echo "$PERMISSIONS" | python3 -c "import sys,json; print(json.load(sy
- `CAN_PUSH`가 `False`이면: "push 권한이 필요합니다. 프로젝트 관리자에게 요청하세요." 안내 후 종료
### 0.5. 팀 워크플로우 최신화 확인
`.claude/workflow-version.json`이 존재하지 않으면 이 단계를 건너뛴다 (팀 프로젝트가 아닌 경우).
```bash
# 로컬 설정 읽기
GITEA_URL=$(python3 -c "import json; print(json.load(open('.claude/workflow-version.json')).get('gitea_url', 'https://gitea.gc-si.dev'))" 2>/dev/null)
PROJECT_TYPE=$(python3 -c "import json; print(json.load(open('.claude/workflow-version.json')).get('project_type', ''))" 2>/dev/null)
CUSTOM_PRECOMMIT=$(python3 -c "import json; print(json.load(open('.claude/workflow-version.json')).get('custom_pre_commit', False))" 2>/dev/null)
# 서버 해시 조회 (custom_pre_commit이면 pre-commit 제외 해시 사용)
SERVER_VER=$(curl -sf --max-time 5 "${GITEA_URL}/gc/template-common/raw/branch/develop/workflow-version.json")
if [ "$CUSTOM_PRECOMMIT" = "True" ]; then
SERVER_HASH=$(echo "$SERVER_VER" | python3 -c "import sys,json; print(json.load(sys.stdin).get('content_hashes_custom_precommit',{}).get('${PROJECT_TYPE}',''))" 2>/dev/null)
else
SERVER_HASH=$(echo "$SERVER_VER" | python3 -c "import sys,json; print(json.load(sys.stdin).get('content_hashes',{}).get('${PROJECT_TYPE}',''))" 2>/dev/null)
fi
# 로컬 해시 계산 (custom_pre_commit이면 .githooks/pre-commit 제외)
if [ "$CUSTOM_PRECOMMIT" = "True" ]; then
LOCAL_HASH=$(find .claude/rules .claude/agents .claude/scripts .githooks \
.claude/skills/push .claude/skills/mr .claude/skills/create-mr \
.claude/skills/release .claude/skills/version .claude/skills/fix-issue \
-type f ! -path '.githooks/pre-commit' 2>/dev/null | sort | xargs cat 2>/dev/null | shasum -a 256 | cut -d' ' -f1)
else
LOCAL_HASH=$(find .claude/rules .claude/agents .claude/scripts .githooks \
.claude/skills/push .claude/skills/mr .claude/skills/create-mr \
.claude/skills/release .claude/skills/version .claude/skills/fix-issue \
-type f 2>/dev/null | sort | xargs cat 2>/dev/null | shasum -a 256 | cut -d' ' -f1)
fi
```
**비교 결과 처리**:
- **서버 조회 실패** (`SERVER_HASH` 비어있음): "⚠️ 서버 연결 불가, 워크플로우 체크를 건너뜁니다" 경고 후 다음 단계 진행
- **일치** (`LOCAL_HASH == SERVER_HASH`): 다음 단계 진행
- **불일치**: "⚠️ 팀 워크플로우가 최신이 아닙니다. 동기화를 실행합니다..." 출력 → **sync-team-workflow 절차를 자동 실행** → 완료 후 원래 작업 계속
### 1. 현재 상태 수집
```bash


@@ -29,6 +29,43 @@ IS_ADMIN=$(echo "$PERMISSIONS" | python3 -c "import sys,json; print(json.load(sy
- `IS_ADMIN`이 `False`이면: "릴리즈는 프로젝트 관리자만 실행할 수 있습니다." 안내 후 종료
### 0.5. 팀 워크플로우 최신화 확인
`.claude/workflow-version.json`이 존재하지 않으면 이 단계를 건너뛴다 (팀 프로젝트가 아닌 경우).
```bash
# 로컬 설정 읽기
GITEA_URL=$(python3 -c "import json; print(json.load(open('.claude/workflow-version.json')).get('gitea_url', 'https://gitea.gc-si.dev'))" 2>/dev/null)
PROJECT_TYPE=$(python3 -c "import json; print(json.load(open('.claude/workflow-version.json')).get('project_type', ''))" 2>/dev/null)
CUSTOM_PRECOMMIT=$(python3 -c "import json; print(json.load(open('.claude/workflow-version.json')).get('custom_pre_commit', False))" 2>/dev/null)
# 서버 해시 조회 (custom_pre_commit이면 pre-commit 제외 해시 사용)
SERVER_VER=$(curl -sf --max-time 5 "${GITEA_URL}/gc/template-common/raw/branch/develop/workflow-version.json")
if [ "$CUSTOM_PRECOMMIT" = "True" ]; then
SERVER_HASH=$(echo "$SERVER_VER" | python3 -c "import sys,json; print(json.load(sys.stdin).get('content_hashes_custom_precommit',{}).get('${PROJECT_TYPE}',''))" 2>/dev/null)
else
SERVER_HASH=$(echo "$SERVER_VER" | python3 -c "import sys,json; print(json.load(sys.stdin).get('content_hashes',{}).get('${PROJECT_TYPE}',''))" 2>/dev/null)
fi
# 로컬 해시 계산 (custom_pre_commit이면 .githooks/pre-commit 제외)
if [ "$CUSTOM_PRECOMMIT" = "True" ]; then
LOCAL_HASH=$(find .claude/rules .claude/agents .claude/scripts .githooks \
.claude/skills/push .claude/skills/mr .claude/skills/create-mr \
.claude/skills/release .claude/skills/version .claude/skills/fix-issue \
-type f ! -path '.githooks/pre-commit' 2>/dev/null | sort | xargs cat 2>/dev/null | shasum -a 256 | cut -d' ' -f1)
else
LOCAL_HASH=$(find .claude/rules .claude/agents .claude/scripts .githooks \
.claude/skills/push .claude/skills/mr .claude/skills/create-mr \
.claude/skills/release .claude/skills/version .claude/skills/fix-issue \
-type f 2>/dev/null | sort | xargs cat 2>/dev/null | shasum -a 256 | cut -d' ' -f1)
fi
```
**비교 결과 처리**:
- **서버 조회 실패** (`SERVER_HASH` 비어있음): "⚠️ 서버 연결 불가, 워크플로우 체크를 건너뜁니다" 경고 후 다음 단계 진행
- **일치** (`LOCAL_HASH == SERVER_HASH`): 다음 단계 진행
- **불일치**: "⚠️ 팀 워크플로우가 최신이 아닙니다. 동기화를 실행합니다..." 출력 → **sync-team-workflow 절차를 자동 실행** → 완료 후 원래 작업 계속
### 1. 사전 검증
- 커밋되지 않은 변경 사항이 있으면 경고 ("먼저 /push로 커밋하세요")


@@ -3,123 +3,163 @@ name: sync-team-workflow
description: 팀 글로벌 워크플로우를 현재 프로젝트에 동기화합니다
---
-팀 글로벌 워크플로우의 최신 버전을 현재 프로젝트에 적용합니다.
팀 글로벌 워크플로우의 최신 파일을 서버에서 다운로드하여 로컬에 적용합니다.
호출 시 항상 서버 기준으로 전체 동기화합니다 (버전 비교 없음).
## 수행 절차
-### 1. 글로벌 버전 조회
-Gitea API로 template-common 리포의 workflow-version.json 조회:
### 1. 사전 조건 확인
`.claude/workflow-version.json` 존재 확인:
- 없으면 → "/init-project를 먼저 실행해주세요" 안내 후 종료
설정 읽기:
```bash
GITEA_URL=$(python3 -c "import json; print(json.load(open('.claude/workflow-version.json')).get('gitea_url', 'https://gitea.gc-si.dev'))" 2>/dev/null || echo "https://gitea.gc-si.dev")
-curl -sf "${GITEA_URL}/gc/template-common/raw/branch/develop/workflow-version.json"
PROJECT_TYPE=$(python3 -c "import json; print(json.load(open('.claude/workflow-version.json')).get('project_type', ''))" 2>/dev/null || echo "")
```
-### 2. 버전 비교
-로컬 `.claude/workflow-version.json`의 `applied_global_version` 필드와 비교:
-- 버전 일치 → "최신 버전입니다" 안내 후 종료
-- 버전 불일치 → 미적용 변경 항목 추출하여 표시
-### 3. 프로젝트 타입 감지
-자동 감지 순서:
-1. `.claude/workflow-version.json`의 `project_type` 필드 확인
-2. 없으면: `pom.xml` → java-maven, `build.gradle` → java-gradle, `package.json` → react-ts
프로젝트 타입이 비어있으면 자동 감지:
1. `pom.xml` → java-maven
2. `build.gradle` / `build.gradle.kts` → java-gradle
3. `package.json` + `tsconfig.json` → react-ts
4. 감지 실패 → 사용자에게 선택 요청
### Gitea 파일 다운로드 URL 패턴
-⚠️ Gitea raw 파일은 반드시 **web raw URL**을 사용해야 합니다 (`/api/v1/` 경로 사용 불가):
⚠️ Gitea raw 파일은 반드시 **web raw URL** 사용:
```bash
-GITEA_URL="${GITEA_URL:-https://gitea.gc-si.dev}"
# common 파일: ${GITEA_URL}/gc/template-common/raw/branch/develop/<파일경로>
-# 타입별 파일: ${GITEA_URL}/gc/template-<타입>/raw/branch/develop/<파일경로>
# 타입별 파일: ${GITEA_URL}/gc/template-${PROJECT_TYPE}/raw/branch/develop/<파일경로>
-# 예시:
-curl -sf "${GITEA_URL}/gc/template-common/raw/branch/develop/.claude/rules/team-policy.md"
-curl -sf "${GITEA_URL}/gc/template-react-ts/raw/branch/develop/.editorconfig"
```
-### 4. 파일 다운로드 및 적용
-위의 URL 패턴으로 해당 타입 + common 템플릿 파일 다운로드:
-#### 4-1. 규칙 파일 (덮어쓰기)
-팀 규칙은 로컬 수정 불가 — 항상 글로벌 최신으로 교체:
### 2. 디렉토리 준비
필요한 디렉토리가 없으면 생성:
```bash
mkdir -p .claude/rules .claude/agents .claude/scripts
mkdir -p .claude/skills/push .claude/skills/mr .claude/skills/create-mr
mkdir -p .claude/skills/release .claude/skills/version .claude/skills/fix-issue
mkdir -p .githooks
```
### 3. 서버 파일 다운로드 + 적용
각 파일을 `curl -sf` 로 다운로드하여 프로젝트 루트의 동일 경로에 저장.
다운로드 실패한 파일은 경고 출력 후 건너뜀.
#### 3-1. template-common 파일 (덮어쓰기)
**규칙 파일**:
```
.claude/rules/team-policy.md
.claude/rules/git-workflow.md
.claude/rules/release-notes-guide.md
.claude/rules/subagent-policy.md
-.claude/rules/code-style.md (타입별)
-.claude/rules/naming.md (타입별)
-.claude/rules/testing.md (타입별)
```
-#### 4-1b. 에이전트 파일 (덮어쓰기)
**에이전트 파일**:
```
.claude/agents/explorer.md
.claude/agents/implementer.md
.claude/agents/reviewer.md
```
-#### 4-2. settings.json (부분 갱신)
-⚠️ settings.json은 **타입별 템플릿**에서 다운로드 (template-common에는 없음):
-```bash
-curl -sf "${GITEA_URL}/gc/template-${PROJECT_TYPE}/raw/branch/develop/.claude/settings.json"
-```
-다운로드한 최신 settings.json과 로컬 settings.json을 비교하여 부분 갱신:
-- `env`: 글로벌 최신으로 교체 (CLAUDE_BOT_TOKEN 등 팀 공통 환경변수)
-- `deny` 목록: 글로벌 최신으로 교체
-- `allow` 목록: 기존 사용자 커스텀 유지 + 글로벌 기본값 병합
-- `hooks`: init-project SKILL.md의 hooks JSON 블록을 참조하여 교체 (없으면 추가)
-- SessionStart(compact) → on-post-compact.sh
-- PreCompact → on-pre-compact.sh
-- PostToolUse(Bash) → on-commit.sh
-#### 4-3. 스킬 파일 (덮어쓰기)
**스킬 파일 (6종)**:
```
-.claude/skills/create-mr/SKILL.md
-.claude/skills/fix-issue/SKILL.md
-.claude/skills/sync-team-workflow/SKILL.md
-.claude/skills/init-project/SKILL.md
.claude/skills/push/SKILL.md
.claude/skills/mr/SKILL.md
.claude/skills/create-mr/SKILL.md
.claude/skills/release/SKILL.md
.claude/skills/version/SKILL.md
.claude/skills/fix-issue/SKILL.md
```
-#### 4-4. Git Hooks (덮어쓰기 + 실행 권한)
-`commit-msg`, `post-checkout`은 **항상 팀 표준으로 교체** (팀 커뮤니케이션 규칙 + 인프라).
-`pre-commit`은 `.claude/workflow-version.json`의 `custom_pre_commit` 플래그를 확인:
-- `"custom_pre_commit": true` → pre-commit 건너뜀 (프로젝트 커스텀 유지), "⚠️ pre-commit은 프로젝트 커스텀 유지" 로그
-- 플래그 없거나 false → 팀 표준으로 교체
-```bash
-chmod +x .githooks/*
-```
-#### 4-5. Hook 스크립트 갱신
-init-project SKILL.md의 코드 블록에서 최신 스크립트를 추출하여 덮어쓰기:
**Hook 스크립트**:
```
.claude/scripts/on-pre-compact.sh
.claude/scripts/on-post-compact.sh
.claude/scripts/on-commit.sh
```
-실행 권한 부여: `chmod +x .claude/scripts/*.sh`
-### 5. 로컬 버전 업데이트
-`.claude/workflow-version.json` 갱신:
-```json
-{
-"applied_global_version": "새버전",
-"applied_date": "오늘날짜",
-"project_type": "감지된타입",
-"gitea_url": "https://gitea.gc-si.dev"
-}
-```
**Git Hooks** (commit-msg, post-checkout은 항상 교체):
```
.githooks/commit-msg
.githooks/post-checkout
```
다운로드 예시:
```bash
curl -sf "${GITEA_URL}/gc/template-common/raw/branch/develop/.claude/rules/team-policy.md" -o ".claude/rules/team-policy.md"
```
#### 3-2. template-{type} 파일 (타입별 덮어쓰기)
```
.claude/rules/code-style.md
.claude/rules/naming.md
.claude/rules/testing.md
```
**pre-commit hook**:
`.claude/workflow-version.json``custom_pre_commit` 플래그 확인:
- `"custom_pre_commit": true` → pre-commit 건너뜀, "⚠️ pre-commit은 프로젝트 커스텀 유지" 로그
- 플래그 없거나 false → `.githooks/pre-commit` 교체
다운로드 예시:
```bash
curl -sf "${GITEA_URL}/gc/template-${PROJECT_TYPE}/raw/branch/develop/.claude/rules/code-style.md" -o ".claude/rules/code-style.md"
```
#### 3-3. 실행 권한 부여
```bash
chmod +x .githooks/* 2>/dev/null
chmod +x .claude/scripts/*.sh 2>/dev/null
```
### 4. settings.json 부분 머지
⚠️ settings.json은 **타입별 템플릿**에서 다운로드 (template-common에는 없음):
```bash
SERVER_SETTINGS=$(curl -sf "${GITEA_URL}/gc/template-${PROJECT_TYPE}/raw/branch/develop/.claude/settings.json")
```
다운로드한 최신 settings.json과 로컬 `.claude/settings.json`을 비교하여 부분 갱신:
- `env`: 서버 최신으로 교체
- `deny` 목록: 서버 최신으로 교체
- `allow` 목록: 기존 사용자 커스텀 유지 + 서버 기본값 병합
- `hooks`: 서버 최신으로 교체
### 5. workflow-version.json 갱신
서버의 최신 `workflow-version.json` 조회:
```bash
SERVER_VER=$(curl -sf "${GITEA_URL}/gc/template-common/raw/branch/develop/workflow-version.json")
SERVER_VERSION=$(echo "$SERVER_VER" | python3 -c "import sys,json; print(json.load(sys.stdin).get('version',''))")
```
`.claude/workflow-version.json` 업데이트:
```json
{
"applied_global_version": "<서버 version>",
"applied_date": "<현재날짜>",
"project_type": "<프로젝트타입>",
"gitea_url": "<GITEA_URL>"
}
```
기존 필드(`custom_pre_commit` 등)는 보존.
### 6. 변경 보고
-- `git diff`로 변경 내역 확인
-- 업데이트된 파일 목록 출력
-- 변경 로그(글로벌 workflow-version.json의 changes) 표시
-- 필요한 추가 조치 안내 (빌드 확인, 의존성 업데이트 등)
- 다운로드/갱신된 파일 목록 출력
- 서버 `workflow-version.json`의 `changes` 중 최신 항목 표시
- 결과 형태:
```
✅ 팀 워크플로우 동기화 완료
버전: v1.6.0
갱신 파일: 22개 (rules 7, agents 3, skills 6, scripts 3, hooks 3)
settings.json: 부분 갱신 (env, deny, hooks)
```
## 필요 환경변수
없음 (Gitea raw URL은 인증 불필요)
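4단계의 settings.json 부분 머지 규칙(env/deny/hooks는 서버 값으로 교체, allow는 로컬 커스텀 + 서버 기본값 병합)을 파이썬으로 스케치하면 다음과 같다. 함수명과 딕셔너리 구조는 설명용 가정이며, 실제 settings.json 스키마의 다른 필드 처리 방식은 이 스케치가 정의하지 않는다.

```python
def merge_settings(local, server):
    """settings.json 부분 갱신 규칙의 최소 스케치.
    - env, permissions.deny, hooks: 서버 최신으로 교체
    - permissions.allow: 로컬 커스텀 유지 + 서버 기본값 병합(순서 보존, 중복 제거)"""
    merged = dict(local)
    merged["env"] = server.get("env", {})
    perms = dict(local.get("permissions", {}))
    perms["deny"] = server.get("permissions", {}).get("deny", [])
    local_allow = local.get("permissions", {}).get("allow", [])
    server_allow = server.get("permissions", {}).get("allow", [])
    perms["allow"] = list(dict.fromkeys(local_allow + server_allow))
    merged["permissions"] = perms
    if "hooks" in server:
        merged["hooks"] = server["hooks"]
    return merged
```

`dict.fromkeys`를 쓰면 삽입 순서를 유지한 채 중복만 제거되므로, 로컬 사용자 커스텀 항목이 목록 앞쪽에 그대로 남는다.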


@@ -1,6 +1,6 @@
{
- "applied_global_version": "1.5.0",
- "applied_date": "2026-03-01",
  "applied_global_version": "1.6.1",
  "applied_date": "2026-03-08",
  "project_type": "java-maven",
  "gitea_url": "https://gitea.gc-si.dev"
}
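위 버전 필드 갱신(1.5.0 → 1.6.1)은 sync 절차의 "기존 필드(`custom_pre_commit` 등)는 보존" 규칙을 따른다. 대략 다음과 같은 보존형 갱신이다 (함수명은 예시).

```python
def update_workflow_version(current, server_version, today):
    """applied_global_version/applied_date만 갱신하고
    custom_pre_commit 등 나머지 필드는 그대로 보존한다."""
    updated = dict(current)
    updated["applied_global_version"] = server_version
    updated["applied_date"] = today
    return updated
```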


@@ -109,8 +109,8 @@ jobs:
echo "--- Starting service ---"
systemctl start signal-batch
-# 5단계: 기동 확인 (최대 90초 — 64GB 힙 AlwaysPreTouch)
-for i in $(seq 1 90); do
# 5단계: 기동 확인 (최대 180초 — 64GB 힙 AlwaysPreTouch + 캐시 워밍업)
for i in $(seq 1 180); do
if curl -sf "$BASE_URL/actuator/health/liveness" > /dev/null 2>&1; then
echo "Service started successfully (${i}s)"
curl -s "$BASE_URL/actuator/health"


@@ -4,5 +4,139 @@
## [Unreleased]
## [2026-03-27.3]
### 추가
- 비정상 궤적 포함 저장 플래그 (`include-abnormal-in-tracks`) — 강화학습 데이터 수집용
### 수정
- REST API 경로 client_id 수집 누락 수정 — JWT 쿠키 파싱 공용 메서드 추출
## [2026-03-27.2]
### 수정
- Top 클라이언트 IP/ID 토글 활성 상태 구분 및 표시 오류 수정
- 쿼리 이력(메트릭 페이지)에 사용자 ID 컬럼 추가
## [2026-03-27]
### 추가
- WebSocket 리플레이 쿼리 L1/L2 캐시 통합 — HOURLY/5MIN 구간 DB 의존 제거, 당일 쿼리 100% 캐시
- 쿼리 메트릭 사용자 ID 수집 — GC_SESSION JWT에서 인증된 사용자 email 추출
- 대시보드 Top 클라이언트 IP/ID 토글 — groupBy 파라미터로 IP 기준 또는 사용자 ID 기준 전환
### 수정
- vessel info SQL 컬럼명 오류 수정 (ship_nm → name) — 선박 정보 조회 실패("bad SQL grammar") 해결
## [2026-03-19]
### 변경
- CI/CD 배포 health check 대기 90초→180초 확장 — 64GB 힙 기동 타임아웃 대응
### 기타
-- settings.json에 CLAUDE_BOT_TOKEN 환경변수 추가
- AIS API 접속 계정 변경
## [2026-03-18]
### 수정
- AIS Import Job 스케줄 :15초→:45초 변경 — API 서버 데이터 적재 타이밍 변경으로 빈 응답(0건) 빈발 대응
## [2026-03-17]
### 추가
- 최근 선박 위치 상세 조회 API (`POST /api/v1/vessels/recent-positions-detail`) — 공간 필터(폴리곤/원) + AIS 상세 필드(callSign, status, destination, eta, draught, length, width)
### 변경
- AIS API WebClient 버퍼 50MB→100MB 확장 — 피크 시 DataBufferLimitException 대응
## [2026-03-13]
### 추가
- 다중구역/STS API 최적화 — AreaSearch/VesselContact 동시성·메모리 관리 통합, 순차 통과 SQL 동적 N-구역(2~10) 확장, chnPrmShipOnly 파라미터 추가
### 변경
- 성능 최적화 — ArrayList 사전 할당, JTS Coordinate 재사용, equirectangular 거리 근사, stream→단일 루프 전환
- DataPipeline 대시보드 차트 시각화 개선
## [2026-03-10]
### 추가
- 쿼리 메트릭 수집 확장 + 대시보드 성능 차트 — client IP 수집(REST/WS), 응답 크기 추정, timeseries API, 대시보드 쿼리 성능 차트 5종(응답시간·볼륨·캐시경로·응답크기·Top 클라이언트)
- API/WS 쿼리 메트릭 이력 조회 기능 — BufferService(batch flush) + /history, /summary API + 프론트엔드 요약카드·필터·페이지네이션
## [2026-03-09]
### 수정
- queryWithCache 단일 소스(DB/캐시) 응답 소실 버그 수정 — mergeTracksByVessel() 참조 공유 시 allTracks.clear()로 결과 파괴
### 변경
- 운영 로그 레벨 정리 — CACHE-MONITOR 루틴 로그(putAll/get) DEBUG 전환, 중요 이벤트(removeRange/simplify) INFO 유지
- Spring Batch/HikariCP 로그 INFO→WARN 하향
### 기타
- t_vessel_tracks_daily 파티션 영구 보존 설정 추가 (기본 3개월→무한)
## [2026-03-08]
### 추가
- L3 Daily 캐시 DP(Douglas-Peucker) 사전 간소화 — tolerance 0.001(~100m)로 직선 구간 제거, 방향 변화 보존
- Daily 캐시 인메모리 보관 기간 7일→14일 확대 (maxMemory 6→10GB)
- 간소화 후 Haversine 기반 속도 재계산 (recalculateSpeeds)
### 변경
- Query DataSource: work_mem 256MB + synchronous_commit off 세션 튜닝
- Batch DataSource: synchronous_commit off 세션 튜닝
### 기타
- 팀 워크플로우 v1.5.0→v1.6.1 동기화
## [2026-03-02]
### 추가
- React 19 SPA Dashboard (7페이지: Dashboard, JobMonitor, DataPipeline, AreaStats, ApiExplorer, AbnormalTracks, ApiMetrics)
- 다계층 인메모리 캐시(L1/L2/L3) 조회 통합 + CACHE-MONITOR 로그
- Ship-GIS 기능 이관 — 최근위치/선박항적/뷰포트 리플레이
- 다중구역이동 항적 분석 + STS 접촉 분석 프론트엔드 이관
- 구역분석/STS 보고서 모달 + 이미지 저장
- 항적/리플레이 선종 아이콘 + Raw Data 패널
- DataPipeline 일별 차트 시각화 개선 — Stacked Bar + Duration Bar
- ChnPrmShip 전용 DB 이력 + API enrichment + ShipImage V2
- 중국허가선박 최신 위치 조회 API
- recent-positions IMO 필드 + 선박사진 보유 목록 API + 사진 enrichment
- Stale 데이터 비정상 궤적 전환 — 과거 timestamp 수신 시 정보 보존
- L1/L2/L3 캐시 O(1) 키 기반 직접 조회 (전체 스캔 O(n) 대체)
- 64GB JVM 메모리 예산 논리적 파티셔닝 (캐시 35GB / 쿼리 20GB / 시스템 9GB)
- L2 HourlyTrackCache 6시간 경과 엔트리 Nth-point 간소화 스케줄러
- 메모리 예산 모니터링 API (`GET /api/monitoring/cache/budget`)
### 수정
- cancelQuery idempotent 처리 — 완료된 쿼리 취소 시 에러 대신 정상 응답
- parseTimestamp 실패 로깅 추가, isNightTimeContact 야간 판정 로직 단순화
- ST_AsText WKT 공백 불일치로 인한 daily merge 전량 필터 수정
- L2 워밍업 범위 확장 — Daily Job 전 기동 시 어제 데이터 포함
- html2canvas oklch/oklab 색상 파싱 에러 수정
- 항적 조회 500 에러 + 리플레이 쿼리 무반응 수정
- shipimg 경로 충돌 수정 — /{imo} 숫자 패턴 제약 추가
- UTC 타임존 변환 + Daily 캐시 부분 fallback 추가
- V2 캐시 조회 시 누락 MMSI DB fallback 추가
- 캐시 maxSize 설정 경로 수정 — application.yml이 실제 소스
- 해구 통계 ROUND 함수 타입 캐스팅 오류 수정
- 해구 조회 ST_Contains 제거 — 바운딩 박스 조인으로 간소화
- Dashboard API 연동 오류 수정 — 캐시 모니터링 + 렌더링 안전성
- MonitoringController 레거시 타일 쿼리 → AIS 위치/항적 기반 전환
### 변경
- SignalKindCode 매핑 규칙 개선 — aton/tug/tender→DEFAULT, shipName BUOY 검출 추가
- 응답 경로 signal_kind_code 치환 1회화 — 캐시 저장 시 치환, 응답 시 DB/캐시 값 직접 사용
- ChunkedTrackStreamingService 전수 최적화 — isQueryCancelled 버그수정, QueryContext 스레드 안전성, 쿼리 메트릭 DB 저장, 데드코드 400줄 삭제, VesselInfo N+1 해소
- API 응답 크기 최적화 — gzip 압축, NON_NULL, 정밀도 제한
- API 응답 최적화 + 점진적 렌더링 + 해구 choropleth 지도
- Hourly Job 인메모리 병합 전환 — N+1 SQL 제거
- Daily Job 인메모리 캐시 기반 최적화 — N+1 SQL 제거
- L1/L2 캐시 maxSize 실측 기반 상향 (L2 3.5M→7M)
- SNP API 전환 및 레거시 코드 전면 정리
### 기타
- Gitea Actions CI/CD 파이프라인 + systemd 서비스 구성
- 팀 워크플로우 v1.2.0→v1.5.0 동기화
- Swagger UI 현행화 — 서버 URL, DTO @Schema, @Parameter


@@ -6,6 +6,10 @@ import type {
  HaeguStat,
  MetricsSummary,
  ProcessingDelay,
  QueryMetricsPage,
  QueryMetricsParams,
  QueryMetricsSummary,
  QueryMetricsTimeSeries,
  ThroughputMetrics,
} from './types.ts'
@@ -45,4 +49,26 @@ export const monitorApi = {
  getHaeguStats(): Promise<Record<string, unknown>[]> {
    return fetchJson('/admin/haegu/stats')
  },
getQueryMetricsHistory(params: QueryMetricsParams): Promise<QueryMetricsPage> {
const qs = new URLSearchParams()
if (params.queryType) qs.set('queryType', params.queryType)
if (params.dataPath) qs.set('dataPath', params.dataPath)
if (params.status) qs.set('status', params.status)
if (params.elapsedMsMin != null) qs.set('elapsedMsMin', String(params.elapsedMsMin))
if (params.elapsedMsMax != null) qs.set('elapsedMsMax', String(params.elapsedMsMax))
qs.set('page', String(params.page ?? 0))
qs.set('size', String(params.size ?? 20))
qs.set('sortBy', params.sortBy ?? 'created_at')
qs.set('sortDir', params.sortDir ?? 'desc')
return fetchJson(`/api/monitoring/query-metrics/history?${qs}`)
},
getQueryMetricsSummary(hours = 24): Promise<QueryMetricsSummary> {
return fetchJson(`/api/monitoring/query-metrics/summary?hours=${hours}`)
},
getQueryMetricsTimeSeries(days = 7, groupBy: 'ip' | 'id' = 'ip'): Promise<QueryMetricsTimeSeries> {
return fetchJson(`/api/monitoring/query-metrics/timeseries?days=${days}&groupBy=${groupBy}`)
},
}


@@ -187,6 +187,97 @@ export interface ThroughputMetrics {
  partitionSizes: PartitionSize[]
}
/* Query Metrics (쿼리 이력) */
export interface QueryMetricRow {
query_id: string
query_type: string
created_at: string
data_path: string
status: string
zoom_level: number | null
requested_mmsi: number
unique_vessels: number
total_points: number
points_after_simplify: number
total_chunks: number
response_bytes: number
elapsed_ms: number
db_query_ms: number
simplify_ms: number
cache_hit_days: number
db_query_days: number
client_ip: string | null
client_id: string | null
}
export interface QueryMetricsPage {
content: QueryMetricRow[]
totalElements: number
totalPages: number
currentPage: number
pageSize: number
}
export interface QueryMetricsSummary {
total_queries: number
avg_elapsed_ms: number
p95_elapsed_ms: number
max_elapsed_ms: number
ws_count: number
rest_count: number
cache_only_count: number
db_only_count: number
hybrid_count: number
completed_count: number
failed_count: number
avg_vessels: number
avg_points_before: number
avg_points_after: number
avg_response_size_bytes: number
}
/* Query Metrics TimeSeries */
export interface TimeSeriesBucket {
bucket: string
query_count: number
avg_elapsed_ms: number
max_elapsed_ms: number
avg_response_bytes: number
ws_count: number
rest_count: number
cache_count: number
db_count: number
hybrid_count: number
}
export interface TopClient {
client: string
client_ip?: string
query_count: number
avg_elapsed_ms: number
}
export interface QueryMetricsTimeSeries {
buckets: TimeSeriesBucket[]
topClients: TopClient[]
granularity: 'HOURLY' | 'DAILY'
groupBy?: 'ip' | 'id'
}
export interface QueryMetricsParams {
queryType?: string
dataPath?: string
status?: string
elapsedMsMin?: number
elapsedMsMax?: number
page?: number
size?: number
sortBy?: string
sortDir?: 'asc' | 'desc'
}
/* Monitor — Data Quality */
export interface DataQuality {


@@ -21,6 +21,7 @@ interface LineChartProps {
  xKey: string
  height?: number
  label?: string
  yFormatter?: (value: number) => string
}
export default function LineChart({
@@ -29,6 +30,7 @@ export default function LineChart({
  xKey,
  height = 240,
  label,
  yFormatter,
}: LineChartProps) {
  return (
    <div>
@@ -46,6 +48,7 @@
  tick={{ fontSize: 12, fill: 'var(--sb-text-muted)' }}
  axisLine={false}
  tickLine={false}
  tickFormatter={yFormatter}
/>
<Tooltip
  contentStyle={{
@@ -54,6 +57,7 @@
    borderRadius: 'var(--sb-radius)',
    fontSize: 12,
  }}
  formatter={yFormatter ? (v: number) => yFormatter(v) : undefined}
/>
{series.length > 1 && (
  <Legend


@@ -16,6 +16,10 @@ interface DataTableProps<T> {
  onRowClick?: (row: T) => void
  emptyMessage?: string
  pageSize?: number
  // Server-side pagination (optional)
  totalElements?: number
  currentPage?: number
  onPageChange?: (page: number) => void
}
export default function DataTable<T>({
@@ -25,14 +29,19 @@ export default function DataTable<T>({
  onRowClick,
  emptyMessage,
  pageSize = 20,
  totalElements,
  currentPage,
  onPageChange,
}: DataTableProps<T>) {
  const { t } = useI18n()
  const [sortKey, setSortKey] = useState<string | null>(null)
  const [sortAsc, setSortAsc] = useState(true)
  const [page, setPage] = useState(0)
  const isServerSide = totalElements != null && currentPage != null && onPageChange != null
const sorted = useMemo(() => {
-  if (!sortKey) return data
  if (isServerSide || !sortKey) return data
  return [...data].sort((a, b) => {
    const av = (a as Record<string, unknown>)[sortKey]
    const bv = (b as Record<string, unknown>)[sortKey]
@@ -40,10 +49,12 @@
    const cmp = av < bv ? -1 : av > bv ? 1 : 0
    return sortAsc ? cmp : -cmp
  })
-}, [data, sortKey, sortAsc])
}, [data, sortKey, sortAsc, isServerSide])
-const totalPages = Math.ceil(sorted.length / pageSize)
-const paged = sorted.slice(page * pageSize, (page + 1) * pageSize)
const effectivePage = isServerSide ? currentPage! : page
const total = isServerSide ? totalElements! : sorted.length
const totalPages = Math.ceil(total / pageSize)
const paged = isServerSide ? sorted : sorted.slice(effectivePage * pageSize, (effectivePage + 1) * pageSize)
const handleSort = (key: string) => { const handleSort = (key: string) => {
if (sortKey === key) { if (sortKey === key) {
@ -54,6 +65,14 @@ export default function DataTable<T>({
} }
} }
const handlePageChange = (newPage: number) => {
if (isServerSide) {
onPageChange!(newPage)
} else {
setPage(newPage)
}
}
return ( return (
<div> <div>
<div className="sb-table-wrapper"> <div className="sb-table-wrapper">
@ -67,7 +86,7 @@ export default function DataTable<T>({
style={{ textAlign: col.align ?? 'left', cursor: col.sortable !== false ? 'pointer' : 'default' }} style={{ textAlign: col.align ?? 'left', cursor: col.sortable !== false ? 'pointer' : 'default' }}
> >
{col.label} {col.label}
{sortKey === col.key && (sortAsc ? ' \u25B2' : ' \u25BC')} {sortKey === col.key && (sortAsc ? ' ▲' : ' ▼')}
</th> </th>
))} ))}
</tr> </tr>
@ -102,19 +121,19 @@ export default function DataTable<T>({
{totalPages > 1 && ( {totalPages > 1 && (
<div className="mt-3 flex items-center justify-between text-sm text-muted"> <div className="mt-3 flex items-center justify-between text-sm text-muted">
<span> <span>
{sorted.length}{t('common.items')} {t('common.of')} {page * pageSize + 1}-{Math.min((page + 1) * pageSize, sorted.length)} {total}{t('common.items')} {t('common.of')} {effectivePage * pageSize + 1}-{Math.min((effectivePage + 1) * pageSize, total)}
</span> </span>
<div className="flex gap-1"> <div className="flex gap-1">
<button <button
onClick={() => setPage(p => Math.max(0, p - 1))} onClick={() => handlePageChange(Math.max(0, effectivePage - 1))}
disabled={page === 0} disabled={effectivePage === 0}
className="rounded border border-border px-2 py-1 disabled:opacity-40" className="rounded border border-border px-2 py-1 disabled:opacity-40"
> >
{t('common.prev')} {t('common.prev')}
</button> </button>
<button <button
onClick={() => setPage(p => Math.min(totalPages - 1, p + 1))} onClick={() => handlePageChange(Math.min(totalPages - 1, effectivePage + 1))}
disabled={page >= totalPages - 1} disabled={effectivePage >= totalPages - 1}
className="rounded border border-border px-2 py-1 disabled:opacity-40" className="rounded border border-border px-2 py-1 disabled:opacity-40"
> >
{t('common.next')} {t('common.next')}
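The new props make DataTable dual-mode: when `totalElements`, `currentPage`, and `onPageChange` are all supplied, `data` is treated as one pre-sliced server page and sorting/slicing are skipped locally; otherwise the table paginates client-side as before. A self-contained sketch of that contract (the `paginate` helper is illustrative, not part of the component):

```typescript
// Minimal model of DataTable's page math in both modes.
interface PageState<T> {
  paged: T[]
  totalPages: number
  effectivePage: number
}

function paginate<T>(
  data: T[],
  pageSize: number,
  clientPage: number,
  totalElements?: number,
  currentPage?: number,
): PageState<T> {
  // server mode is detected by the presence of both server-side props
  const isServerSide = totalElements != null && currentPage != null
  const effectivePage = isServerSide ? currentPage : clientPage
  const total = isServerSide ? totalElements : data.length
  return {
    effectivePage,
    totalPages: Math.ceil(total / pageSize),
    // server mode: `data` already holds exactly the current page
    paged: isServerSide ? data : data.slice(effectivePage * pageSize, (effectivePage + 1) * pageSize),
  }
}
```

Note the pager buttons still compute `totalPages` from `totalElements`, so the server only needs to return the page content plus a total count.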

View file

@@ -49,6 +49,16 @@ const en = {
   'dashboard.hits': 'Hits',
   'dashboard.misses': 'Misses',
   'dashboard.dailyVolume': 'Daily Processing Volume',
+  'dashboard.queryPerformance': 'Query Performance',
+  'dashboard.responseTimeTrend': 'Response Time Trend',
+  'dashboard.queryVolume': 'Query Volume',
+  'dashboard.cachePathRatio': 'Cache Path Ratio',
+  'dashboard.responseSizeTrend': 'Response Size Trend',
+  'dashboard.topClients': 'Top Clients',
+  'dashboard.avgElapsed': 'Avg',
+  'dashboard.maxElapsed': 'Max',
+  'dashboard.queries': 'queries',
+  'dashboard.noChartData': 'No chart data available',

   // Job Monitor
   'jobs.title': 'Job Monitor',
@@ -170,8 +180,26 @@ const en = {
   'metrics.cacheHitSummary': 'Cache Hit Summary',
   'metrics.hits': 'Hits',
   'metrics.misses': 'Misses',
-  'metrics.dbMetricsPlaceholder': 'API/WS History Metrics (Coming Soon)',
-  'metrics.dbMetricsDesc': 'REST/WebSocket request history, response sizes, latency DB storage + query',
+  'metrics.queryHistory': 'Query History',
+  'metrics.totalQueries': 'Total Queries',
+  'metrics.avgElapsed': 'Avg Response',
+  'metrics.p95Elapsed': 'P95 Response',
+  'metrics.cacheHitRate': 'Cache Hit Rate',
+  'metrics.queryType': 'Type',
+  'metrics.dataPath': 'Path',
+  'metrics.queryStatus': 'Status',
+  'metrics.queryTime': 'Time',
+  'metrics.vessels': 'Vessels',
+  'metrics.pointsBefore': 'Points(Before)',
+  'metrics.pointsAfter': 'Points(After)',
+  'metrics.simplification': 'Reduction',
+  'metrics.chunks': 'Chunks',
+  'metrics.elapsed': 'Elapsed',
+  'metrics.allTypes': 'All',
+  'metrics.allPaths': 'All',
+  'metrics.resetFilters': 'Reset Filters',
+  'metrics.responseSize': 'Size',
+  'metrics.clientIp': 'IP',

   // Time Range
   'range.1d': '1D',

View file

@@ -49,6 +49,16 @@ const ko = {
   'dashboard.hits': '히트',
   'dashboard.misses': '미스',
   'dashboard.dailyVolume': '일별 처리량',
+  'dashboard.queryPerformance': '쿼리 성능',
+  'dashboard.responseTimeTrend': '응답시간 추이',
+  'dashboard.queryVolume': '쿼리 볼륨',
+  'dashboard.cachePathRatio': '캐시/경로 비율',
+  'dashboard.responseSizeTrend': '응답 크기 추이',
+  'dashboard.topClients': 'Top 클라이언트',
+  'dashboard.avgElapsed': '평균',
+  'dashboard.maxElapsed': '최대',
+  'dashboard.queries': '건',
+  'dashboard.noChartData': '차트 데이터가 없습니다',

   // Job Monitor
   'jobs.title': 'Job 모니터',
@@ -170,8 +180,26 @@ const ko = {
   'metrics.cacheHitSummary': '캐시 히트 요약',
   'metrics.hits': '히트',
   'metrics.misses': '미스',
-  'metrics.dbMetricsPlaceholder': 'API/WS 이력 메트릭 (향후 구현)',
-  'metrics.dbMetricsDesc': 'REST/WebSocket 요청 이력, 응답 크기, 소요시간 DB 저장 + 조회',
+  'metrics.queryHistory': '쿼리 이력',
+  'metrics.totalQueries': '총 쿼리',
+  'metrics.avgElapsed': '평균 응답',
+  'metrics.p95Elapsed': 'P95 응답',
+  'metrics.cacheHitRate': '캐시 적중률',
+  'metrics.queryType': '유형',
+  'metrics.dataPath': '경로',
+  'metrics.queryStatus': '상태',
+  'metrics.queryTime': '시각',
+  'metrics.vessels': '선박',
+  'metrics.pointsBefore': '포인트(전)',
+  'metrics.pointsAfter': '포인트(후)',
+  'metrics.simplification': '간소화',
+  'metrics.chunks': '청크',
+  'metrics.elapsed': '응답시간',
+  'metrics.allTypes': '전체',
+  'metrics.allPaths': '전체',
+  'metrics.resetFilters': '필터 초기화',
+  'metrics.responseSize': '응답 크기',
+  'metrics.clientIp': 'IP',

   // Time Range
   'range.1d': '1일',

View file

@@ -1,12 +1,22 @@
+import { useState, useCallback } from 'react'
 import { usePoller } from '../hooks/usePoller.ts'
 import { useCachedState } from '../hooks/useCachedState.ts'
 import { useI18n } from '../hooks/useI18n.ts'
 import { monitorApi } from '../api/monitorApi.ts'
-import type { MetricsSummary, CacheStats, ProcessingDelay, CacheDetails } from '../api/types.ts'
+import type { MetricsSummary, CacheStats, ProcessingDelay, CacheDetails, QueryMetricsPage, QueryMetricsSummary, QueryMetricsParams, QueryMetricRow } from '../api/types.ts'
 import MetricCard from '../components/charts/MetricCard.tsx'
-import { formatNumber } from '../utils/formatters.ts'
+import DataTable, { type Column } from '../components/common/DataTable.tsx'
+import { formatNumber, formatBytes } from '../utils/formatters.ts'

 const POLL_INTERVAL = 10_000
+const QUERY_POLL_INTERVAL = 30_000
+
+const ELAPSED_RANGES = [
+  { label: '< 1s', min: undefined, max: 999 },
+  { label: '1-5s', min: 1000, max: 5000 },
+  { label: '5-30s', min: 5000, max: 30000 },
+  { label: '> 30s', min: 30000, max: undefined },
+] as const

 export default function ApiMetrics() {
   const { t } = useI18n()
@@ -15,6 +25,13 @@ export default function ApiMetrics() {
   const [cacheDetails, setCacheDetails] = useCachedState<CacheDetails | null>('api.cacheDetail', null)
   const [delay, setDelay] = useCachedState<ProcessingDelay | null>('api.delay', null)

+  // Query History state
+  const [filter, setFilter] = useState<QueryMetricsParams>({
+    page: 0, size: 20, sortBy: 'created_at', sortDir: 'desc',
+  })
+  const [historyData, setHistoryData] = useState<QueryMetricsPage | null>(null)
+  const [summaryData, setSummaryData] = useState<QueryMetricsSummary | null>(null)
+
   usePoller(() => {
     monitorApi.getMetricsSummary().then(setMetrics).catch(() => {})
     monitorApi.getCacheStats().then(setCache).catch(() => {})
@@ -22,10 +39,109 @@ export default function ApiMetrics() {
     monitorApi.getDelay().then(setDelay).catch(() => {})
   }, POLL_INTERVAL)

+  const fetchQueryData = useCallback(() => {
+    monitorApi.getQueryMetricsHistory(filter).then(setHistoryData).catch(() => {})
+    monitorApi.getQueryMetricsSummary(24).then(setSummaryData).catch(() => {})
+  }, [filter])
+
+  usePoller(fetchQueryData, QUERY_POLL_INTERVAL, [filter])
+
+  const updateFilter = (patch: Partial<QueryMetricsParams>) => {
+    setFilter(prev => ({ ...prev, page: 0, ...patch }))
+  }
+
+  const resetFilters = () => {
+    setFilter({ page: 0, size: 20, sortBy: 'created_at', sortDir: 'desc' })
+  }
+
   const memUsed = metrics?.memory.used ?? 0
   const memMax = metrics?.memory.max ?? 1
   const memPct = Math.round((memUsed / memMax) * 100)

+  // Summary computed values
+  const totalQueries = summaryData?.total_queries ?? 0
+  const cacheHitRate = totalQueries > 0
+    ? ((summaryData?.cache_only_count ?? 0) / totalQueries * 100).toFixed(1)
+    : '0.0'
+
+  const historyColumns: Column<QueryMetricRow>[] = [
+    {
+      key: 'created_at', label: t('metrics.queryTime'), sortable: false,
+      render: (row) => {
+        if (!row.created_at) return '-'
+        const d = new Date(row.created_at)
+        // UTC → KST (+9h)
+        const kst = new Date(d.getTime() + 9 * 60 * 60 * 1000)
+        const mm = String(kst.getUTCMonth() + 1).padStart(2, '0')
+        const dd = String(kst.getUTCDate()).padStart(2, '0')
+        const hh = String(kst.getUTCHours()).padStart(2, '0')
+        const mi = String(kst.getUTCMinutes()).padStart(2, '0')
+        const ss = String(kst.getUTCSeconds()).padStart(2, '0')
+        return `${mm}-${dd} ${hh}:${mi}:${ss}`
+      },
+    },
+    {
+      key: 'query_type', label: t('metrics.queryType'), sortable: false,
+      render: (row) => {
+        const isWs = row.query_type === 'WEBSOCKET'
+        return <span className={`inline-block rounded px-1.5 py-0.5 text-xs font-medium ${isWs ? 'bg-blue-100 text-blue-700 dark:bg-blue-900 dark:text-blue-300' : 'bg-emerald-100 text-emerald-700 dark:bg-emerald-900 dark:text-emerald-300'}`}>{isWs ? 'WS' : 'REST'}</span>
+      },
+    },
+    {
+      key: 'data_path', label: t('metrics.dataPath'), sortable: false,
+      render: (row) => {
+        const path = row.data_path ?? ''
+        const color = path === 'CACHE' ? 'bg-emerald-100 text-emerald-700 dark:bg-emerald-900 dark:text-emerald-300'
+          : path === 'DB' ? 'bg-amber-100 text-amber-700 dark:bg-amber-900 dark:text-amber-300'
+          : 'bg-violet-100 text-violet-700 dark:bg-violet-900 dark:text-violet-300'
+        return <span className={`inline-block rounded px-1.5 py-0.5 text-xs font-medium ${color}`}>{path}</span>
+      },
+    },
+    {
+      key: 'status', label: t('metrics.queryStatus'), sortable: false,
+      render: (row) => {
+        const ok = row.status === 'COMPLETED'
+        return <span className={`inline-block rounded px-1.5 py-0.5 text-xs font-medium ${ok ? 'bg-emerald-100 text-emerald-700 dark:bg-emerald-900 dark:text-emerald-300' : 'bg-red-100 text-red-700 dark:bg-red-900 dark:text-red-300'}`}>{row.status}</span>
+      },
+    },
+    { key: 'unique_vessels', label: t('metrics.vessels'), align: 'right' as const, sortable: false,
+      render: (row) => formatNumber(row.unique_vessels) },
+    { key: 'total_points', label: t('metrics.pointsBefore'), align: 'right' as const, sortable: false,
+      render: (row) => formatNumber(row.total_points) },
+    { key: 'points_after_simplify', label: t('metrics.pointsAfter'), align: 'right' as const, sortable: false,
+      render: (row) => formatNumber(row.points_after_simplify) },
+    {
+      key: 'reduction', label: t('metrics.simplification'), align: 'right' as const, sortable: false,
+      render: (row) => {
+        const before = row.total_points || 0
+        const after = row.points_after_simplify || 0
+        if (before === 0) return '-'
+        return `${((1 - after / before) * 100).toFixed(0)}%`
+      },
+    },
+    { key: 'total_chunks', label: t('metrics.chunks'), align: 'right' as const, sortable: false },
+    {
+      key: 'elapsed_ms', label: t('metrics.elapsed'), align: 'right' as const, sortable: false,
+      render: (row) => {
+        const ms = row.elapsed_ms || 0
+        const color = ms < 1000 ? 'text-success' : ms < 5000 ? 'text-warning' : 'text-danger'
+        return <span className={`font-mono font-medium ${color}`}>{ms < 1000 ? `${ms}ms` : `${(ms / 1000).toFixed(1)}s`}</span>
+      },
+    },
+    {
+      key: 'response_bytes', label: t('metrics.responseSize'), align: 'right' as const, sortable: false,
+      render: (row) => row.response_bytes ? formatBytes(row.response_bytes) : '-',
+    },
+    {
+      key: 'client_ip', label: t('metrics.clientIp'), sortable: false,
+      render: (row) => row.client_ip ? <span className="font-mono text-xs">{row.client_ip}</span> : '-',
+    },
+    {
+      key: 'client_id', label: 'ID', sortable: false,
+      render: (row) => row.client_id ? <span className="font-mono text-xs">{row.client_id}</span> : '-',
+    },
+  ]
+
   return (
     <div className="space-y-6 fade-in">
       <h1 className="text-2xl font-bold">{t('metrics.title')}</h1>
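The elapsed-time `<select>` in this page encodes each range as a `"min-max"` string (an open bound becomes an empty side) and decodes it by matching against `ELAPSED_RANGES`. A standalone sketch of that round trip (`toValue`/`fromValue` are illustrative names):

```typescript
// Same shape as ELAPSED_RANGES above; open bounds are undefined.
const RANGES = [
  { label: '< 1s', min: undefined, max: 999 },
  { label: '1-5s', min: 1000, max: 5000 },
  { label: '5-30s', min: 5000, max: 30000 },
  { label: '> 30s', min: 30000, max: undefined },
]

// Serialize a range to its <option> value: "" stands in for an open bound.
const toValue = (r: { min?: number; max?: number }) => `${r.min ?? ''}-${r.max ?? ''}`

// Recover the range object from a selected option value.
function fromValue(value: string) {
  return RANGES.find(r => toValue(r) === value)
}
```

The same encoding is what the `onChange` handler reverses before calling `updateFilter` with `elapsedMsMin`/`elapsedMsMax`.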
@@ -178,12 +294,114 @@ export default function ApiMetrics() {
         </div>
       </div>

-      {/* Placeholder for future DB-based metrics */}
-      <div className="sb-card border-dashed">
-        <div className="py-6 text-center text-sm text-muted">
-          <p>{t('metrics.dbMetricsPlaceholder')}</p>
-          <p className="mt-1 text-xs opacity-60">{t('metrics.dbMetricsDesc')}</p>
+      {/* Query History Section */}
+      <div className="sb-card">
+        <div className="sb-card-header">{t('metrics.queryHistory')}</div>
+
+        {/* Summary Cards */}
+        <div className="mb-4 grid grid-cols-2 gap-3 lg:grid-cols-4">
+          <MetricCard
+            title={t('metrics.totalQueries')}
+            value={summaryData ? formatNumber(totalQueries) : '-'}
+            subtitle={summaryData ? `WS:${summaryData.ws_count} / REST:${summaryData.rest_count}` : undefined}
+          />
+          <MetricCard
+            title={t('metrics.avgElapsed')}
+            value={summaryData ? `${((summaryData.avg_elapsed_ms ?? 0) / 1000).toFixed(1)}s` : '-'}
+          />
+          <MetricCard
+            title={t('metrics.p95Elapsed')}
+            value={summaryData ? `${((summaryData.p95_elapsed_ms ?? 0) / 1000).toFixed(1)}s` : '-'}
+          />
+          <MetricCard
+            title={t('metrics.cacheHitRate')}
+            value={summaryData ? `${cacheHitRate}%` : '-'}
+            subtitle={summaryData ? `C:${summaryData.cache_only_count}/DB:${summaryData.db_only_count}/H:${summaryData.hybrid_count}` : undefined}
+          />
         </div>
+
+        {/* Filters */}
+        <div className="mb-4 flex flex-wrap items-center gap-3 text-sm">
+          {/* Query Type toggle */}
+          <div className="flex items-center gap-1">
+            <span className="text-muted mr-1">{t('metrics.queryType')}:</span>
+            {[undefined, 'WEBSOCKET', 'REST_V2'].map((val) => (
+              <button
+                type="button"
+                key={val ?? 'all'}
+                onClick={() => updateFilter({ queryType: val })}
+                className={`rounded px-2 py-1 text-xs font-medium transition ${
+                  filter.queryType === val
+                    ? 'bg-primary text-white'
+                    : 'bg-surface-secondary text-muted hover:bg-surface-tertiary'
+                }`}
+              >
+                {val == null ? t('metrics.allTypes') : val === 'WEBSOCKET' ? 'WS' : 'REST'}
+              </button>
+            ))}
+          </div>
+
+          {/* Data Path toggle */}
+          <div className="flex items-center gap-1">
+            <span className="text-muted mr-1">{t('metrics.dataPath')}:</span>
+            {[undefined, 'CACHE', 'DB', 'HYBRID'].map((val) => (
+              <button
+                type="button"
+                key={val ?? 'all'}
+                onClick={() => updateFilter({ dataPath: val })}
+                className={`rounded px-2 py-1 text-xs font-medium transition ${
+                  filter.dataPath === val
+                    ? 'bg-primary text-white'
+                    : 'bg-surface-secondary text-muted hover:bg-surface-tertiary'
+                }`}
+              >
+                {val ?? t('metrics.allPaths')}
+              </button>
+            ))}
+          </div>
+
+          {/* Elapsed Time select */}
+          <select
+            title={t('metrics.elapsed')}
+            value={filter.elapsedMsMin != null ? `${filter.elapsedMsMin}-${filter.elapsedMsMax ?? ''}` : ''}
+            onChange={(e) => {
+              if (!e.target.value) {
+                updateFilter({ elapsedMsMin: undefined, elapsedMsMax: undefined })
+              } else {
+                const range = ELAPSED_RANGES.find(r =>
+                  `${r.min ?? ''}-${r.max ?? ''}` === e.target.value
+                )
+                if (range) updateFilter({ elapsedMsMin: range.min, elapsedMsMax: range.max })
+              }
+            }}
+            className="rounded border border-border bg-surface px-2 py-1 text-xs"
+          >
+            <option value="">{t('metrics.elapsed')}: {t('metrics.allTypes')}</option>
+            {ELAPSED_RANGES.map((r) => (
+              <option key={r.label} value={`${r.min ?? ''}-${r.max ?? ''}`}>{r.label}</option>
+            ))}
+          </select>
+
+          {/* Reset */}
+          <button
+            type="button"
+            onClick={resetFilters}
+            className="rounded border border-border px-2 py-1 text-xs text-muted hover:bg-surface-secondary"
+          >
+            {t('metrics.resetFilters')}
+          </button>
+        </div>
+
+        {/* History Table */}
+        <DataTable<QueryMetricRow>
+          columns={historyColumns}
+          data={historyData?.content ?? []}
+          keyExtractor={(row) => row.query_id}
+          pageSize={filter.size ?? 20}
+          totalElements={historyData?.totalElements}
+          currentPage={historyData?.currentPage}
+          onPageChange={(p) => setFilter(prev => ({ ...prev, page: p }))}
+        />
       </div>
     </div>
   )
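The `created_at` column above shifts the epoch by nine hours and then reads only `getUTC*` fields, so the browser's local timezone never affects the rendered KST timestamp. A standalone sketch of the same technique (`formatKst` is an illustrative name; the input is assumed to carry an explicit UTC marker):

```typescript
// Render a UTC ISO timestamp as "MM-DD HH:mm:ss" in KST (UTC+9),
// independent of the host machine's timezone.
function formatKst(iso: string): string {
  const kst = new Date(new Date(iso).getTime() + 9 * 60 * 60 * 1000)
  const p = (n: number) => String(n).padStart(2, '0')
  return `${p(kst.getUTCMonth() + 1)}-${p(kst.getUTCDate())} ` +
    `${p(kst.getUTCHours())}:${p(kst.getUTCMinutes())}:${p(kst.getUTCSeconds())}`
}
```

One caveat worth knowing: if the server sends timestamps without a trailing `Z` or offset, `new Date(...)` may parse them as local time, which would double-shift the result.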

View file

@@ -1,4 +1,4 @@
-import { useState } from 'react'
+import { useState, useCallback } from 'react'
 import { usePoller } from '../hooks/usePoller.ts'
 import { useCachedState } from '../hooks/useCachedState.ts'
 import { useI18n } from '../hooks/useI18n.ts'
@@ -10,11 +10,13 @@ import type {
   DailyStats,
   MetricsSummary,
   ProcessingDelay,
+  QueryMetricsTimeSeries,
   RunningJob,
 } from '../api/types.ts'
 import MetricCard from '../components/charts/MetricCard.tsx'
 import StatusBadge from '../components/common/StatusBadge.tsx'
 import BarChart from '../components/charts/BarChart.tsx'
+import LineChart from '../components/charts/LineChart.tsx'
 import TimeRangeSelector from '../components/common/TimeRangeSelector.tsx'
 import { formatDuration, formatNumber, formatDateTime, formatPercent } from '../utils/formatters.ts'
@@ -28,7 +30,20 @@ export default function Dashboard() {
   const [delay, setDelay] = useCachedState<ProcessingDelay | null>('dash.delay', null)
   const [daily, setDaily] = useCachedState<DailyStats | null>('dash.daily', null)
   const [running, setRunning] = useCachedState<RunningJob[]>('dash.running', [])
+  const [queryTs, setQueryTs] = useCachedState<QueryMetricsTimeSeries | null>('dash.queryTs', null)
   const [days, setDays] = useState(7)
+  const [clientGroupBy, setClientGroupBy] = useState<'ip' | 'id'>('ip')
+  const [isQueryChartsOpen, setIsQueryChartsOpen] = useState(() =>
+    localStorage.getItem('dashboard-query-charts') !== 'collapsed',
+  )
+
+  const toggleQueryCharts = useCallback(() => {
+    setIsQueryChartsOpen(prev => {
+      const next = !prev
+      localStorage.setItem('dashboard-query-charts', next ? 'expanded' : 'collapsed')
+      return next
+    })
+  }, [])

   usePoller(() => {
     batchApi.getStatistics(days).then(setStats).catch(() => {})
@@ -37,7 +52,8 @@ export default function Dashboard() {
     monitorApi.getDelay().then(setDelay).catch(() => {})
     batchApi.getDailyStats().then(setDaily).catch(() => {})
     batchApi.getRunningJobs().then(setRunning).catch(() => {})
-  }, POLL_INTERVAL, [days])
+    monitorApi.getQueryMetricsTimeSeries(days, clientGroupBy).then(setQueryTs).catch(() => {})
+  }, POLL_INTERVAL, [days, clientGroupBy])

   const memUsage = metrics
     ? Math.round((metrics.memory.used / metrics.memory.max) * 100)
@@ -214,6 +230,165 @@ export default function Dashboard() {
         />
       </div>
     )}

+      {/* Query Performance Charts */}
+      <div className="sb-card">
+        <button
+          type="button"
+          className="sb-card-header flex w-full items-center justify-between cursor-pointer"
+          onClick={toggleQueryCharts}
+        >
+          <span>{t('dashboard.queryPerformance')}</span>
+          <svg
+            className={`h-5 w-5 text-muted transition-transform ${isQueryChartsOpen ? 'rotate-180' : ''}`}
+            fill="none" viewBox="0 0 24 24" stroke="currentColor"
+          >
+            <path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M19 9l-7 7-7-7" />
+          </svg>
+        </button>
+        {isQueryChartsOpen && (
+          <div className="space-y-6 pt-2">
+            {queryTs && queryTs.buckets.length > 0 ? (
+              <>
+                {/* Row 1: Response Time + Query Volume */}
+                <div className="grid gap-4 lg:grid-cols-2">
+                  <div>
+                    <LineChart
+                      label={t('dashboard.responseTimeTrend')}
+                      data={queryTs.buckets.map(b => ({
+                        time: formatBucket(b.bucket, queryTs.granularity),
+                        avg: Math.round(b.avg_elapsed_ms),
+                        max: Math.round(b.max_elapsed_ms),
+                      }))}
+                      series={[
+                        { dataKey: 'avg', color: 'var(--sb-primary)', name: t('dashboard.avgElapsed') },
+                        { dataKey: 'max', color: 'var(--sb-danger)', name: t('dashboard.maxElapsed') },
+                      ]}
+                      xKey="time"
+                      height={220}
+                      yFormatter={v => `${v}ms`}
+                    />
+                  </div>
+                  <div>
+                    <BarChart
+                      label={t('dashboard.queryVolume')}
+                      data={queryTs.buckets.map(b => ({
+                        time: formatBucket(b.bucket, queryTs.granularity),
+                        WS: b.ws_count,
+                        REST: b.rest_count,
+                      }))}
+                      xKey="time"
+                      height={220}
+                      series={[
+                        { dataKey: 'WS', color: 'var(--sb-primary)', name: 'WebSocket', stackId: 'q' },
+                        { dataKey: 'REST', color: 'var(--sb-success)', name: 'REST', stackId: 'q' },
+                      ]}
+                    />
+                  </div>
+                </div>
+
+                {/* Row 2: Cache Path + Response Size */}
+                <div className="grid gap-4 lg:grid-cols-2">
+                  <div>
+                    <BarChart
+                      label={t('dashboard.cachePathRatio')}
+                      data={queryTs.buckets.map(b => ({
+                        time: formatBucket(b.bucket, queryTs.granularity),
+                        Cache: b.cache_count,
+                        DB: b.db_count,
+                        Hybrid: b.hybrid_count,
+                      }))}
+                      xKey="time"
+                      height={220}
+                      series={[
+                        { dataKey: 'Cache', color: 'var(--sb-success)', stackId: 'p' },
+                        { dataKey: 'DB', color: 'var(--sb-warning)', stackId: 'p' },
+                        { dataKey: 'Hybrid', color: 'var(--sb-primary)', stackId: 'p' },
+                      ]}
+                    />
+                  </div>
+                  <div>
+                    <LineChart
+                      label={t('dashboard.responseSizeTrend')}
+                      data={queryTs.buckets.map(b => ({
+                        time: formatBucket(b.bucket, queryTs.granularity),
+                        size: Math.round(b.avg_response_bytes / 1024),
+                      }))}
+                      series={[
+                        { dataKey: 'size', color: 'var(--sb-primary)', name: 'KB' },
+                      ]}
+                      xKey="time"
+                      height={220}
+                      yFormatter={v => `${v}KB`}
+                    />
+                  </div>
+                </div>
+
+                {/* Top Clients */}
+                <div>
+                  <div className="mb-2 flex items-center gap-2">
+                    <span className="text-sm font-medium text-muted">{t('dashboard.topClients')}</span>
+                    <div className="flex overflow-hidden rounded-md border border-[var(--border-primary)] text-xs">
+                      <button
+                        type="button"
+                        className={`px-2 py-0.5 transition-colors ${clientGroupBy === 'ip' ? 'bg-[var(--accent-primary)] text-white font-medium' : 'bg-[var(--bg-secondary)] text-[var(--text-secondary)] hover:bg-[var(--bg-hover)]'}`}
+                        onClick={() => setClientGroupBy('ip')}
+                      >IP</button>
+                      <button
+                        type="button"
+                        className={`px-2 py-0.5 transition-colors ${clientGroupBy === 'id' ? 'bg-[var(--accent-primary)] text-white font-medium' : 'bg-[var(--bg-secondary)] text-[var(--text-secondary)] hover:bg-[var(--bg-hover)]'}`}
+                        onClick={() => setClientGroupBy('id')}
+                      >ID</button>
+                    </div>
+                  </div>
+                  {queryTs.topClients.length > 0 ? (
+                    <div className="space-y-2">
+                      {queryTs.topClients.map((c, i) => {
+                        const maxCount = queryTs.topClients[0].query_count
+                        const pct = maxCount > 0 ? (c.query_count / maxCount) * 100 : 0
+                        const label = c.client ?? c.client_ip ?? '-'
+                        return (
+                          <div key={label + i} className="flex items-center gap-3 text-sm">
+                            <span className="w-40 truncate font-mono text-xs" title={label}>{label}</span>
+                            <div className="flex-1">
+                              <div className="h-4 rounded bg-surface-hover">
+                                <div
+                                  className="h-4 rounded bg-primary"
+                                  style={{ width: `${pct}%` }}
+                                />
+                              </div>
+                            </div>
+                            <span className="w-20 text-right text-xs text-muted">
+                              {c.query_count}{t('dashboard.queries')} · {Math.round(c.avg_elapsed_ms)}ms
+                            </span>
+                          </div>
+                        )
+                      })}
+                    </div>
+                  ) : (
+                    <div className="py-4 text-center text-xs text-muted">
+                      {clientGroupBy === 'id' ? '사용자 ID 데이터가 없습니다' : '클라이언트 데이터가 없습니다'}
+                    </div>
+                  )}
+                </div>
+              </>
+            ) : (
+              <div className="py-8 text-center text-sm text-muted">{t('dashboard.noChartData')}</div>
+            )}
+          </div>
+        )}
+      </div>
     </div>
   )
 }
+
+function formatBucket(bucket: string, granularity: 'HOURLY' | 'DAILY'): string {
+  if (granularity === 'HOURLY') {
+    // "2026-03-10T14:00:00" → "14:00"
+    const timePart = bucket.includes('T') ? bucket.split('T')[1] : bucket
+    return timePart.slice(0, 5)
+  }
+  // "2026-03-10" → "03-10"
+  return bucket.slice(5, 10)
+}
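To make the slicing in `formatBucket` concrete, here is a self-contained copy of the helper as added in the diff, with its two modes exercised in the assertions below:

```typescript
// HOURLY buckets ("2026-03-10T14:00:00") keep only "HH:mm" of the time part;
// DAILY buckets ("2026-03-10") keep only "MM-DD" of the date.
function formatBucket(bucket: string, granularity: 'HOURLY' | 'DAILY'): string {
  if (granularity === 'HOURLY') {
    const timePart = bucket.includes('T') ? bucket.split('T')[1] : bucket
    return timePart.slice(0, 5)
  }
  return bucket.slice(5, 10)
}
```

Note the positional slicing assumes ISO-8601 input; a bucket string in any other layout would yield a misleading label rather than an error.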

View file

@@ -62,6 +62,9 @@ public class DailyAggregationStepConfig {
     @Value("${vessel.batch.chunk-size:5000}")
     private int chunkSize;

+    @Value("${vessel.batch.track.include-abnormal-in-tracks:false}")
+    private boolean includeAbnormalInTracks;
+
     @Bean
     public Step mergeDailyTracksStep() {
         log.info("Building mergeDailyTracksStep with cache-based in-memory merge");
@@ -110,7 +113,9 @@ public class DailyAggregationStepConfig {
         return new CompositeTrackWriter(
                 vesselTrackBulkWriter,
                 abnormalTrackWriter,
-                "daily"
+                "daily",
+                null,
+                includeAbnormalInTracks
         );
     }
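The flag is bound from the `vessel.batch.track.include-abnormal-in-tracks` property with a default of `false`, and the release note states it is enabled in prod to collect reinforcement-learning training data. A sketch of the corresponding profile configuration (the file name `application-prod.yml` and exact nesting are assumptions; only the property key comes from the `@Value` binding above):

```yaml
# application-prod.yml (assumed fragment)
vessel:
  batch:
    track:
      # Also persist abnormal tracks to the normal 5min/hourly/daily tables
      # and the L1/L2 caches, for RL classifier training data collection.
      # Abnormal detection and t_abnormal_tracks logging happen regardless.
      include-abnormal-in-tracks: true
```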

View file

@@ -69,6 +69,9 @@ public class HourlyAggregationStepConfig {
     @Value("${vessel.batch.chunk-size:5000}")
     private int chunkSize;

+    @Value("${vessel.batch.track.include-abnormal-in-tracks:false}")
+    private boolean includeAbnormalInTracks;
+
     //
     // Step 1: merge 5-minute tracks into hourly buckets (in-memory cache based)
     //
@@ -122,7 +125,8 @@ public class HourlyAggregationStepConfig {
                 vesselTrackBulkWriter,
                 abnormalTrackWriter,
                 "hourly",
-                hourlyTrackCache
+                hourlyTrackCache,
+                includeAbnormalInTracks
         );
     }

View file

@@ -97,10 +97,10 @@ public class VesselBatchScheduler {
     }

     /**
-     * S&P AIS API collection (every minute at :15)
-     * Stores the latest positions in the cache; used by the 5-minute aggregation job
+     * S&P AIS API collection (every minute at :45)
+     * Requests during the stable window (from :45s) after the API server finishes loading data
      */
-    @Scheduled(cron = "15 * * * * *")
+    @Scheduled(cron = "45 * * * * *")
     public void runAisTargetImport() {
         if (!schedulerEnabled || shutdownRequested || aisTargetImportJob == null) {
             return;
파일 보기
@@ -96,6 +96,9 @@ public class VesselTrackStepConfig {
     @Value("${vessel.batch.chunk-size:1000}")
     private int chunkSize;

+    @Value("${vessel.batch.track.include-abnormal-in-tracks:false}")
+    private boolean includeAbnormalInTracks;
+
     @PostConstruct
     public void init() {
         // Explicitly set the name of the 5-minute job
@@ -203,18 +206,21 @@ public class VesselTrackStepConfig {
                 log.warn("비정상 궤적 감지 [{}]: vessel={}, avg_speed={}, distance={}",
                         abnormalReason, track.getVesselKey(), track.getAvgSpeed(), track.getDistanceNm());
                 saveAbnormalTrack(track, abnormalReason);
+                if (includeAbnormalInTracks) {
+                    filteredTracks.add(track); // flag true: also include in the normal tables + cache
+                }
             } else {
                 filteredTracks.add(track);
             }

-            // Store the normal track's end position (for cache updates)
-            if (track.getEndPosition() != null) {
+            // Store the track's end position (for cache updates); also tracked when abnormal tracks are included
+            if (filteredTracks.contains(track) && track.getEndPosition() != null) {
                 currentBucketEndPositions.put(track.getMmsi(), VesselBucketPositionDto.builder()
                         .mmsi(track.getMmsi())
                         .endLon(track.getEndPosition().getLon())
                         .endLat(track.getEndPosition().getLat())
                         .endTime(track.getEndPosition().getTime())
                         .build());
             }
         }
     }
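The writer-side behavior above can be summarized: abnormal tracks are always persisted to `t_abnormal_tracks`, and the flag only decides whether they also enter the normal tables and caches. A compact sketch of that decision in TypeScript (names are illustrative; the real code is the Java above):

```typescript
interface Track { id: string; abnormal: boolean }

// Partition tracks the way the 5-minute step does: the abnormal log always
// receives abnormal tracks; `filtered` feeds the normal tables + caches.
function partition(tracks: Track[], includeAbnormalInTracks: boolean) {
  const abnormalLog: Track[] = []
  const filtered: Track[] = []
  for (const t of tracks) {
    if (t.abnormal) {
      abnormalLog.push(t)                           // always recorded
      if (includeAbnormalInTracks) filtered.push(t) // flag-controlled inclusion
    } else {
      filtered.push(t)
    }
  }
  return { abnormalLog, filtered }
}
```

With the flag off (the default), abnormal tracks are quarantined; with it on, detection still fires and logs, but the downstream pipeline sees the full trajectory, which is what the RL data collection needs.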

View file

@@ -46,7 +46,7 @@ public class AisTargetCacheManager {
     @Value("${app.cache.ais-target.ttl-minutes:120}")
     private long ttlMinutes;

-    @Value("${app.cache.ais-target.max-size:300000}")
+    @Value("${app.cache.ais-target.max-size:500000}")
     private int maxSize;

     @PostConstruct

View file

@@ -107,7 +107,7 @@ public class ChnPrmShipCacheWarmer implements ApplicationRunner {
         entities.forEach(entity -> {
             if (entity.getSignalKindCode() == null) {
                 SignalKindCode kindCode = SignalKindCode.resolve(
-                        entity.getVesselType(), entity.getExtraInfo());
+                        entity.getVesselType(), entity.getExtraInfo(), entity.getName());
                 entity.setSignalKindCode(kindCode.getCode());
             }
         });

View file

@ -60,7 +60,7 @@ public class FiveMinTrackCache {
for (VesselTrack track : tracks) { for (VesselTrack track : tracks) {
put(track); put(track);
} }
log.info("[CACHE-MONITOR] L1.putAll: input={}, cacheBefore={}, cacheAfter={}, stats=[{}]", log.debug("[CACHE-MONITOR] L1.putAll: input={}, cacheBefore={}, cacheAfter={}, stats=[{}]",
tracks.size(), beforeSize, cache.estimatedSize(), getStats()); tracks.size(), beforeSize, cache.estimatedSize(), getStats());
} }
@ -89,11 +89,55 @@ public class FiveMinTrackCache {
} }
int totalTracks = result.values().stream().mapToInt(List::size).sum(); int totalTracks = result.values().stream().mapToInt(List::size).sum();
log.info("[CACHE-MONITOR] L1.getTracksInRange [{}, {}): mmsi={}, tracks={}, cacheTotal={}", log.debug("[CACHE-MONITOR] L1.getTracksInRange [{}, {}): mmsi={}, tracks={}, cacheTotal={}",
start, end, result.size(), totalTracks, cache.estimatedSize()); start, end, result.size(), totalTracks, cache.estimatedSize());
return result; return result;
} }
/**
* 요청된 MMSI 키로 직접 O(1) 조회 mmsi×5minBucket 조합으로 Caffeine getIfPresent() 호출
* 기존 getTracksInRange() 전체 스캔(O(n)) 대비 대폭 성능 개선.
* : 1시간 × 100 MMSI = 1,200회 get() vs 최대 1.5M 엔트리 스캔
*/
public Map<String, List<VesselTrack>> getTracksForVessels(
LocalDateTime start, LocalDateTime end, Set<String> mmsiKeys) {
if (mmsiKeys == null || mmsiKeys.isEmpty()) {
return Collections.emptyMap();
}
Map<String, List<VesselTrack>> result = new LinkedHashMap<>();
// 5분 단위 버킷 정렬 (start를 가장 가까운 5분 바닥으로 정렬)
int startMinute = (start.getMinute() / 5) * 5;
LocalDateTime bucket = start.withMinute(startMinute).withSecond(0).withNano(0);
int lookupCount = 0;
int hitCount = 0;
while (!bucket.isAfter(end) && bucket.isBefore(end)) {
for (String mmsi : mmsiKeys) {
String key = buildKey(mmsi, bucket);
VesselTrack track = cache.getIfPresent(key);
lookupCount++;
if (track != null) {
result.computeIfAbsent(mmsi, k -> new ArrayList<>()).add(track);
hitCount++;
}
}
bucket = bucket.plusMinutes(5);
}
// MMSI별 시간순 정렬
for (List<VesselTrack> tracks : result.values()) {
tracks.sort(Comparator.comparing(VesselTrack::getTimeBucket));
}
int totalTracks = result.values().stream().mapToInt(List::size).sum();
log.debug("[CACHE-MONITOR] L1.getTracksForVessels [{}, {}): requestedMmsi={}, lookups={}, hits={}, resultMmsi={}, tracks={}",
start, end, mmsiKeys.size(), lookupCount, hitCount, result.size(), totalTracks);
return result;
}
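The half-open range walk above floors `start` to a 5-minute boundary and then probes exactly one key per bucket per MMSI. A standalone sketch of that arithmetic (class and method names are illustrative, not part of the codebase):

```java
import java.time.LocalDateTime;

public class BucketMath {

    // Floor a timestamp to its 5-minute bucket, mirroring getTracksForVessels()
    static LocalDateTime floorTo5Min(LocalDateTime t) {
        return t.withMinute((t.getMinute() / 5) * 5).withSecond(0).withNano(0);
    }

    // Number of getIfPresent() probes for a half-open range [start, end)
    static int lookupCount(LocalDateTime start, LocalDateTime end, int mmsiCount) {
        int buckets = 0;
        for (LocalDateTime b = floorTo5Min(start); b.isBefore(end); b = b.plusMinutes(5)) {
            buckets++;
        }
        return buckets * mmsiCount;
    }

    public static void main(String[] args) {
        LocalDateTime start = LocalDateTime.of(2026, 3, 27, 10, 0);
        // 1 hour x 100 MMSI: 12 buckets x 100 = 1,200 probes, matching the Javadoc estimate
        System.out.println(lookupCount(start, start.plusHours(1), 100));
    }
}
```

The probe count grows with the time window and vessel set, not with cache size, which is where the win over the full scan comes from.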
/**
 * Remove cache entries in the given time range (called after the hourly merge completes)
 */


@@ -11,6 +11,7 @@ import org.springframework.stereotype.Component;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
/**
@@ -31,6 +32,9 @@ public class HourlyTrackCache {
private Cache<String, VesselTrack> cache;
// Tracks hour buckets whose entries were already simplified (prevents re-simplification)
private final Set<LocalDateTime> simplifiedBuckets = ConcurrentHashMap.newKeySet();
@Value("${app.cache.hourly-track.ttl-hours:26}")
private long ttlHours;

@@ -60,7 +64,7 @@ public class HourlyTrackCache {
for (VesselTrack track : tracks) {
put(track);
}
log.debug("[CACHE-MONITOR] L2.putAll: input={}, cacheBefore={}, cacheAfter={}, stats=[{}]",
tracks.size(), beforeSize, cache.estimatedSize(), getStats());
}
@@ -88,11 +92,52 @@ public class HourlyTrackCache {
}
int totalTracks = result.values().stream().mapToInt(List::size).sum();
log.debug("[CACHE-MONITOR] L2.getTracksInRange [{}, {}): mmsi={}, tracks={}, cacheTotal={}",
start, end, result.size(), totalTracks, cache.estimatedSize());
return result;
}
/**
 * Direct O(1) lookups for the requested MMSI keys: calls Caffeine getIfPresent()
 * per mmsi x hour-bucket combination.
 * Large win over the full O(n) scan in getTracksInRange():
 * e.g. 24 hours x 100 MMSI = 2,400 get() calls vs scanning up to 7M entries.
 */
public Map<String, List<VesselTrack>> getTracksForVessels(
LocalDateTime start, LocalDateTime end, Set<String> mmsiKeys) {
if (mmsiKeys == null || mmsiKeys.isEmpty()) {
return Collections.emptyMap();
}
Map<String, List<VesselTrack>> result = new LinkedHashMap<>();
LocalDateTime bucket = start.withMinute(0).withSecond(0).withNano(0);
int lookupCount = 0;
int hitCount = 0;
while (bucket.isBefore(end)) {
for (String mmsi : mmsiKeys) {
String key = buildKey(mmsi, bucket);
VesselTrack track = cache.getIfPresent(key);
lookupCount++;
if (track != null) {
result.computeIfAbsent(mmsi, k -> new ArrayList<>()).add(track);
hitCount++;
}
}
bucket = bucket.plusHours(1);
}
// Sort each MMSI's tracks chronologically
for (List<VesselTrack> tracks : result.values()) {
tracks.sort(Comparator.comparing(VesselTrack::getTimeBucket));
}
int totalTracks = result.values().stream().mapToInt(List::size).sum();
log.debug("[CACHE-MONITOR] L2.getTracksForVessels [{}, {}): requestedMmsi={}, lookups={}, hits={}, resultMmsi={}, tracks={}",
start, end, mmsiKeys.size(), lookupCount, hitCount, result.size(), totalTracks);
return result;
}
/**
 * Remove cache entries in the given time range (called after the daily merge completes)
 */
@@ -109,6 +154,74 @@ public class HourlyTrackCache {
start, end, before - after, before, after, getStats());
}
/**
 * Simplify the WKT LineStringM of cache entries older than the given number of hours.
 * Keeps only every sampleRate-th point (first/last points are always preserved).
 * Hour buckets that were already simplified are skipped to avoid double simplification.
 *
 * @param hoursAgo   age threshold in hours
 * @param sampleRate sampling rate (2 = keep every 2nd point, ~50% reduction)
 * @return number of entries simplified
 */
public int simplifyOlderThan(int hoursAgo, int sampleRate) {
LocalDateTime threshold = LocalDateTime.now().minusHours(hoursAgo);
int simplified = 0;
int totalOriginal = 0;
int totalAfter = 0;
int skipped = 0;
for (Map.Entry<String, VesselTrack> entry : cache.asMap().entrySet()) {
VesselTrack track = entry.getValue();
if (track.getTimeBucket() == null || !track.getTimeBucket().isBefore(threshold)) {
continue;
}
// Skip hour buckets that were already simplified
if (simplifiedBuckets.contains(track.getTimeBucket())) {
skipped++;
continue;
}
String wkt = track.getTrackGeom();
if (wkt == null || track.getPointCount() == null || track.getPointCount() <= 3) {
continue;
}
int originalCount = track.getPointCount();
String simplifiedWkt = simplifyLineStringM(wkt, sampleRate);
if (simplifiedWkt != null && !simplifiedWkt.equals(wkt)) {
track.setTrackGeom(simplifiedWkt);
int newCount = countWktPoints(simplifiedWkt);
totalOriginal += originalCount;
totalAfter += newCount;
track.setPointCount(newCount);
simplified++;
}
}
// Record simplified hour buckets (every on-the-hour bucket at or before the threshold)
LocalDateTime bucket = threshold.withMinute(0).withSecond(0).withNano(0);
LocalDateTime oldest = LocalDateTime.now().minusHours(ttlHours + 1);
while (!bucket.isBefore(oldest)) {
simplifiedBuckets.add(bucket);
bucket = bucket.minusHours(1);
}
// Prune tracking entries for buckets past the TTL
simplifiedBuckets.removeIf(b -> b.isBefore(oldest));
if (simplified > 0) {
double reduction = totalOriginal > 0 ? (1 - (double) totalAfter / totalOriginal) * 100 : 0;
log.info("[CACHE-SIMPLIFY] L2 간소화: entries={}, skipped={}, points {} -> {} ({}% 감소), threshold={}h",
simplified, skipped, totalOriginal, totalAfter,
String.format("%.1f", reduction), hoursAgo);
} else {
log.debug("[CACHE-SIMPLIFY] L2 간소화 대상 없음: skipped={}, threshold={}h", skipped, hoursAgo);
}
return simplified;
}
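The simplifiedBuckets guard above matters because the Nth-point rule compounds: a second rate-2 pass over an already-halved bucket would cut it to roughly a quarter of the original. A small check of that compounding (class and method names are illustrative):

```java
public class CompoundingCheck {

    // Points kept by one pass of the Nth-point rule over n points
    // (index 0, the last index, and every index divisible by rate survive)
    static int keptAfterOnePass(int n, int rate) {
        int kept = 0;
        for (int i = 0; i < n; i++) {
            if (i == 0 || i == n - 1 || i % rate == 0) kept++;
        }
        return kept;
    }

    public static void main(String[] args) {
        int once = keptAfterOnePass(1000, 2);   // one pass keeps ~50%
        int twice = keptAfterOnePass(once, 2);  // a repeated pass would keep ~25% of the original
        System.out.println(once + " -> " + twice);
    }
}
```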
public long size() {
return cache.estimatedSize();
}
@@ -136,4 +249,48 @@ public class HourlyTrackCache {
private String buildKey(String mmsi, LocalDateTime timeBucket) {
return mmsi + "::" + timeBucket.format(KEY_FORMATTER);
}
/**
 * Keep only every sampleRate-th point of a WKT LineStringM.
 * The first and last points are always preserved.
 *
 * Input format: "LINESTRING M(lon1 lat1 m1,lon2 lat2 m2,...)"
 * or "LINESTRINGM(lon1 lat1 m1,lon2 lat2 m2,...)"
 */
static String simplifyLineStringM(String wkt, int sampleRate) {
if (wkt == null || sampleRate <= 1) return wkt;
int openParen = wkt.indexOf('(');
int closeParen = wkt.lastIndexOf(')');
if (openParen < 0 || closeParen < 0 || closeParen <= openParen + 1) return wkt;
String prefix = wkt.substring(0, openParen + 1);
String coords = wkt.substring(openParen + 1, closeParen);
String[] points = coords.split(",");
if (points.length <= 3) return wkt;
StringBuilder sb = new StringBuilder(prefix);
for (int i = 0; i < points.length; i++) {
if (i == 0 || i == points.length - 1 || i % sampleRate == 0) {
if (sb.length() > prefix.length()) {
sb.append(',');
}
sb.append(points[i]);
}
}
sb.append(')');
return sb.toString();
}
static int countWktPoints(String wkt) {
if (wkt == null) return 0;
int openParen = wkt.indexOf('(');
int closeParen = wkt.lastIndexOf(')');
if (openParen < 0 || closeParen < 0 || closeParen <= openParen + 1) return 0;
String coords = wkt.substring(openParen + 1, closeParen);
if (coords.isBlank()) return 0;
return coords.split(",").length;
}
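To see the sampling rule on a concrete WKT, here is the core of simplifyLineStringM restated as a standalone class (the class name is illustrative; the parsing mirrors the method above, minus the guard clauses):

```java
public class WktSampling {

    // Same Nth-point rule as simplifyLineStringM: keep index 0, the last
    // index, and every index divisible by sampleRate
    static String simplify(String wkt, int sampleRate) {
        int open = wkt.indexOf('(');
        int close = wkt.lastIndexOf(')');
        String[] points = wkt.substring(open + 1, close).split(",");
        if (points.length <= 3) return wkt;
        StringBuilder sb = new StringBuilder(wkt.substring(0, open + 1));
        for (int i = 0; i < points.length; i++) {
            if (i == 0 || i == points.length - 1 || i % sampleRate == 0) {
                if (sb.charAt(sb.length() - 1) != '(') sb.append(',');
                sb.append(points[i]);
            }
        }
        return sb.append(')').toString();
    }

    public static void main(String[] args) {
        // 5 points, rate 2: indices 0, 2, 4 survive
        System.out.println(simplify("LINESTRING M(1 1 1,2 2 2,3 3 3,4 4 4,5 5 5)", 2));
    }
}
```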
}


@@ -0,0 +1,46 @@
package gc.mda.signal_batch.batch.reader;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
/**
 * Simplification scheduler for the L2 HourlyTrackCache.
 *
 * Simplifies the WKT LineStringM of cache entries older than 6 hours via Nth-point sampling.
 * Default schedule: 06:30, 12:30, 18:30 (3 times a day).
 *
 * Effect: with sampleRate=2, roughly 50% point reduction, saving L2 memory.
 */
@Slf4j
@Component
@ConditionalOnProperty(name = "vessel.batch.cache.hourly-simplification.enabled", havingValue = "true")
public class HourlyTrackSimplifier {
private final HourlyTrackCache hourlyTrackCache;
@Value("${vessel.batch.cache.hourly-simplification.hours-ago:6}")
private int hoursAgo;
@Value("${vessel.batch.cache.hourly-simplification.sample-rate:2}")
private int sampleRate;
public HourlyTrackSimplifier(HourlyTrackCache hourlyTrackCache) {
this.hourlyTrackCache = hourlyTrackCache;
}
@Scheduled(cron = "${vessel.batch.cache.hourly-simplification.cron:0 30 6,12,18 * * *}")
public void scheduledSimplification() {
log.info("[HourlySimplifier] 스케줄 간소화 시작 — hoursAgo={}, sampleRate={}, cacheSize={}",
hoursAgo, sampleRate, hourlyTrackCache.size());
long start = System.currentTimeMillis();
int simplified = hourlyTrackCache.simplifyOlderThan(hoursAgo, sampleRate);
long elapsed = System.currentTimeMillis() - start;
log.info("[HourlySimplifier] 스케줄 간소화 완료 — simplified={}, elapsed={}ms, cacheSize={}",
simplified, elapsed, hourlyTrackCache.size());
}
}


@@ -35,9 +35,10 @@ public class AisTargetCacheWriter implements ItemWriter<AisTargetEntity> {
List<? extends AisTargetEntity> items = chunk.getItems();
log.debug("AIS Target 캐시 업데이트 시작: {} 건", items.size());

// 1. Resolve SignalKindCode (now based on vesselType + extraInfo + shipName; written to the cache only once)
items.forEach(item -> {
SignalKindCode kindCode = SignalKindCode.resolve(
item.getVesselType(), item.getExtraInfo(), item.getName());
item.setSignalKindCode(kindCode.getCode());
});


@@ -25,21 +25,24 @@ public class CompositeTrackWriter implements ItemWriter<AbnormalDetectionResult>
private final AbnormalTrackWriter abnormalTrackWriter;
private final String targetTable;
private final HourlyTrackCache hourlyTrackCache; // nullable (unused by the daily writer)
private final boolean includeAbnormalInTracks;

public CompositeTrackWriter(VesselTrackBulkWriter vesselTrackBulkWriter,
AbnormalTrackWriter abnormalTrackWriter,
String targetTable,
HourlyTrackCache hourlyTrackCache,
boolean includeAbnormalInTracks) {
this.vesselTrackBulkWriter = vesselTrackBulkWriter;
this.abnormalTrackWriter = abnormalTrackWriter;
this.targetTable = targetTable;
this.hourlyTrackCache = hourlyTrackCache;
this.includeAbnormalInTracks = includeAbnormalInTracks;
}
public CompositeTrackWriter(VesselTrackBulkWriter vesselTrackBulkWriter,
AbnormalTrackWriter abnormalTrackWriter,
String targetTable) {
this(vesselTrackBulkWriter, abnormalTrackWriter, targetTable, null, false);
}
@BeforeStep
@@ -66,9 +69,11 @@ public class CompositeTrackWriter implements ItemWriter<AbnormalDetectionResult>
abnormalResults.add(result);

// If a corrected track exists, store it as the normal track.
// If it is null, the whole track is abnormal and is excluded
// (unless the flag is true, in which case the original track is kept).
if (result.getCorrectedTrack() != null) {
normalTracks.add(result.getCorrectedTrack());
} else if (includeAbnormalInTracks) {
normalTracks.add(result.getOriginalTrack());
} else {
log.debug("비정상 궤적 전체 제외: vessel={}",
result.getOriginalTrack().getVesselKey());
@@ -86,7 +91,7 @@ public class CompositeTrackWriter implements ItemWriter<AbnormalDetectionResult>
if (hourlyTrackCache != null) {
long l2Before = hourlyTrackCache.size();
hourlyTrackCache.putAll(normalTracks);
log.debug("[CACHE-MONITOR] CompositeTrackWriter → L2.putAll: tracks={}, L2 before={}, after={}",
normalTracks.size(), l2Before, hourlyTrackCache.size());
}
} else if ("daily".equals(targetTable)) {


@@ -6,6 +6,7 @@ import gc.mda.signal_batch.domain.gis.dto.VesselContactRequest;
import gc.mda.signal_batch.domain.gis.dto.VesselContactResponse;
import gc.mda.signal_batch.domain.gis.service.AreaSearchService;
import gc.mda.signal_batch.domain.gis.service.VesselContactService;
import gc.mda.signal_batch.global.exception.QueryTimeoutException;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.media.Content;
import io.swagger.v3.oas.annotations.media.ExampleObject;
@@ -219,4 +220,11 @@ public class AreaSearchController {
return ResponseEntity.status(HttpStatus.SERVICE_UNAVAILABLE)
.body(Map.of("error", e.getMessage()));
}
@ExceptionHandler(QueryTimeoutException.class)
public ResponseEntity<Map<String, String>> handleQueryTimeout(QueryTimeoutException e) {
log.warn("Area search query timeout: {}", e.getMessage());
return ResponseEntity.status(HttpStatus.SERVICE_UNAVAILABLE)
.body(Map.of("error", e.getMessage()));
}
}


@@ -6,8 +6,11 @@ import gc.mda.signal_batch.domain.vessel.dto.TrackResponse;
import gc.mda.signal_batch.domain.vessel.dto.VesselTracksRequest;
import gc.mda.signal_batch.domain.vessel.dto.CompactVesselTrack;
import gc.mda.signal_batch.domain.vessel.dto.RecentVesselPositionDto;
import gc.mda.signal_batch.domain.vessel.dto.RecentPositionDetailRequest;
import gc.mda.signal_batch.domain.vessel.dto.RecentPositionDetailResponse;
import gc.mda.signal_batch.domain.gis.service.GisService;
import gc.mda.signal_batch.domain.vessel.service.VesselPositionService;
import gc.mda.signal_batch.domain.vessel.service.VesselPositionDetailService;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.Parameter;
import io.swagger.v3.oas.annotations.tags.Tag;
@@ -28,6 +31,7 @@ public class GisController {
private final GisService gisService;
private final VesselPositionService vesselPositionService;
private final VesselPositionDetailService vesselPositionDetailService;

@GetMapping("/haegu/boundaries")
@Operation(summary = "해구 경계 조회", description = "모든 해구의 경계 정보를 GeoJSON 형식으로 반환")
@@ -97,4 +101,20 @@ public class GisController {
return vesselPositionService.getRecentVesselPositions(minutes);
}
@PostMapping("/vessels/recent-positions-detail")
@Operation(
summary = "최근 위치 상세 조회 (공간 필터 지원)",
description = "AIS 캐시에서 지정 시간 내 선박의 상세 정보를 공간 필터(폴리곤/원)와 함께 조회합니다. "
+ "coordinates(폴리곤)와 center+radiusNm(원) 중 하나를 지정하거나, 둘 다 생략하면 전체 조회합니다."
)
public List<RecentPositionDetailResponse> getRecentPositionsDetail(
@RequestBody RecentPositionDetailRequest request) {
if (request.getMinutes() <= 0 || request.getMinutes() > 1440) {
throw new IllegalArgumentException("Minutes must be between 1 and 1440");
}
return vesselPositionDetailService.getRecentPositionsDetail(request);
}
}


@@ -18,6 +18,7 @@ import io.swagger.v3.oas.annotations.media.Schema;
import io.swagger.v3.oas.annotations.responses.ApiResponse;
import io.swagger.v3.oas.annotations.responses.ApiResponses;
import io.swagger.v3.oas.annotations.tags.Tag;
import jakarta.servlet.http.HttpServletRequest;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.web.bind.annotation.*;
@@ -188,8 +189,22 @@ public class GisControllerV2 {
required = true,
content = @Content(schema = @Schema(implementation = VesselTracksRequest.class))
)
@RequestBody VesselTracksRequest request,
HttpServletRequest httpRequest) {
return gisServiceV2.getVesselTracksV2(request, getClientIp(httpRequest), getClientId(httpRequest));
}
private String getClientId(HttpServletRequest request) {
return gc.mda.signal_batch.global.config.WebSocketStompConfig.extractClientIdFromRequest(request);
}
private String getClientIp(HttpServletRequest request) {
String[] headers = {"X-Forwarded-For", "X-Original-Forwarded-For", "X-Real-IP"};
for (String header : headers) {
String ip = request.getHeader(header);
if (ip != null && !ip.isBlank()) return ip.split(",")[0].trim();
}
return request.getRemoteAddr();
}
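The getClientIp() chain above takes the first entry of the first populated forwarding header; that entry is the client-most hop of a comma-separated X-Forwarded-For list and is client-supplied unless a trusted proxy overwrites it. A minimal sketch of just the parsing step (hypothetical class name):

```java
public class ClientIpParsing {

    // Mirrors the header handling in getClientIp(): first entry, trimmed
    static String firstForwardedIp(String headerValue) {
        if (headerValue == null || headerValue.isBlank()) return null;
        return headerValue.split(",")[0].trim();
    }

    public static void main(String[] args) {
        // Client IP, then two intermediate proxies
        System.out.println(firstForwardedIp("203.0.113.7, 10.0.0.2, 10.0.0.3"));
    }
}
```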
@GetMapping("/vessels/recent-positions")


@@ -42,6 +42,10 @@ public class AreaSearchRequest {
@Schema(description = "탐색 대상 폴리곤 영역 목록 (1~10개)", requiredMode = Schema.RequiredMode.REQUIRED)
private List<SearchPolygon> polygons;
@Schema(description = "true 시 중국허가선박(~1,400척)만 분석 대상으로 필터링", example = "false")
@Builder.Default
private boolean chnPrmShipOnly = false;
@Schema(description = "검색 모드 (폴리곤이 2개 이상일 때 적용)")
public enum SearchMode {
@Schema(description = "합집합: 어느 한 영역이라도 통과한 선박")


@@ -47,6 +47,10 @@ public class VesselContactRequest {
@Schema(description = "최대 접촉 판정 거리 (미터, 50~5000)", example = "1000", requiredMode = Schema.RequiredMode.REQUIRED)
private Double maxContactDistanceMeters;
@Schema(description = "true 시 중국허가선박만 대상으로 접촉 분석", example = "false")
@Builder.Default
private boolean chnPrmShipOnly = false;
@Data
@Builder
@NoArgsConstructor


@@ -16,10 +16,10 @@ import java.util.List;
@Schema(description = "비정상 접촉 선박 탐색 응답")
public class VesselContactResponse {

@Schema(description = "접촉 선박 쌍 목록 — 동일 선박 쌍이 시간 갭(20분 이상)으로 분리된 여러 접촉 세그먼트를 가질 수 있음")
private List<VesselContactPair> contacts;

@Schema(description = "관련 선박의 전체 기간 항적 — 선박당 1건으로 중복 제거됨 (CompactVesselTrack)")
private List<CompactVesselTrack> tracks;

@Schema(description = "탐색 요약 정보")
파일 보기

@@ -6,9 +6,14 @@ import gc.mda.signal_batch.domain.gis.dto.AreaSearchRequest.SearchPolygon;
import gc.mda.signal_batch.domain.gis.dto.AreaSearchResponse;
import gc.mda.signal_batch.domain.gis.dto.AreaSearchResponse.AreaSearchSummary;
import gc.mda.signal_batch.domain.gis.dto.AreaSearchResponse.PolygonHitDetail;
import gc.mda.signal_batch.batch.reader.ChnPrmShipProperties;
import gc.mda.signal_batch.domain.vessel.dto.CompactVesselTrack;
import gc.mda.signal_batch.global.exception.QueryTimeoutException;
import gc.mda.signal_batch.global.util.TrackMemoryEstimator;
import gc.mda.signal_batch.global.websocket.service.ActiveQueryManager;
import gc.mda.signal_batch.global.websocket.service.DailyTrackCacheManager;
import gc.mda.signal_batch.global.websocket.service.DailyTrackCacheManager.DailyTrackData;
import gc.mda.signal_batch.global.websocket.service.TrackMemoryBudgetManager;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.locationtech.jts.geom.*;
@@ -28,6 +33,9 @@ import java.util.stream.Collectors;
public class AreaSearchService {

private final DailyTrackCacheManager cacheManager;
private final ActiveQueryManager activeQueryManager;
private final TrackMemoryBudgetManager memoryBudgetManager;
private final ChnPrmShipProperties chnPrmShipProperties;

private static final GeometryFactory GEOMETRY_FACTORY = new GeometryFactory();
/**
@@ -45,82 +53,115 @@ public class AreaSearchService {
return buildEmptyResponse(request, startMs);
}
// 3. Concurrency and memory management (acquire a data-loading slot and memory budget)
String queryId = "area-search-" + Long.toHexString(System.nanoTime());
boolean slotAcquired = false, memoryReserved = false;
try {
if (!activeQueryManager.tryAcquireQuerySlotImmediate(queryId)) {
if (!activeQueryManager.tryAcquireQuerySlot(queryId)) {
throw new QueryTimeoutException("서버 과부하: area-search 슬롯 대기 타임아웃");
}
}
slotAcquired = true;
long estimatedBytes = TrackMemoryEstimator.estimateQueryBytes(targetDates.size(), 2000);
memoryBudgetManager.reserveQueryMemory(queryId, estimatedBytes, 30_000L);
memoryReserved = true;
// 4. Merge multi-day data into a single track per vessel
Map<String, CompactVesselTrack> mergedTracks = mergeMultipleDays(targetDates);
if (mergedTracks.isEmpty()) {
return buildEmptyResponse(request, startMs);
}
// 4-1. Filter to Chinese-permit vessels (ChnPrmShip) when requested
if (request.isChnPrmShipOnly()) {
int totalBefore = mergedTracks.size();
Set<String> chnPrmMmsiSet = chnPrmShipProperties.getMmsiSet();
mergedTracks.entrySet().removeIf(e -> !chnPrmMmsiSet.contains(e.getKey()));
log.debug("ChnPrmShip 필터 적용: {} → {} 선박", totalBefore, mergedTracks.size());
if (mergedTracks.isEmpty()) {
return buildEmptyResponse(request, startMs);
}
}
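The removeIf above retains, in place, only the map entries whose key is in the permitted MMSI set. A self-contained illustration of that retention idiom (MMSI values are made up):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class KeyRetention {

    // Drop every entry whose key is not in the allowed set,
    // the same idiom as the ChnPrmShip filter above
    static <K, V> void retainKeys(Map<K, V> map, Set<K> allowed) {
        map.entrySet().removeIf(e -> !allowed.contains(e.getKey()));
    }

    public static void main(String[] args) {
        Map<String, String> tracks = new LinkedHashMap<>();
        tracks.put("412000001", "trackA");
        tracks.put("440000002", "trackB");
        retainKeys(tracks, Set.of("412000001"));
        System.out.println(tracks.keySet());
    }
}
```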
// 5. Convert request coordinates to JTS Polygons
List<Polygon> jtsPolygons = convertToJtsPolygons(request.getPolygons());
// 6. Build an STRtree spatial index from the merged tracks
STRtree spatialIndex = buildSpatialIndex(mergedTracks);
// 7. Collect hit vessels and individual visits (trips) per polygon
List<Map<String, List<PolygonHitDetail>>> perPolygonHits = new ArrayList<>();
for (int i = 0; i < jtsPolygons.size(); i++) {
Polygon polygon = jtsPolygons.get(i);
SearchPolygon searchPolygon = request.getPolygons().get(i);
Map<String, List<PolygonHitDetail>> hits = findHitsForPolygon(
polygon, searchPolygon, mergedTracks, spatialIndex);
perPolygonHits.add(hits);
}
// 8. Combine results according to the search mode
SearchMode mode = request.getPolygons().size() == 1 ? SearchMode.ANY : request.getMode();
Map<String, List<PolygonHitDetail>> resultHits;
switch (mode) {
case ALL:
resultHits = processAllMode(perPolygonHits);
break;
case SEQUENTIAL:
resultHits = processSequentialMode(perPolygonHits);
break;
default:
resultHits = processAnyMode(perPolygonHits);
break;
}
// 9. Return the hit vessels' full-period tracks plus hit metadata
List<CompactVesselTrack> resultTracks = resultHits.keySet().stream()
.map(mergedTracks::get)
.filter(Objects::nonNull)
.collect(Collectors.toList());
long totalPoints = resultHits.values().stream()
.flatMap(Collection::stream)
.mapToLong(h -> h.getHitPointCount() != null ? h.getHitPointCount() : 0)
.sum();
int totalCachedVessels = targetDates.stream()
.mapToInt(d -> {
DailyTrackData data = cacheManager.getDailyTrackData(d);
return data != null ? data.getVesselCount() : 0;
})
.sum();
long elapsedMs = System.currentTimeMillis() - startMs;
log.info("Area search completed: mode={}, polygons={}, hitVessels={}, totalPoints={}, chnPrmOnly={}, elapsed={}ms",
mode, request.getPolygons().size(), resultHits.size(), totalPoints, request.isChnPrmShipOnly(), elapsedMs);
return AreaSearchResponse.builder()
.tracks(resultTracks)
.hitDetails(resultHits)
.summary(AreaSearchSummary.builder()
.totalVessels(resultHits.size())
.totalPoints(totalPoints)
.mode(mode)
.polygonIds(request.getPolygons().stream()
.map(SearchPolygon::getId)
.collect(Collectors.toList()))
.processingTimeMs(elapsedMs)
.cachedDates(targetDates.stream()
.map(LocalDate::toString)
.collect(Collectors.toList()))
.totalCachedVessels(totalCachedVessels)
.build())
.build();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
throw new QueryTimeoutException("area-search 슬롯 대기 중 인터럽트");
} finally {
if (memoryReserved) memoryBudgetManager.releaseQueryMemory(queryId);
if (slotAcquired) activeQueryManager.releaseQuerySlot(queryId);
}
}
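ActiveQueryManager's internals are not shown in this diff; its acquire sequence used above (a fast non-blocking try, then a bounded wait, with release guaranteed in finally) can be sketched with a plain Semaphore. All names below are hypothetical stand-ins:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class QuerySlotSketch {

    private final Semaphore slots;

    QuerySlotSketch(int maxConcurrent) {
        this.slots = new Semaphore(maxConcurrent);
    }

    // Fast path: succeed only if a slot is free right now
    boolean tryAcquireImmediate() {
        return slots.tryAcquire();
    }

    // Slow path: wait up to timeoutMs for a slot before giving up
    boolean tryAcquireWithWait(long timeoutMs) throws InterruptedException {
        return slots.tryAcquire(timeoutMs, TimeUnit.MILLISECONDS);
    }

    void release() {
        slots.release();
    }

    public static void main(String[] args) throws InterruptedException {
        QuerySlotSketch mgr = new QuerySlotSketch(1);
        boolean first = mgr.tryAcquireImmediate();   // slot free
        boolean second = mgr.tryAcquireImmediate();  // slot taken
        boolean third = mgr.tryAcquireWithWait(10);  // still taken after bounded wait
        mgr.release();
        System.out.println(first + " " + second + " " + third);
    }
}
```

The try/finally in the service mirrors this: each acquired resource is tracked by a flag and released exactly once, even when the query throws.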
// Input validation
@@ -244,9 +285,11 @@ public class AreaSearchService {
// Merge multiple days
CompactVesselTrack first = trackList.get(0);
int totalPoints = trackList.stream()
.mapToInt(t -> t.getPointCount() != null ? t.getPointCount() : 0).sum();
List<double[]> geo = new ArrayList<>(totalPoints);
List<String> ts = new ArrayList<>(totalPoints);
List<Double> sp = new ArrayList<>(totalPoints);
double totalDist = 0;
double maxSpeed = 0;
int pointCount = 0;
@@ -347,10 +390,13 @@ public class AreaSearchService {
long currentExit = 0;
int currentHitCount = 0;
int visitIndex = 0;
Coordinate reusable = new Coordinate();

for (int i = 0; i < geometry.size(); i++) {
double[] coord = geometry.get(i);
reusable.x = coord[0];
reusable.y = coord[1];
Point point = GEOMETRY_FACTORY.createPoint(reusable);
boolean isInside = prepared.contains(point);

if (isInside) {
@@ -438,6 +484,7 @@ public class AreaSearchService {
try {
return Long.parseLong(timestamps.get(index));
} catch (NumberFormatException e) {
log.warn("Invalid timestamp at index {}: {}", index, timestamps.get(index));
return 0L;
}
}


@@ -5,7 +5,6 @@ import gc.mda.signal_batch.domain.vessel.dto.TrackResponse;
import gc.mda.signal_batch.domain.vessel.dto.VesselStatsResponse;
import gc.mda.signal_batch.domain.vessel.dto.VesselTracksRequest;
import gc.mda.signal_batch.domain.vessel.dto.CompactVesselTrack;
import gc.mda.signal_batch.global.util.TrackSimplificationUtils;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Qualifier;
@@ -604,9 +603,11 @@ public class GisService {
Map<String, String> vesselInfo = getVesselInfo(mmsi);
String shipName = vesselInfo.get("ship_name");
String shipType = vesselInfo.get("ship_type");
String signalKindCode = vesselInfo.get("signal_kind_code");
String nationalCode = (mmsi != null && mmsi.length() >= 3) ? mmsi.substring(0, 3) : null;
String shipKindCode = (signalKindCode != null && !signalKindCode.isEmpty())
? signalKindCode : "000027";

return CompactVesselTrack.builder()
.vesselId(mmsi)
@ -628,7 +629,7 @@ public class GisService {
JdbcTemplate jdbcTemplate = new JdbcTemplate(queryDataSource); JdbcTemplate jdbcTemplate = new JdbcTemplate(queryDataSource);
try { try {
String sql = """ String sql = """
SELECT ship_nm as ship_name, vessel_type as ship_type SELECT ship_nm as ship_name, vessel_type as ship_type, signal_kind_code
FROM signal.t_ais_position FROM signal.t_ais_position
WHERE mmsi = ? WHERE mmsi = ?
LIMIT 1 LIMIT 1


@@ -9,12 +9,17 @@ import gc.mda.signal_batch.domain.vessel.model.AisTargetEntity;
 import gc.mda.signal_batch.domain.vessel.dto.TrackResponse;
 import gc.mda.signal_batch.domain.vessel.dto.VesselTracksRequest;
 import gc.mda.signal_batch.domain.vessel.model.VesselTrack;
+import gc.mda.signal_batch.global.exception.MemoryBudgetExceededException;
 import gc.mda.signal_batch.global.exception.QueryTimeoutException;
 import gc.mda.signal_batch.global.util.TrackConverter;
+import gc.mda.signal_batch.global.util.TrackMemoryEstimator;
 import gc.mda.signal_batch.global.util.VesselTrackToCompactConverter;
 import gc.mda.signal_batch.global.websocket.service.ActiveQueryManager;
 import gc.mda.signal_batch.global.websocket.service.CacheTrackSimplifier;
 import gc.mda.signal_batch.global.websocket.service.DailyTrackCacheManager;
+import gc.mda.signal_batch.global.websocket.service.TrackMemoryBudgetManager;
+import gc.mda.signal_batch.monitoring.service.QueryMetricsBufferService;
+import gc.mda.signal_batch.monitoring.service.QueryMetricsService;
 import lombok.extern.slf4j.Slf4j;
 import org.springframework.beans.factory.annotation.Qualifier;
 import org.springframework.beans.factory.annotation.Value;
@@ -52,6 +57,8 @@ public class GisServiceV2 {
     private final VesselTrackToCompactConverter vesselTrackToCompactConverter;
     private final ChnPrmShipCacheManager chnPrmShipCacheManager;
     private final ChnPrmShipProperties chnPrmShipProperties;
+    private final TrackMemoryBudgetManager memoryBudgetManager;
+    private final QueryMetricsBufferService queryMetricsBufferService;

     @Value("${rest.v2.query.timeout-seconds:30}")
     private int restQueryTimeout;
@@ -72,7 +79,9 @@ public class GisServiceV2 {
             FiveMinTrackCache fiveMinTrackCache,
             VesselTrackToCompactConverter vesselTrackToCompactConverter,
             ChnPrmShipCacheManager chnPrmShipCacheManager,
-            ChnPrmShipProperties chnPrmShipProperties) {
+            ChnPrmShipProperties chnPrmShipProperties,
+            TrackMemoryBudgetManager memoryBudgetManager,
+            QueryMetricsBufferService queryMetricsBufferService) {
         this.queryDataSource = queryDataSource;
         this.activeQueryManager = activeQueryManager;
         this.dailyTrackCacheManager = dailyTrackCacheManager;
@@ -83,6 +92,8 @@ public class GisServiceV2 {
         this.vesselTrackToCompactConverter = vesselTrackToCompactConverter;
         this.chnPrmShipCacheManager = chnPrmShipCacheManager;
         this.chnPrmShipProperties = chnPrmShipProperties;
+        this.memoryBudgetManager = memoryBudgetManager;
+        this.queryMetricsBufferService = queryMetricsBufferService;
     }

     /**
@@ -274,13 +285,28 @@ public class GisServiceV2 {
     /**
      * Per-vessel track query V2 (cache + semaphore + simplification + ChnPrmShip enrichment)
      */
-    public List<CompactVesselTrack> getVesselTracksV2(VesselTracksRequest request) {
+    public List<CompactVesselTrack> getVesselTracksV2(VesselTracksRequest request, String clientIp, String clientId) {
         String queryId = "rest-vessels-" + UUID.randomUUID().toString().substring(0, 8);
+        long startMs = System.currentTimeMillis();
         boolean slotAcquired = false;
+        boolean memoryReserved = false;
         try {
             slotAcquired = acquireSlotWithWait(queryId);

+            // Pre-reserve query memory
+            int days = (int) java.time.Duration.between(request.getStartTime(), request.getEndTime()).toDays() + 1;
+            long estimatedBytes = TrackMemoryEstimator.estimateQueryBytes(days, request.getVessels().size());
+            try {
+                memoryBudgetManager.reserveQueryMemory(queryId, estimatedBytes,
+                        memoryBudgetManager.getProperties().getQueueTimeoutSeconds() * 1000L);
+                memoryReserved = true;
+            } catch (MemoryBudgetExceededException e) {
+                log.warn("[MemoryBudget] REST 쿼리 메모리 예약 실패: queryId={}, estimated={}MB — {}",
+                        queryId, estimatedBytes / (1024 * 1024), e.getMessage());
+                throw e;
+            }
+
             List<CompactVesselTrack> result;
             if (dailyTrackCacheManager.isEnabled() &&
@@ -303,9 +329,14 @@ public class GisServiceV2 {
                     result.size(), request.getVessels().size(),
                     dailyTrackCacheManager.isEnabled(), request.isIncludeChnPrmShip());

+            enqueueRestMetric(queryId, request, result, startMs, clientIp, clientId);
             return result;
         } finally {
+            if (memoryReserved) {
+                memoryBudgetManager.releaseQueryMemory(queryId);
+            }
             if (slotAcquired) {
                 activeQueryManager.releaseQuerySlot(queryId);
                 if (activeQueryManager.isHeapPressureHigh()) {
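The reserve-in-try / release-in-finally pattern added above can be sketched in isolation. `SimpleMemoryBudget`, its byte constants, and `estimateQueryBytes` below are illustrative stand-ins, not the project's actual `TrackMemoryBudgetManager` API (which is not shown in this diff):

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch of a byte-budget reservation guard (hypothetical, not the real manager)
class SimpleMemoryBudget {
    private final long capacityBytes;
    private final AtomicLong reservedBytes = new AtomicLong();

    SimpleMemoryBudget(long capacityBytes) { this.capacityBytes = capacityBytes; }

    // Rough per-query estimate: assumed ~40 bytes per point, ~5000 points per vessel-day
    static long estimateQueryBytes(int days, int vessels) {
        return (long) days * vessels * 5_000L * 40L;
    }

    // Try to reserve; fail fast instead of queueing (the real manager waits with a timeout)
    boolean tryReserve(long bytes) {
        long prev = reservedBytes.getAndAdd(bytes);
        if (prev + bytes > capacityBytes) {
            reservedBytes.addAndGet(-bytes); // roll back the over-reservation
            return false;
        }
        return true;
    }

    void release(long bytes) { reservedBytes.addAndGet(-bytes); }

    long reserved() { return reservedBytes.get(); }
}
```

The caller mirrors the diff: reserve before the expensive query, set a `memoryReserved` flag, and release in `finally` so a thrown exception cannot leak the reservation.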
@@ -315,6 +346,34 @@ public class GisServiceV2 {
         }
     }

+    private void enqueueRestMetric(String queryId, VesselTracksRequest request,
+            List<CompactVesselTrack> result, long startMs, String clientIp, String clientId) {
+        try {
+            int totalPoints = result.stream().mapToInt(CompactVesselTrack::getPointCount).sum();
+            long responseBytes = (long) result.size() * 200 + (long) totalPoints * 40;
+            queryMetricsBufferService.enqueue(QueryMetricsService.QueryMetric.builder()
+                    .queryId(queryId)
+                    .queryType("REST_V2")
+                    .startTime(request.getStartTime())
+                    .endTime(request.getEndTime())
+                    .requestedMmsi(request.getVessels().size())
+                    .dataPath(dailyTrackCacheManager.isEnabled() ? "HYBRID" : "DB")
+                    .uniqueVessels(result.size())
+                    .totalTracks(result.size())
+                    .totalPoints(totalPoints)
+                    .pointsAfterSimplify(totalPoints)
+                    .totalChunks(1)
+                    .responseBytes(responseBytes)
+                    .elapsedMs(System.currentTimeMillis() - startMs)
+                    .status("COMPLETED")
+                    .clientIp(clientIp)
+                    .clientId(clientId)
+                    .build());
+        } catch (Exception e) {
+            log.debug("Failed to enqueue REST metric: {}", e.getMessage());
+        }
+    }

     // Cache lookup logic
     private List<CompactVesselTrack> queryWithCache(VesselTracksRequest request) {
@@ -328,24 +387,16 @@ public class GisServiceV2 {
         Set<String> requestedMmsis = new HashSet<>(request.getVessels());

-        // 1. Query the cache (cached dates) + partial DB fallback for missing MMSI
+        // 1. O(1) key-based lookup of only the requested MMSI from the L3 cache + partial DB fallback for missing MMSI
         if (split.hasCachedData()) {
-            List<CompactVesselTrack> cachedTracks =
-                    dailyTrackCacheManager.getCachedTracksMultipleDays(split.getCachedDates());
-            int totalCachedCount = cachedTracks.size();
-            List<CompactVesselTrack> filteredCached = cachedTracks.stream()
-                    .filter(t -> requestedMmsis.contains(t.getVesselId()))
-                    .map(t -> t.toBuilder().build())
-                    .collect(Collectors.toList());
-            cachedTracks.clear();
+            List<CompactVesselTrack> filteredCached =
+                    dailyTrackCacheManager.getCachedTracksForVessels(split.getCachedDates(), requestedMmsis);
             allTracks.addAll(filteredCached);
-            log.debug("[CacheQuery] cached {} days -> {} tracks (filtered from {})",
-                    split.getCachedDates().size(), filteredCached.size(), totalCachedCount);
+            log.debug("[CacheQuery] cached {} days -> {} tracks (key-based lookup, {} MMSI requested)",
+                    split.getCachedDates().size(), filteredCached.size(), requestedMmsis.size());

-            // DB fallback for MMSI missing from the daily cache (hourly/5min tier lookup)
+            // DB fallback for MMSI missing from the daily cache
             Set<String> cachedMmsis = filteredCached.stream()
                     .map(CompactVesselTrack::getVesselId)
                     .collect(Collectors.toSet());
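The shift from scan-and-filter to key-based lookup is easy to demonstrate with a plain map. The type and method names below are hypothetical; only the access pattern mirrors the `getCachedTracksForVessels` change:

```java
import java.util.*;

// Sketch of the scan-and-filter vs key-based cache access patterns (illustrative names)
class TrackCacheSketch {
    private final Map<String, List<String>> tracksByMmsi = new HashMap<>();

    void put(String mmsi, List<String> track) { tracksByMmsi.put(mmsi, track); }

    // Old style: walk every cached entry, then filter — O(cache size)
    List<List<String>> scanAndFilter(Set<String> requested) {
        List<List<String>> out = new ArrayList<>();
        for (Map.Entry<String, List<String>> e : tracksByMmsi.entrySet()) {
            if (requested.contains(e.getKey())) out.add(e.getValue());
        }
        return out;
    }

    // New style: one map lookup per requested key — O(requested size)
    List<List<String>> getForVessels(Set<String> requested) {
        List<List<String>> out = new ArrayList<>();
        for (String mmsi : requested) {
            List<String> t = tracksByMmsi.get(mmsi);
            if (t != null) out.add(t); // missing keys become the DB-fallback set
        }
        return out;
    }
}
```

Both return the same result; the second never materializes or copies the full cache, which matters when a few MMSI are requested against a cache of tens of thousands of vessels.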
@@ -383,23 +434,22 @@ public class GisServiceV2 {
             }
         }

-        // 3-a. hourly-range L2 cache, then DB fallback (with partial fallback for missing MMSI)
+        // 3-a. hourly-range L2 cache O(1) key-based lookup, then DB fallback (missing MMSI)
         if (split.hasHourlyRange()) {
             DailyTrackCacheManager.DateRange hr = split.getHourlyRange();
             Map<String, List<VesselTrack>> hourlyTracks =
-                    hourlyTrackCache.getTracksInRange(hr.getStart(), hr.getEnd());
+                    hourlyTrackCache.getTracksForVessels(hr.getStart(), hr.getEnd(), requestedMmsis);
             if (!hourlyTracks.isEmpty()) {
-                Map<String, List<VesselTrack>> filtered = filterByMmsi(hourlyTracks, requestedMmsis);
-                List<CompactVesselTrack> converted = vesselTrackToCompactConverter.convert(filtered);
+                List<CompactVesselTrack> converted = vesselTrackToCompactConverter.convert(hourlyTracks);
                 allTracks.addAll(converted);
                 int totalPts = converted.stream().mapToInt(CompactVesselTrack::getPointCount).sum();
-                log.info("[CACHE-MONITOR] queryWithCache L2 HIT [{}, {}): cacheVessels={}, filteredVessels={}, compactTracks={}, points={}",
-                        hr.getStart(), hr.getEnd(), hourlyTracks.size(), filtered.size(), converted.size(), totalPts);
+                log.info("[CACHE-MONITOR] queryWithCache L2 HIT [{}, {}): resultVessels={}, compactTracks={}, points={}",
+                        hr.getStart(), hr.getEnd(), hourlyTracks.size(), converted.size(), totalPts);

                 // DB fallback for MMSI missing from the cache
                 Set<String> missingMmsis = new HashSet<>(requestedMmsis);
-                missingMmsis.removeAll(filtered.keySet());
+                missingMmsis.removeAll(hourlyTracks.keySet());
                 if (!missingMmsis.isEmpty()) {
                     VesselTracksRequest fallbackReq = VesselTracksRequest.builder()
                             .startTime(hr.getStart()).endTime(hr.getEnd())
@@ -407,7 +457,7 @@ public class GisServiceV2 {
                     List<CompactVesselTrack> dbResult = gisService.getVesselTracks(fallbackReq);
                     allTracks.addAll(dbResult);
                     log.info("[CACHE-MONITOR] queryWithCache L2 PARTIAL → DB fallback: cacheHit={}, cacheMiss={}, dbTracks={}",
-                            filtered.size(), missingMmsis.size(), dbResult.size());
+                            hourlyTracks.size(), missingMmsis.size(), dbResult.size());
                 }
             } else {
                 VesselTracksRequest hourlyReq = VesselTracksRequest.builder()
@@ -420,23 +470,22 @@ public class GisServiceV2 {
             }
         }

-        // 3-b. 5min-range L1 cache, then DB fallback (with partial fallback for missing MMSI)
+        // 3-b. 5min-range L1 cache O(1) key-based lookup, then DB fallback (missing MMSI)
         if (split.hasFiveMinRange()) {
             DailyTrackCacheManager.DateRange fr = split.getFiveMinRange();
             Map<String, List<VesselTrack>> fiveMinTracks =
-                    fiveMinTrackCache.getTracksInRange(fr.getStart(), fr.getEnd());
+                    fiveMinTrackCache.getTracksForVessels(fr.getStart(), fr.getEnd(), requestedMmsis);
             if (!fiveMinTracks.isEmpty()) {
-                Map<String, List<VesselTrack>> filtered = filterByMmsi(fiveMinTracks, requestedMmsis);
-                List<CompactVesselTrack> converted = vesselTrackToCompactConverter.convert(filtered);
+                List<CompactVesselTrack> converted = vesselTrackToCompactConverter.convert(fiveMinTracks);
                 allTracks.addAll(converted);
                 int totalPts = converted.stream().mapToInt(CompactVesselTrack::getPointCount).sum();
-                log.info("[CACHE-MONITOR] queryWithCache L1 HIT [{}, {}): cacheVessels={}, filteredVessels={}, compactTracks={}, points={}",
-                        fr.getStart(), fr.getEnd(), fiveMinTracks.size(), filtered.size(), converted.size(), totalPts);
+                log.info("[CACHE-MONITOR] queryWithCache L1 HIT [{}, {}): resultVessels={}, compactTracks={}, points={}",
+                        fr.getStart(), fr.getEnd(), fiveMinTracks.size(), converted.size(), totalPts);

                 // DB fallback for MMSI missing from the cache
                 Set<String> missingMmsis = new HashSet<>(requestedMmsis);
-                missingMmsis.removeAll(filtered.keySet());
+                missingMmsis.removeAll(fiveMinTracks.keySet());
                 if (!missingMmsis.isEmpty()) {
                     VesselTracksRequest fallbackReq = VesselTracksRequest.builder()
                             .startTime(fr.getStart()).endTime(fr.getEnd())
@@ -444,7 +493,7 @@ public class GisServiceV2 {
                     List<CompactVesselTrack> dbResult = gisService.getVesselTracks(fallbackReq);
                     allTracks.addAll(dbResult);
                     log.info("[CACHE-MONITOR] queryWithCache L1 PARTIAL → DB fallback: cacheHit={}, cacheMiss={}, dbTracks={}",
-                            filtered.size(), missingMmsis.size(), dbResult.size());
+                            fiveMinTracks.size(), missingMmsis.size(), dbResult.size());
                 }
             } else {
                 VesselTracksRequest fiveMinReq = VesselTracksRequest.builder()
@@ -459,7 +508,6 @@ public class GisServiceV2 {

         // 4. Merge tracks of the same vessel (cache + DB results)
         List<CompactVesselTrack> merged = mergeTracksByVessel(allTracks);
-        allTracks.clear();
         return merged;
     }


@@ -1,10 +1,15 @@
 package gc.mda.signal_batch.domain.gis.service;

+import gc.mda.signal_batch.batch.reader.ChnPrmShipProperties;
 import gc.mda.signal_batch.domain.gis.dto.VesselContactRequest;
 import gc.mda.signal_batch.domain.gis.dto.VesselContactResponse;
 import gc.mda.signal_batch.domain.gis.dto.VesselContactResponse.*;
 import gc.mda.signal_batch.domain.vessel.dto.CompactVesselTrack;
+import gc.mda.signal_batch.global.exception.QueryTimeoutException;
+import gc.mda.signal_batch.global.util.TrackMemoryEstimator;
+import gc.mda.signal_batch.global.websocket.service.ActiveQueryManager;
 import gc.mda.signal_batch.global.websocket.service.DailyTrackCacheManager;
+import gc.mda.signal_batch.global.websocket.service.TrackMemoryBudgetManager;
 import lombok.RequiredArgsConstructor;
 import lombok.extern.slf4j.Slf4j;
 import org.locationtech.jts.geom.*;
@@ -24,6 +29,9 @@ public class VesselContactService {
     private final AreaSearchService areaSearchService;
     private final DailyTrackCacheManager cacheManager;
+    private final ActiveQueryManager activeQueryManager;
+    private final TrackMemoryBudgetManager memoryBudgetManager;
+    private final ChnPrmShipProperties chnPrmShipProperties;

     private static final GeometryFactory GEOMETRY_FACTORY = new GeometryFactory();
     private static final double EARTH_RADIUS_M = 6_371_000.0;
@@ -49,103 +57,133 @@ public class VesselContactService {
             return buildEmptyResponse(request, targetDates, startMs);
         }

-        Map<String, CompactVesselTrack> mergedTracks = areaSearchService.mergeMultipleDays(targetDates);
-        if (mergedTracks.isEmpty()) {
-            return buildEmptyResponse(request, targetDates, startMs);
-        }
-
-        // 3. Use the merged tracks directly (single collection source, no filtering needed)
-        Map<String, CompactVesselTrack> filtered = mergedTracks;
-
-        // 4. JTS Polygon + PreparedGeometry
-        VesselContactRequest.SearchPolygon poly = request.getPolygon();
-        Polygon jtsPolygon = areaSearchService.toJtsPolygon(poly.getCoordinates());
-        PreparedGeometry prepared = PreparedGeometryFactory.prepare(jtsPolygon);
-
-        // 5. STRtree candidate filtering + collect points inside the polygon
-        STRtree spatialIndex = areaSearchService.buildSpatialIndex(filtered);
-        Envelope mbr = jtsPolygon.getEnvelopeInternal();
-        @SuppressWarnings("unchecked")
-        List<String> candidates = spatialIndex.query(mbr);
-
-        long minDurationSec = request.getMinContactDurationMinutes() * 60L;
-        double maxDistanceMeters = request.getMaxContactDistanceMeters();
-
-        Map<String, List<InsidePosition>> insidePositions = new HashMap<>();
-        for (String vesselId : candidates) {
-            CompactVesselTrack track = filtered.get(vesselId);
-            if (track == null || track.getGeometry() == null) continue;
-            List<InsidePosition> inside = collectInsidePositions(track, prepared);
-            if (!inside.isEmpty()) {
-                insidePositions.put(vesselId, inside);
-            }
-        }
-
-        int totalVesselsInPolygon = insidePositions.size();
-        log.info("Vessel contact: filtered={}, insidePolygon={}, dates={}",
-                filtered.size(), totalVesselsInPolygon, targetDates.size());
-
-        // 6. Time-range overlap pre-filter + pairwise contact detection
-        List<String> vesselIds = new ArrayList<>(insidePositions.keySet());
-        List<VesselContactPair> contactPairs = new ArrayList<>();
-        Set<String> involvedVessels = new HashSet<>();
-
-        for (int i = 0; i < vesselIds.size(); i++) {
-            String idA = vesselIds.get(i);
-            List<InsidePosition> posA = insidePositions.get(idA);
-            long minTsA = posA.get(0).timestamp;
-            long maxTsA = posA.get(posA.size() - 1).timestamp;
-
-            for (int j = i + 1; j < vesselIds.size(); j++) {
-                String idB = vesselIds.get(j);
-                List<InsidePosition> posB = insidePositions.get(idB);
-                long minTsB = posB.get(0).timestamp;
-                long maxTsB = posB.get(posB.size() - 1).timestamp;
-
-                // Time-overlap pre-filter (honors minContactDuration)
-                long overlap = Math.min(maxTsA, maxTsB) - Math.max(minTsA, minTsB);
-                if (overlap < minDurationSec) continue;
-
-                // Two-pointer contact detection
-                List<VesselContactPair> pairs = detectContacts(
-                        idA, posA, idB, posB,
-                        filtered.get(idA), filtered.get(idB),
-                        minDurationSec, maxDistanceMeters);
-                if (!pairs.isEmpty()) {
-                    contactPairs.addAll(pairs);
-                    involvedVessels.add(idA);
-                    involvedVessels.add(idB);
-                }
-            }
-        }
-
-        // 7. Collect tracks of the involved vessels
-        List<CompactVesselTrack> resultTracks = involvedVessels.stream()
-                .map(mergedTracks::get)
-                .filter(Objects::nonNull)
-                .collect(Collectors.toList());
-
-        long elapsedMs = System.currentTimeMillis() - startMs;
-        log.info("Vessel contact completed: pairs={}, vessels={}, elapsed={}ms",
-                contactPairs.size(), involvedVessels.size(), elapsedMs);
-
-        return VesselContactResponse.builder()
-                .contacts(contactPairs)
-                .tracks(resultTracks)
-                .summary(VesselContactSummary.builder()
-                        .totalContactPairs(contactPairs.size())
-                        .totalVesselsInvolved(involvedVessels.size())
-                        .totalVesselsInPolygon(totalVesselsInPolygon)
-                        .processingTimeMs(elapsedMs)
-                        .polygonId(poly.getId())
-                        .cachedDates(targetDates.stream()
-                                .map(LocalDate::toString)
-                                .collect(Collectors.toList()))
-                        .build())
-                .build();
+        // 3. Concurrency and memory management
+        String queryId = "contact-search-" + Long.toHexString(System.nanoTime());
+        boolean slotAcquired = false, memoryReserved = false;
+        try {
+            if (!activeQueryManager.tryAcquireQuerySlotImmediate(queryId)) {
+                if (!activeQueryManager.tryAcquireQuerySlot(queryId)) {
+                    throw new QueryTimeoutException("서버 과부하: contact-search 슬롯 대기 타임아웃");
+                }
+            }
+            slotAcquired = true;
+
+            long estimatedBytes = TrackMemoryEstimator.estimateQueryBytes(targetDates.size(), 2000);
+            memoryBudgetManager.reserveQueryMemory(queryId, estimatedBytes, 30_000L);
+            memoryReserved = true;
+
+            Map<String, CompactVesselTrack> mergedTracks = areaSearchService.mergeMultipleDays(targetDates);
+            if (mergedTracks.isEmpty()) {
+                return buildEmptyResponse(request, targetDates, startMs);
+            }
+
+            // 3-1. ChnPrmShip filtering
+            if (request.isChnPrmShipOnly()) {
+                int totalBefore = mergedTracks.size();
+                Set<String> chnPrmMmsiSet = chnPrmShipProperties.getMmsiSet();
+                mergedTracks.entrySet().removeIf(e -> !chnPrmMmsiSet.contains(e.getKey()));
+                log.debug("ChnPrmShip 필터 적용: {} → {} 선박", totalBefore, mergedTracks.size());
+                if (mergedTracks.isEmpty()) {
+                    return buildEmptyResponse(request, targetDates, startMs);
+                }
+            }
+
+            // 4. JTS Polygon + PreparedGeometry
+            VesselContactRequest.SearchPolygon poly = request.getPolygon();
+            Polygon jtsPolygon = areaSearchService.toJtsPolygon(poly.getCoordinates());
+            PreparedGeometry prepared = PreparedGeometryFactory.prepare(jtsPolygon);
+
+            // 5. STRtree candidate filtering + collect points inside the polygon
+            STRtree spatialIndex = areaSearchService.buildSpatialIndex(mergedTracks);
+            Envelope mbr = jtsPolygon.getEnvelopeInternal();
+            @SuppressWarnings("unchecked")
+            List<String> candidates = spatialIndex.query(mbr);
+
+            long minDurationSec = request.getMinContactDurationMinutes() * 60L;
+            double maxDistanceMeters = request.getMaxContactDistanceMeters();
+
+            Map<String, List<InsidePosition>> insidePositions = new HashMap<>();
+            for (String vesselId : candidates) {
+                CompactVesselTrack track = mergedTracks.get(vesselId);
+                if (track == null || track.getGeometry() == null) continue;
+                List<InsidePosition> inside = collectInsidePositions(track, prepared);
+                if (!inside.isEmpty()) {
+                    insidePositions.put(vesselId, inside);
+                }
+            }
+
+            int totalVesselsInPolygon = insidePositions.size();
+            log.info("Vessel contact: merged={}, insidePolygon={}, chnPrmOnly={}, dates={}",
+                    mergedTracks.size(), totalVesselsInPolygon, request.isChnPrmShipOnly(), targetDates.size());
+
+            // 6. Time-range overlap pre-filter + pairwise contact detection
+            List<String> vesselIds = new ArrayList<>(insidePositions.keySet());
+            List<VesselContactPair> contactPairs = new ArrayList<>();
+            Set<String> involvedVessels = new HashSet<>();
+
+            for (int i = 0; i < vesselIds.size(); i++) {
+                String idA = vesselIds.get(i);
+                List<InsidePosition> posA = insidePositions.get(idA);
+                long minTsA = posA.get(0).timestamp;
+                long maxTsA = posA.get(posA.size() - 1).timestamp;
+
+                for (int j = i + 1; j < vesselIds.size(); j++) {
+                    String idB = vesselIds.get(j);
+                    List<InsidePosition> posB = insidePositions.get(idB);
+                    long minTsB = posB.get(0).timestamp;
+                    long maxTsB = posB.get(posB.size() - 1).timestamp;
+
+                    // Time-overlap pre-filter (honors minContactDuration)
+                    long overlap = Math.min(maxTsA, maxTsB) - Math.max(minTsA, minTsB);
+                    if (overlap < minDurationSec) continue;
+
+                    // Two-pointer contact detection
+                    List<VesselContactPair> pairs = detectContacts(
+                            idA, posA, idB, posB,
+                            mergedTracks.get(idA), mergedTracks.get(idB),
+                            minDurationSec, maxDistanceMeters);
+                    if (!pairs.isEmpty()) {
+                        contactPairs.addAll(pairs);
+                        involvedVessels.add(idA);
+                        involvedVessels.add(idB);
+                    }
+                }
+            }
+
+            // 7. Collect tracks of the involved vessels
+            List<CompactVesselTrack> resultTracks = involvedVessels.stream()
+                    .map(mergedTracks::get)
+                    .filter(Objects::nonNull)
+                    .collect(Collectors.toList());
+
+            long elapsedMs = System.currentTimeMillis() - startMs;
+            log.info("Vessel contact completed: pairs={}, vessels={}, elapsed={}ms",
+                    contactPairs.size(), involvedVessels.size(), elapsedMs);
+
+            return VesselContactResponse.builder()
+                    .contacts(contactPairs)
+                    .tracks(resultTracks)
+                    .summary(VesselContactSummary.builder()
+                            .totalContactPairs(contactPairs.size())
+                            .totalVesselsInvolved(involvedVessels.size())
+                            .totalVesselsInPolygon(totalVesselsInPolygon)
+                            .processingTimeMs(elapsedMs)
+                            .polygonId(poly.getId())
+                            .cachedDates(targetDates.stream()
+                                    .map(LocalDate::toString)
+                                    .collect(Collectors.toList()))
+                            .build())
+                    .build();
+        } catch (InterruptedException e) {
+            Thread.currentThread().interrupt();
+            throw new QueryTimeoutException("contact-search 슬롯 대기 중 인터럽트");
+        } finally {
+            if (memoryReserved) memoryBudgetManager.releaseQueryMemory(queryId);
+            if (slotAcquired) activeQueryManager.releaseQuerySlot(queryId);
+        }
     }
    // Input validation
@@ -173,10 +211,13 @@ public class VesselContactService {
         List<double[]> geometry = track.getGeometry();
         List<String> timestamps = track.getTimestamps();
         List<InsidePosition> inside = new ArrayList<>();
+        Coordinate reusable = new Coordinate();

         for (int i = 0; i < geometry.size(); i++) {
             double[] coord = geometry.get(i);
-            Point point = GEOMETRY_FACTORY.createPoint(new Coordinate(coord[0], coord[1]));
+            reusable.x = coord[0];
+            reusable.y = coord[1];
+            Point point = GEOMETRY_FACTORY.createPoint(reusable);
             if (prepared.contains(point)) {
                 long ts = parseTimestamp(timestamps, i);
                 inside.add(new InsidePosition(ts, coord[0], coord[1]));
@@ -232,7 +273,7 @@ public class VesselContactService {
             long diff = Math.abs(a.timestamp - b.timestamp);
             if (diff <= SYNC_TOLERANCE_SEC) {
-                double dist = haversineMeters(a.lat, a.lon, b.lat, b.lon);
+                double dist = equirectangularMeters(a.lat, a.lon, b.lat, b.lon);
                 long ts = Math.min(a.timestamp, b.timestamp) + diff / 2; // midpoint time
                 matched.add(new MatchedPoint(ts, dist, a, b));
                 pA++;
@@ -278,13 +319,19 @@ public class VesselContactService {
         long contactEnd = segment.get(segment.size() - 1).timestamp;
         long durationMin = (contactEnd - contactStart) / 60;

-        DoubleSummaryStatistics distStats = segment.stream()
-                .mapToDouble(p -> p.distanceMeters)
-                .summaryStatistics();
-
-        // Contact center point
-        double centerLon = segment.stream().mapToDouble(p -> (p.posA.lon + p.posB.lon) / 2).average().orElse(0);
-        double centerLat = segment.stream().mapToDouble(p -> (p.posA.lat + p.posB.lat) / 2).average().orElse(0);
+        // Single loop computes distance stats and the center point together
+        double minDist = Double.MAX_VALUE, maxDist = 0, sumDist = 0;
+        double sumCenterLon = 0, sumCenterLat = 0;
+        for (MatchedPoint p : segment) {
+            if (p.distanceMeters < minDist) minDist = p.distanceMeters;
+            if (p.distanceMeters > maxDist) maxDist = p.distanceMeters;
+            sumDist += p.distanceMeters;
+            sumCenterLon += (p.posA.lon + p.posB.lon) / 2;
+            sumCenterLat += (p.posA.lat + p.posB.lat) / 2;
+        }
+        double avgDist = sumDist / segment.size();
+        double centerLon = sumCenterLon / segment.size();
+        double centerLat = sumCenterLat / segment.size();

         // Estimated speed from each vessel's inside points over the contact window
         double speedA = estimateAvgSpeed(insidePosA, contactStart, contactEnd);
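The single-pass replacement for three stream traversals can be checked in isolation. The `Matched` record below is an illustrative stand-in for the service's `MatchedPoint`:

```java
import java.util.List;

// Illustrative stand-in for MatchedPoint: a matched distance plus a midpoint coordinate
record Matched(double distanceMeters, double lon, double lat) {}

class SegmentStatsSketch {
    // One pass over the segment yields {min, avg, max, centerLon, centerLat},
    // replacing separate DoubleSummaryStatistics and two averaging streams
    static double[] stats(List<Matched> segment) {
        double min = Double.MAX_VALUE, max = 0, sum = 0, sumLon = 0, sumLat = 0;
        for (Matched p : segment) {
            if (p.distanceMeters() < min) min = p.distanceMeters();
            if (p.distanceMeters() > max) max = p.distanceMeters();
            sum += p.distanceMeters();
            sumLon += p.lon();
            sumLat += p.lat();
        }
        int n = segment.size();
        return new double[]{min, sum / n, max, sumLon / n, sumLat / n};
    }
}
```

The results are identical to the stream version; the win is one traversal instead of three, which matters when this runs per candidate vessel pair.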
@@ -299,9 +346,9 @@ public class VesselContactService {
                 .contactStartTimestamp(contactStart)
                 .contactEndTimestamp(contactEnd)
                 .contactDurationMinutes(durationMin)
-                .minDistanceMeters(Math.round(distStats.getMin() * 10.0) / 10.0)
-                .avgDistanceMeters(Math.round(distStats.getAverage() * 10.0) / 10.0)
-                .maxDistanceMeters(Math.round(distStats.getMax() * 10.0) / 10.0)
+                .minDistanceMeters(Math.round(minDist * 10.0) / 10.0)
+                .avgDistanceMeters(Math.round(avgDist * 10.0) / 10.0)
+                .maxDistanceMeters(Math.round(maxDist * 10.0) / 10.0)
                 .contactCenterPoint(new double[]{
                         Math.round(centerLon * 1_000_000.0) / 1_000_000.0,
                         Math.round(centerLat * 1_000_000.0) / 1_000_000.0})
@@ -360,27 +407,15 @@ public class VesselContactService {
      * Decide whether the contact window falls within 22:00 to 06:00 KST.
      */
     private boolean isNightTimeContact(long contactStartSec, long contactEndSec) {
-        Instant startInstant = Instant.ofEpochSecond(contactStartSec);
-        Instant endInstant = Instant.ofEpochSecond(contactEndSec);
-        ZonedDateTime startKst = startInstant.atZone(KST);
-        ZonedDateTime endKst = endInstant.atZone(KST);
-
-        // Check night-window overlap for every date covered by the contact range
-        LocalDate day = startKst.toLocalDate();
-        LocalDate lastDay = endKst.toLocalDate().plusDays(1);
-
-        while (!day.isAfter(lastDay)) {
-            // Night of this date: previous day 22:00 to this day 06:00
-            ZonedDateTime nightStart = day.atTime(LocalTime.of(22, 0)).atZone(KST).minusDays(1);
-            ZonedDateTime nightEnd = day.atTime(LocalTime.of(6, 0)).atZone(KST);
-            // This day 22:00 to next day 06:00
-            ZonedDateTime nightStart2 = day.atTime(LocalTime.of(22, 0)).atZone(KST);
-            ZonedDateTime nightEnd2 = day.plusDays(1).atTime(LocalTime.of(6, 0)).atZone(KST);
-            if (isOverlapping(startKst, endKst, nightStart, nightEnd)
-                    || isOverlapping(startKst, endKst, nightStart2, nightEnd2)) {
+        ZonedDateTime startKst = Instant.ofEpochSecond(contactStartSec).atZone(KST);
+        ZonedDateTime endKst = Instant.ofEpochSecond(contactEndSec).atZone(KST);
+
+        // Check each date's night window (22:00 to 06:00 the next day) against the contact window;
+        // start one day early so the previous night's spill past midnight is still covered
+        LocalDate day = startKst.toLocalDate().minusDays(1);
+        while (!day.isAfter(endKst.toLocalDate())) {
+            ZonedDateTime nightStart = day.atTime(22, 0).atZone(KST);
+            ZonedDateTime nightEnd = day.plusDays(1).atTime(6, 0).atZone(KST);
+            if (startKst.isBefore(nightEnd) && endKst.isAfter(nightStart)) {
                 return true;
             }
             day = day.plusDays(1);
@@ -388,11 +423,6 @@ public class VesselContactService {
         return false;
     }

-    private boolean isOverlapping(ZonedDateTime s1, ZonedDateTime e1,
-            ZonedDateTime s2, ZonedDateTime e2) {
-        return s1.isBefore(e2) && s2.isBefore(e1);
-    }
-
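The simplified night-window overlap test can be exercised standalone. The sketch below assumes the night window of date D spans D 22:00 through D+1 06:00 KST, and starts the loop one day early so a window that spills past midnight still covers early-morning contacts:

```java
import java.time.*;

// Standalone sketch of the KST night-window (22:00 to 06:00 next day) overlap check
class NightWindowSketch {
    static final ZoneId KST = ZoneId.of("Asia/Seoul");

    static boolean isNight(long startSec, long endSec) {
        ZonedDateTime start = Instant.ofEpochSecond(startSec).atZone(KST);
        ZonedDateTime end = Instant.ofEpochSecond(endSec).atZone(KST);
        // Begin at the previous date: its window [22:00, next-day 06:00)
        // is the one that covers contacts in the 00:00-06:00 range
        LocalDate day = start.toLocalDate().minusDays(1);
        while (!day.isAfter(end.toLocalDate())) {
            ZonedDateTime nightStart = day.atTime(22, 0).atZone(KST);
            ZonedDateTime nightEnd = day.plusDays(1).atTime(6, 0).atZone(KST);
            // Standard interval-overlap test
            if (start.isBefore(nightEnd) && end.isAfter(nightStart)) {
                return true;
            }
            day = day.plusDays(1);
        }
        return false;
    }
}
```

Collapsing the two windows per date into one window per date works because iterating dates already enumerates every night boundary once; the only care needed is the extra leading day.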
    // Estimated speed calculation

    /**
@ -424,16 +454,16 @@ public class VesselContactService {
return totalHours > 0 ? totalDistNm / totalHours : 0.0; return totalHours > 0 ? totalDistNm / totalHours : 0.0;
} }
-    // Haversine distance calculation
-    private double haversineMeters(double lat1, double lon1, double lat2, double lon2) {
+    // Distance calculation
+    /**
+     * Equirectangular approximation for contact-distance checks (error < 0.1% within 10km).
+     * Roughly 2x faster than haversine (one Math.cos + one Math.sqrt).
+     */
+    private double equirectangularMeters(double lat1, double lon1, double lat2, double lon2) {
         double dLat = Math.toRadians(lat2 - lat1);
-        double dLon = Math.toRadians(lon2 - lon1);
-        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
-                + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
-                * Math.sin(dLon / 2) * Math.sin(dLon / 2);
-        double c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
-        return EARTH_RADIUS_M * c;
+        double dLon = Math.toRadians(lon2 - lon1) * Math.cos(Math.toRadians((lat1 + lat2) / 2));
+        return EARTH_RADIUS_M * Math.sqrt(dLat * dLat + dLon * dLon);
     }
     private double haversineNm(double lat1, double lon1, double lat2, double lon2) {
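The swap from haversine to the equirectangular projection is safe here because the method is only used for short-range contact checks. A standalone sketch comparing the two formulas at roughly 10 km near 35°N (class name and coordinates are illustrative):

```java
public class DistanceApprox {
    static final double EARTH_RADIUS_M = 6_371_000.0;

    // Haversine: great-circle distance on a sphere
    static double haversineMeters(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return EARTH_RADIUS_M * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
    }

    // Equirectangular: flat projection with longitude scaled by cos(mean latitude)
    static double equirectangularMeters(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1) * Math.cos(Math.toRadians((lat1 + lat2) / 2));
        return EARTH_RADIUS_M * Math.sqrt(dLat * dLat + dLon * dLon);
    }

    public static void main(String[] args) {
        // Two points roughly 10 km apart near 35N (Korea Strait area)
        double h = haversineMeters(35.0, 129.0, 35.06, 129.06);
        double e = equirectangularMeters(35.0, 129.0, 35.06, 129.06);
        double relErr = Math.abs(h - e) / h;
        System.out.printf("haversine=%.1fm equirect=%.1fm relErr=%.8f%n", h, e, relErr);
        if (relErr >= 0.001) throw new AssertionError("error exceeds 0.1%");
    }
}
```

At contact-detection ranges the relative error stays far below the quoted 0.1% bound; the approximation only degrades over long distances or near the poles, neither of which applies to this code path.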

View file

@@ -72,10 +72,10 @@ public class SequentialPassageController {
                 .collect(Collectors.toList());
             results = trackingService.findSequentialGridPassages(
-                    haeguNumbers, request.getStartTime(), request.getEndTime());
+                    haeguNumbers, request.getStartTime(), request.getEndTime(), request.isChnPrmShipOnly());
         } else {
             results = trackingService.findSequentialAreaPassages(
-                    request.getZoneIds(), request.getStartTime(), request.getEndTime());
+                    request.getZoneIds(), request.getStartTime(), request.getEndTime(), request.isChnPrmShipOnly());
         }

         // Build the response

View file

@@ -57,6 +57,10 @@ public class SequentialPassageRequest {
     @Schema(description = "순차 통과 여부 (true: 순서대로 통과, false: 모든 구역 통과)", example = "true", defaultValue = "true")
     @Builder.Default
     private Boolean sequentialOnly = true;

+    @Schema(description = "true 시 중국허가선박만 대상으로 순차 통과 조회", example = "false")
+    @Builder.Default
+    private boolean chnPrmShipOnly = false;

     public enum PassageType {
         GRID, AREA

View file

@@ -1,5 +1,6 @@
 package gc.mda.signal_batch.domain.passage.service;

+import gc.mda.signal_batch.batch.reader.ChnPrmShipProperties;
 import lombok.extern.slf4j.Slf4j;
 import org.springframework.beans.factory.annotation.Qualifier;
 import org.springframework.jdbc.core.JdbcTemplate;
@@ -8,8 +9,10 @@ import org.springframework.stereotype.Service;
 import javax.sql.DataSource;
 import java.sql.Timestamp;
 import java.time.LocalDateTime;
+import java.util.ArrayList;
 import java.util.List;
 import java.util.Map;
+import java.util.Set;

 /**
  * Optimized service for finding vessels that pass zones sequentially
@@ -22,120 +25,140 @@ import java.util.Map;
 public class SequentialAreaTrackingService {

     private final DataSource queryDataSource;
+    private final ChnPrmShipProperties chnPrmShipProperties;

-    public SequentialAreaTrackingService(@Qualifier("queryDataSource") DataSource queryDataSource) {
+    public SequentialAreaTrackingService(@Qualifier("queryDataSource") DataSource queryDataSource,
+                                         ChnPrmShipProperties chnPrmShipProperties) {
         this.queryDataSource = queryDataSource;
+        this.chnPrmShipProperties = chnPrmShipProperties;
     }

     /**
      * Find vessels that passed the specified zones in sequence (Grid)
+     * Dynamically generates the N-zone SQL JOINs (2 to 10 zones)
      */
     public List<Map<String, Object>> findSequentialGridPassages(
             List<Integer> haeguNumbers,
             LocalDateTime startTime,
-            LocalDateTime endTime) {
+            LocalDateTime endTime,
+            boolean chnPrmShipOnly) {

+        int n = haeguNumbers.size();
+        if (n < 2 || n > 10) {
+            throw new IllegalArgumentException("구역은 2~10개까지 지정 가능합니다: " + n);
+        }

         JdbcTemplate jdbcTemplate = new JdbcTemplate(queryDataSource);

-        // MATERIALIZED CTE to pin the intermediate result
-        String sql = """
-            WITH vessel_passages AS (
-                SELECT DISTINCT
-                    mmsi,
-                    haegu_no,
-                    FIRST_VALUE(time_bucket) OVER (
-                        PARTITION BY mmsi, haegu_no
-                        ORDER BY time_bucket
-                    ) as entry_time,
-                    LAST_VALUE(time_bucket) OVER (
-                        PARTITION BY mmsi, haegu_no
-                        ORDER BY time_bucket
-                        ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
-                    ) as exit_time
-                FROM signal.t_grid_vessel_tracks
-                WHERE time_bucket BETWEEN ? AND ?
-                  AND haegu_no = ANY(ARRAY[?]::integer[])
-            )
-            SELECT
-                v1.mmsi,
-                v1.entry_time as haegu1_entry,
-                v1.exit_time as haegu1_exit,
-                v2.entry_time as haegu2_entry,
-                v2.exit_time as haegu2_exit,
-                v3.entry_time as haegu3_entry,
-                v3.exit_time as haegu3_exit
-            FROM vessel_passages v1
-            JOIN vessel_passages v2 ON v1.mmsi = v2.mmsi
-                AND v2.haegu_no = ? AND v2.entry_time > v1.exit_time
-            JOIN vessel_passages v3 ON v2.mmsi = v3.mmsi
-                AND v3.haegu_no = ? AND v3.entry_time > v2.exit_time
-            WHERE v1.haegu_no = ?
-            ORDER BY v1.entry_time
-            """;
-
-        return jdbcTemplate.queryForList(sql,
-                Timestamp.valueOf(startTime),
-                Timestamp.valueOf(endTime),
-                haeguNumbers.toArray(Integer[]::new),
-                haeguNumbers.get(1),
-                haeguNumbers.get(2),
-                haeguNumbers.get(0)
-        );
+        StringBuilder sql = new StringBuilder();
+        sql.append("WITH vessel_passages AS (\n");
+        sql.append("    SELECT DISTINCT mmsi, haegu_no,\n");
+        sql.append("        FIRST_VALUE(time_bucket) OVER (PARTITION BY mmsi, haegu_no ORDER BY time_bucket) as entry_time,\n");
+        sql.append("        LAST_VALUE(time_bucket) OVER (PARTITION BY mmsi, haegu_no ORDER BY time_bucket ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as exit_time\n");
+        sql.append("    FROM signal.t_grid_vessel_tracks\n");
+        sql.append("    WHERE time_bucket BETWEEN ? AND ?\n");
+        sql.append("      AND haegu_no = ANY(ARRAY[?]::integer[])\n");
+        if (chnPrmShipOnly) {
+            sql.append("      AND mmsi = ANY(ARRAY[?]::varchar[])\n");
+        }
+        sql.append(")\n");
+
+        // Build SELECT columns dynamically
+        sql.append("SELECT v1.mmsi");
+        for (int i = 1; i <= n; i++) {
+            sql.append(String.format(", v%d.entry_time as haegu%d_entry, v%d.exit_time as haegu%d_exit", i, i, i, i));
+        }
+        sql.append("\nFROM vessel_passages v1\n");
+
+        // Build JOINs dynamically (v2..vN)
+        for (int i = 2; i <= n; i++) {
+            sql.append(String.format("JOIN vessel_passages v%d ON v%d.mmsi = v1.mmsi AND v%d.haegu_no = ? AND v%d.entry_time > v%d.exit_time\n",
+                    i, i, i, i, i - 1));
+        }
+        sql.append("WHERE v1.haegu_no = ?\n");
+        sql.append("ORDER BY v1.entry_time");

+        // Assemble parameters
+        List<Object> params = new ArrayList<>();
+        params.add(Timestamp.valueOf(startTime));
+        params.add(Timestamp.valueOf(endTime));
+        params.add(haeguNumbers.toArray(Integer[]::new));
+        if (chnPrmShipOnly) {
+            Set<String> mmsiSet = chnPrmShipProperties.getMmsiSet();
+            params.add(mmsiSet.toArray(String[]::new));
+        }
+        // haegu_no parameters for v2..vN
+        for (int i = 1; i < n; i++) {
+            params.add(haeguNumbers.get(i));
+        }
+        // v1's haegu_no WHERE condition
+        params.add(haeguNumbers.get(0));
+
+        return jdbcTemplate.queryForList(sql.toString(), params.toArray());
     }
     /**
      * Find vessels that passed the specified zones in sequence (Area)
+     * Dynamically generates the N-zone SQL JOINs (2 to 10 zones)
      */
     public List<Map<String, Object>> findSequentialAreaPassages(
             List<String> areaIds,
             LocalDateTime startTime,
-            LocalDateTime endTime) {
+            LocalDateTime endTime,
+            boolean chnPrmShipOnly) {

+        int n = areaIds.size();
+        if (n < 2 || n > 10) {
+            throw new IllegalArgumentException("구역은 2~10개까지 지정 가능합니다: " + n);
+        }

         JdbcTemplate jdbcTemplate = new JdbcTemplate(queryDataSource);

-        String sql = """
-            WITH area_passages AS (
-                SELECT DISTINCT
-                    mmsi,
-                    area_id,
-                    FIRST_VALUE(time_bucket) OVER (
-                        PARTITION BY mmsi, area_id
-                        ORDER BY time_bucket
-                    ) as entry_time,
-                    LAST_VALUE(time_bucket) OVER (
-                        PARTITION BY mmsi, area_id
-                        ORDER BY time_bucket
-                        ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
-                    ) as exit_time
-                FROM signal.t_area_vessel_tracks
-                WHERE time_bucket BETWEEN ? AND ?
-                  AND area_id = ANY(ARRAY[?]::varchar[])
-            )
-            SELECT
-                a1.mmsi,
-                a1.entry_time as area1_entry,
-                a1.exit_time as area1_exit,
-                a2.entry_time as area2_entry,
-                a2.exit_time as area2_exit,
-                a3.entry_time as area3_entry,
-                a3.exit_time as area3_exit
-            FROM area_passages a1
-            JOIN area_passages a2 ON a1.mmsi = a2.mmsi
-                AND a2.area_id = ? AND a2.entry_time > a1.exit_time
-            JOIN area_passages a3 ON a2.mmsi = a3.mmsi
-                AND a3.area_id = ? AND a3.entry_time > a2.exit_time
-            WHERE a1.area_id = ?
-            ORDER BY a1.entry_time
-            """;
-
-        return jdbcTemplate.queryForList(sql,
-                Timestamp.valueOf(startTime),
-                Timestamp.valueOf(endTime),
-                areaIds.toArray(String[]::new),
-                areaIds.get(1),
-                areaIds.get(2),
-                areaIds.get(0)
-        );
+        StringBuilder sql = new StringBuilder();
+        sql.append("WITH area_passages AS (\n");
+        sql.append("    SELECT DISTINCT mmsi, area_id,\n");
+        sql.append("        FIRST_VALUE(time_bucket) OVER (PARTITION BY mmsi, area_id ORDER BY time_bucket) as entry_time,\n");
+        sql.append("        LAST_VALUE(time_bucket) OVER (PARTITION BY mmsi, area_id ORDER BY time_bucket ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as exit_time\n");
+        sql.append("    FROM signal.t_area_vessel_tracks\n");
+        sql.append("    WHERE time_bucket BETWEEN ? AND ?\n");
+        sql.append("      AND area_id = ANY(ARRAY[?]::varchar[])\n");
+        if (chnPrmShipOnly) {
+            sql.append("      AND mmsi = ANY(ARRAY[?]::varchar[])\n");
+        }
+        sql.append(")\n");
+
+        // Build SELECT columns dynamically
+        sql.append("SELECT a1.mmsi");
+        for (int i = 1; i <= n; i++) {
+            sql.append(String.format(", a%d.entry_time as area%d_entry, a%d.exit_time as area%d_exit", i, i, i, i));
+        }
+        sql.append("\nFROM area_passages a1\n");
+
+        // Build JOINs dynamically (a2..aN)
+        for (int i = 2; i <= n; i++) {
+            sql.append(String.format("JOIN area_passages a%d ON a%d.mmsi = a1.mmsi AND a%d.area_id = ? AND a%d.entry_time > a%d.exit_time\n",
+                    i, i, i, i, i - 1));
+        }
+        sql.append("WHERE a1.area_id = ?\n");
+        sql.append("ORDER BY a1.entry_time");
+
+        // Assemble parameters
+        List<Object> params = new ArrayList<>();
+        params.add(Timestamp.valueOf(startTime));
+        params.add(Timestamp.valueOf(endTime));
+        params.add(areaIds.toArray(String[]::new));
+        if (chnPrmShipOnly) {
+            Set<String> mmsiSet = chnPrmShipProperties.getMmsiSet();
+            params.add(mmsiSet.toArray(String[]::new));
+        }
+        // area_id parameters for a2..aN
+        for (int i = 1; i < n; i++) {
+            params.add(areaIds.get(i));
+        }
+        // a1's area_id WHERE condition
+        params.add(areaIds.get(0));
+
+        return jdbcTemplate.queryForList(sql.toString(), params.toArray());
     }
     /**

View file

@@ -0,0 +1,61 @@
package gc.mda.signal_batch.domain.vessel.dto;
import io.swagger.v3.oas.annotations.media.Schema;
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Getter;
import lombok.NoArgsConstructor;
import java.util.List;
/**
 * Request for detailed recent vessel positions
 *
 * Spatial filter usage:
 * - Polygon/rectangle: pass a closed coordinate ring in coordinates
 * - Circle: pass center + radiusNm (the server converts it to a 64-point polygon)
 * - Full query: leave both coordinates and center null
 */
@Getter
@Builder
@NoArgsConstructor
@AllArgsConstructor
@Schema(description = "최근 선박 위치 상세 조회 요청 (공간 필터 지원)")
public class RecentPositionDetailRequest {
@Schema(description = "조회 시간 범위 (분 단위, 1~1440)", example = "5")
@Builder.Default
private int minutes = 5;
@Schema(description = "폴리곤/사각형 좌표 배열 [[lon,lat],...] — 첫점과 끝점 동일",
example = "[[125,33],[130,33],[130,37],[125,37],[125,33]]")
private List<double[]> coordinates;
@Schema(description = "원 중심 좌표 [lon, lat]", example = "[129, 35]")
private double[] center;
@Schema(description = "원 반경 (해리, NM)", example = "50")
private Double radiusNm;
    /**
     * Whether a spatial filter was specified
     */
public boolean hasSpatialFilter() {
return (coordinates != null && !coordinates.isEmpty())
|| (center != null && radiusNm != null);
}
    /**
     * Whether this is a circle filter
     */
public boolean isCircleFilter() {
return center != null && center.length == 2 && radiusNm != null;
}
    /**
     * Whether this is a polygon/rectangle filter
     */
public boolean isPolygonFilter() {
return coordinates != null && coordinates.size() >= 4;
}
}

View file

@@ -0,0 +1,87 @@
package gc.mda.signal_batch.domain.vessel.dto;
import com.fasterxml.jackson.annotation.JsonFormat;
import com.fasterxml.jackson.annotation.JsonInclude;
import io.swagger.v3.oas.annotations.media.Schema;
import java.math.BigDecimal;
import java.time.LocalDateTime;
/**
 * Detailed recent vessel position response
 *
 * All fields of the existing RecentVesselPositionDto plus extended AIS details
 */
@JsonInclude(JsonInclude.Include.NON_NULL)
@Schema(description = "최근 선박 위치 상세 정보 (AIS 확장 필드 포함)")
public record RecentPositionDetailResponse(
// 기존 필드 (RecentVesselPositionDto 호환)
@Schema(description = "MMSI", example = "440113620")
String mmsi,
@Schema(description = "IMO 번호", example = "9141833")
Long imo,
@Schema(description = "경도 (WGS84)", example = "127.0638")
Double lon,
@Schema(description = "위도 (WGS84)", example = "34.227527")
Double lat,
@Schema(description = "대지속도 (knots)", example = "10.4")
BigDecimal sog,
@Schema(description = "대지침로 (도)", example = "215.3")
BigDecimal cog,
@Schema(description = "선박명", example = "SAM SUNG 2HO")
String shipNm,
@Schema(description = "선박 유형 (AIS ship type)", example = "74")
String shipTy,
@Schema(description = "선박 종류 코드", example = "000023")
String shipKindCode,
@Schema(description = "국가 코드 (MID 기반)", example = "KR")
String nationalCode,
@Schema(description = "최종 업데이트 시간", example = "2026-03-17 12:05:00")
@JsonFormat(pattern = "yyyy-MM-dd HH:mm:ss")
LocalDateTime lastUpdate,
@Schema(description = "선박 사진 썸네일 경로")
String shipImagePath,
@Schema(description = "선박 사진 수")
Integer shipImageCount,
// 확장 필드 (AIS 상세)
@Schema(description = "침로 (0~360도)", example = "215.0")
Double heading,
@Schema(description = "호출 부호", example = "HLBQ")
String callSign,
@Schema(description = "항해 상태", example = "Under way using engine")
String status,
@Schema(description = "목적지", example = "BUSAN")
String destination,
@Schema(description = "도착 예정시간", example = "2026-03-18 08:00:00")
@JsonFormat(pattern = "yyyy-MM-dd HH:mm:ss")
LocalDateTime eta,
@Schema(description = "흘수 (m)", example = "6.5")
Double draught,
@Schema(description = "선박 길이 (m)", example = "180")
Integer length,
@Schema(description = "선박 폭 (m)", example = "28")
Integer width
) {}

View file

@@ -0,0 +1,189 @@
package gc.mda.signal_batch.domain.vessel.service;
import gc.mda.signal_batch.batch.reader.AisTargetCacheManager;
import gc.mda.signal_batch.domain.ship.service.ShipImageService;
import gc.mda.signal_batch.domain.ship.service.ShipImageService.ShipImageSummary;
import gc.mda.signal_batch.domain.vessel.dto.RecentPositionDetailRequest;
import gc.mda.signal_batch.domain.vessel.dto.RecentPositionDetailResponse;
import gc.mda.signal_batch.domain.vessel.model.AisTargetEntity;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.locationtech.jts.geom.*;
import org.locationtech.jts.geom.prep.PreparedGeometry;
import org.locationtech.jts.geom.prep.PreparedGeometryFactory;
import org.springframework.stereotype.Service;
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.time.LocalDateTime;
import java.time.OffsetDateTime;
import java.time.ZoneId;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
/**
 * Service for detailed recent vessel position queries
 *
 * Reads directly from AisTargetCacheManager (~33K entries, refreshed every minute) and
 * returns detailed info with the time filter and spatial filter (polygon/circle) applied
 */
@Slf4j
@Service
@RequiredArgsConstructor
public class VesselPositionDetailService {
private final AisTargetCacheManager aisTargetCacheManager;
private final ShipImageService shipImageService;
private static final GeometryFactory GEOMETRY_FACTORY = new GeometryFactory();
private static final int CIRCLE_POINTS = 64;
private static final double EARTH_RADIUS_NM = 3440.065;
private static final ZoneId KST = ZoneId.of("Asia/Seoul");
    /**
     * Query detailed recent vessel positions
     */
    public List<RecentPositionDetailResponse> getRecentPositionsDetail(RecentPositionDetailRequest request) {
        long startMs = System.currentTimeMillis();

        Collection<AisTargetEntity> allEntities = aisTargetCacheManager.getAllValues();
        OffsetDateTime threshold = OffsetDateTime.now().minusMinutes(request.getMinutes());

        // Prepare the spatial filter (null means no spatial filtering)
        PreparedGeometry spatialFilter = buildSpatialFilter(request);

        // Single pass: time filter + spatial filter + conversion
        List<RecentPositionDetailResponse> results = new ArrayList<>(1000);
        Coordinate reusable = new Coordinate();

        for (AisTargetEntity entity : allEntities) {
            // Time filter
            if (entity.getMessageTimestamp() == null || entity.getMessageTimestamp().isBefore(threshold)) {
                continue;
            }
            // A position is required
            if (entity.getLat() == null || entity.getLon() == null) {
                continue;
            }
            // Spatial filter
            if (spatialFilter != null) {
                reusable.x = entity.getLon();
                reusable.y = entity.getLat();
                Point point = GEOMETRY_FACTORY.createPoint(reusable);
                if (!spatialFilter.contains(point)) {
                    continue;
                }
            }
            results.add(toResponse(entity));
        }

        log.debug("recent-positions-detail: {}건 / {}ms (전체: {}, minutes: {})",
                results.size(), System.currentTimeMillis() - startMs,
                allEntities.size(), request.getMinutes());
        return results;
    }
    /**
     * Build the spatial filter (PreparedGeometry) from the request
     */
private PreparedGeometry buildSpatialFilter(RecentPositionDetailRequest request) {
if (!request.hasSpatialFilter()) {
return null;
}
Polygon polygon;
if (request.isCircleFilter()) {
polygon = createCirclePolygon(
request.getCenter()[0], request.getCenter()[1],
request.getRadiusNm());
} else if (request.isPolygonFilter()) {
polygon = createPolygonFromCoordinates(request.getCoordinates());
} else {
return null;
}
return PreparedGeometryFactory.prepare(polygon);
}
    /**
     * Coordinate array to JTS Polygon
     */
private Polygon createPolygonFromCoordinates(List<double[]> coordinates) {
Coordinate[] coords = new Coordinate[coordinates.size()];
for (int i = 0; i < coordinates.size(); i++) {
double[] c = coordinates.get(i);
coords[i] = new Coordinate(c[0], c[1]);
}
return GEOMETRY_FACTORY.createPolygon(coords);
}
    /**
     * Circle to 64-point polygon conversion (equirectangular approximation)
     */
private Polygon createCirclePolygon(double centerLon, double centerLat, double radiusNm) {
double radiusRad = radiusNm / EARTH_RADIUS_NM;
double cosLat = Math.cos(Math.toRadians(centerLat));
Coordinate[] coords = new Coordinate[CIRCLE_POINTS + 1];
for (int i = 0; i < CIRCLE_POINTS; i++) {
double angle = 2.0 * Math.PI * i / CIRCLE_POINTS;
double dLat = Math.toDegrees(radiusRad * Math.cos(angle));
double dLon = Math.toDegrees(radiusRad * Math.sin(angle) / cosLat);
coords[i] = new Coordinate(centerLon + dLon, centerLat + dLat);
}
        coords[CIRCLE_POINTS] = coords[0]; // close the ring
return GEOMETRY_FACTORY.createPolygon(coords);
}
    /**
     * Convert AisTargetEntity to RecentPositionDetailResponse
     */
private RecentPositionDetailResponse toResponse(AisTargetEntity e) {
String mmsi = e.getMmsi();
String nationalCode = mmsi != null && mmsi.length() >= 3 ? mmsi.substring(0, 3) : "000";
String shipKindCode = e.getSignalKindCode() != null ? e.getSignalKindCode() : "000027";
Long imo = e.getImo() != null && e.getImo() > 0 ? e.getImo() : null;
// ShipImage enrichment
ShipImageSummary img = shipImageService.getImageSummary(imo);
return new RecentPositionDetailResponse(
mmsi,
imo,
round6(e.getLon()),
round6(e.getLat()),
scaleDecimal(e.getSog(), 1),
scaleDecimal(e.getCog(), 1),
e.getName(),
e.getVesselType(),
shipKindCode,
nationalCode,
toLocalDateTime(e.getMessageTimestamp()),
img != null ? img.thumbnailPath() : null,
img != null ? img.imageCount() : null,
                // extended fields
e.getHeading(),
e.getCallsign(),
e.getStatus(),
e.getDestination(),
toLocalDateTime(e.getEta()),
e.getDraught(),
e.getLength(),
e.getWidth()
);
}
private static Double round6(Double value) {
return value != null ? Math.round(value * 1_000_000) / 1_000_000.0 : null;
}
private static BigDecimal scaleDecimal(Double value, int scale) {
return value != null ? BigDecimal.valueOf(value).setScale(scale, RoundingMode.HALF_UP) : null;
}
private static LocalDateTime toLocalDateTime(OffsetDateTime odt) {
return odt != null ? odt.atZoneSameInstant(KST).toLocalDateTime() : null;
}
}
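The circle filter above is built by stepping 64 angles around the center and converting the nautical-mile radius to degree offsets, scaling longitude by cos(latitude). A standalone sketch of the same construction without the JTS dependency, checking each vertex against a haversine distance back to the center (class name and test coordinates are illustrative):

```java
public class CircleSketch {
    static final double EARTH_RADIUS_NM = 3440.065;
    static final int CIRCLE_POINTS = 64;

    // Same vertex math as createCirclePolygon, returning raw [lon, lat] pairs
    static double[][] circleVertices(double centerLon, double centerLat, double radiusNm) {
        double radiusRad = radiusNm / EARTH_RADIUS_NM;
        double cosLat = Math.cos(Math.toRadians(centerLat));
        double[][] coords = new double[CIRCLE_POINTS][2];
        for (int i = 0; i < CIRCLE_POINTS; i++) {
            double angle = 2.0 * Math.PI * i / CIRCLE_POINTS;
            coords[i][0] = centerLon + Math.toDegrees(radiusRad * Math.sin(angle) / cosLat);
            coords[i][1] = centerLat + Math.toDegrees(radiusRad * Math.cos(angle));
        }
        return coords;
    }

    // Haversine in NM, used to verify each vertex sits ~radiusNm from the center
    static double haversineNm(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return EARTH_RADIUS_NM * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
    }

    public static void main(String[] args) {
        double[][] ring = circleVertices(129.0, 35.0, 50.0); // 50 NM circle near 35N
        double maxErr = 0;
        for (double[] p : ring) {
            maxErr = Math.max(maxErr, Math.abs(haversineNm(35.0, 129.0, p[1], p[0]) - 50.0));
        }
        System.out.printf("max radial error: %.3f NM%n", maxErr);
        if (maxErr > 0.5) throw new AssertionError();
    }
}
```

At a 50 NM radius in mid-latitudes the radial error of this flat approximation stays around a tenth of a nautical mile, well within what a coarse spatial pre-filter needs.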

View file

@@ -3,7 +3,6 @@ package gc.mda.signal_batch.domain.vessel.service;
 import gc.mda.signal_batch.domain.ship.service.ShipImageService;
 import gc.mda.signal_batch.domain.ship.service.ShipImageService.ShipImageSummary;
 import gc.mda.signal_batch.domain.vessel.dto.RecentVesselPositionDto;
-import gc.mda.signal_batch.global.util.SignalKindCode;
 import lombok.RequiredArgsConstructor;
 import lombok.extern.slf4j.Slf4j;
 import org.springframework.beans.factory.annotation.Autowired;
@@ -124,6 +123,7 @@ public class VesselPositionService {
                 cog,
                 name as ship_nm,
                 vessel_type as ship_ty,
+                signal_kind_code,
                 last_update
             FROM signal.t_ais_position
             WHERE last_update >= NOW() - INTERVAL '%d minutes'
@@ -145,8 +145,9 @@ public class VesselPositionService {
             String mmsi = rs.getString("mmsi");
             String shipTy = rs.getString("ship_ty");

-            // Compute shipKindCode (vesselType-based, no extraInfo)
-            String shipKindCode = SignalKindCode.resolve(shipTy, null).getCode();
+            // shipKindCode: use the substituted value stored in the DB
+            String signalKindCode = rs.getString("signal_kind_code");
+            String shipKindCode = signalKindCode != null ? signalKindCode : "000027";

             // Compute nationalCode (first 3 MMSI digits = MID)
             String nationalCode = mmsi != null && mmsi.length() >= 3

View file

@@ -12,7 +12,7 @@ import org.springframework.web.reactive.function.client.WebClient;
  *
  * API: POST /AisSvc.svc/AIS/GetTargetsEnhanced
  * Auth: Basic Authentication
- * Buffer: 50MB (AIS GetTargets responses are ~20MB+)
+ * Buffer: 100MB (AIS GetTargets responses are ~20MB+; handles peaks above 50MB)
  */
 @Slf4j
 @Configuration
@@ -37,7 +37,7 @@ public class AisApiWebClientConfig {
                 .defaultHeaders(headers -> headers.setBasicAuth(aisApiUsername, aisApiPassword))
                 .codecs(configurer -> configurer
                         .defaultCodecs()
-                        .maxInMemorySize(50 * 1024 * 1024))
+                        .maxInMemorySize(100 * 1024 * 1024))
                 .build();
     }
 }

View file

@@ -0,0 +1,45 @@
package gc.mda.signal_batch.global.config;
import lombok.Getter;
import lombok.Setter;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;
/**
 * Memory budget settings for track data
 *
 * Partitioning for a 64GB JVM:
 * - Cache: 35GB (55%) for L1/L2/L3
 * - Query: 20GB (31%) for concurrent REST/WebSocket queries
 * - System: 9GB (14%) for GC, thread stacks, Spring context (untracked)
 */
@Getter
@Setter
@Component
@ConfigurationProperties(prefix = "track.memory-budget")
public class TrackMemoryBudgetProperties {
    /** Total JVM heap budget (GB) */
    private int totalBudgetGb = 64;

    /** Cache-only budget (GB): L1+L2+L3 combined */
    private int cacheBudgetGb = 35;

    /** Query-response budget (GB) */
    private int queryBudgetGb = 20;

    /** Maximum memory for a single query (GB) */
    private int maxSingleQueryGb = 5;

    /** Correction factor for memory estimates (based on measurements) */
    private double estimationCorrectionFactor = 1.8;

    /** Timeout while queued waiting for query memory (seconds) */
    private int queueTimeoutSeconds = 60;

    /** Budget warning threshold (0.0~1.0) */
    private double warningThreshold = 0.8;

    /** Budget critical threshold (0.0~1.0) */
    private double criticalThreshold = 0.95;
}
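The fractional thresholds are meant to be applied to a budget expressed in bytes. A small sketch of that arithmetic using the default values above (the class and method names are illustrative; only the numbers come from the properties):

```java
public class BudgetMath {
    static final long GB = 1024L * 1024 * 1024;

    // Threshold in bytes = budget (GB) converted to bytes, scaled by the fraction
    static long thresholdBytes(int budgetGb, double fraction) {
        return (long) (budgetGb * GB * fraction);
    }

    public static void main(String[] args) {
        // Query budget 20GB with warning 0.8 and critical 0.95 (assumed defaults)
        long warn = thresholdBytes(20, 0.8);
        long crit = thresholdBytes(20, 0.95);
        System.out.println(warn / GB + " GB warning / " + crit / GB + " GB critical");
    }
}
```

So with the defaults, query memory tracking would start warning around 16GB of in-flight query memory and flag critical around 19GB, leaving headroom below the full 20GB query budget.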

View file

@@ -22,8 +22,10 @@ import org.springframework.http.server.ServletServerHttpRequest;
 import com.fasterxml.jackson.databind.ObjectMapper;

+import jakarta.servlet.http.Cookie;
 import jakarta.servlet.http.HttpServletRequest;
 import java.security.Principal;
+import java.util.Base64;
 import java.util.List;
 import java.util.Map;
 import java.util.UUID;
@@ -180,11 +182,18 @@ public class WebSocketStompConfig implements WebSocketMessageBrokerConfigurer {
         String clientIp = extractClientIp(request);
         attributes.put("CLIENT_IP", clientIp);

-        // Extract User-Agent
         if (request instanceof ServletServerHttpRequest) {
             HttpServletRequest servletRequest = ((ServletServerHttpRequest) request).getServletRequest();
+            // Extract User-Agent
             String userAgent = servletRequest.getHeader("User-Agent");
             attributes.put("USER_AGENT", userAgent);
+
+            // Extract the JWT email from the GC_SESSION cookie (guide-service auth)
+            String clientId = extractEmailFromJwtCookie(servletRequest);
+            if (clientId != null) {
+                attributes.put("CLIENT_ID", clientId);
+            }
         }

         return true;
@@ -225,5 +234,45 @@ public class WebSocketStompConfig implements WebSocketMessageBrokerConfigurer {
         // Default when the request is not a ServletServerHttpRequest
         return "unknown";
     }

+    private String extractEmailFromJwtCookie(HttpServletRequest request) {
+        return extractClientIdFromRequest(request);
+    }
+
+    /**
+     * Extract the email claim from the JWT payload in the GC_SESSION cookie (shared by REST and WebSocket).
+     * JWT validation is already done by nginx auth_request; only payload decoding happens here.
+     */
+    public static String extractClientIdFromRequest(HttpServletRequest request) {
+        Cookie[] cookies = request.getCookies();
+        if (cookies == null) return null;
+
+        String token = null;
+        for (Cookie cookie : cookies) {
+            if ("GC_SESSION".equals(cookie.getName())) {
+                token = cookie.getValue();
+                break;
+            }
+        }
+        if (token == null || token.isEmpty()) return null;
+
+        try {
+            String[] parts = token.split("\\.");
+            if (parts.length < 2) return null;
+            String payload = new String(Base64.getUrlDecoder().decode(parts[1]));
+            int emailIdx = payload.indexOf("\"email\"");
+            if (emailIdx < 0) return null;
+            int colonIdx = payload.indexOf(':', emailIdx);
+            int quoteStart = payload.indexOf('"', colonIdx + 1);
+            int quoteEnd = payload.indexOf('"', quoteStart + 1);
+            if (quoteStart < 0 || quoteEnd < 0) return null;
+            return payload.substring(quoteStart + 1, quoteEnd);
+        } catch (Exception e) {
+            return null;
+        }
+    }
 }
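The extraction above relies on a JWT being three base64url segments joined by dots, and scans the decoded payload for the `"email"` claim by index rather than parsing JSON. A standalone sketch of the same technique, exercised against a synthetic unsigned token (class name and token contents are illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtEmailSketch {
    // Same index-scan extraction as extractClientIdFromRequest, minus the cookie lookup
    static String extractEmail(String token) {
        String[] parts = token.split("\\.");
        if (parts.length < 2) return null;
        String payload = new String(Base64.getUrlDecoder().decode(parts[1]), StandardCharsets.UTF_8);
        int emailIdx = payload.indexOf("\"email\"");
        if (emailIdx < 0) return null;
        int colonIdx = payload.indexOf(':', emailIdx);
        int quoteStart = payload.indexOf('"', colonIdx + 1);
        int quoteEnd = payload.indexOf('"', quoteStart + 1);
        if (quoteStart < 0 || quoteEnd < 0) return null;
        return payload.substring(quoteStart + 1, quoteEnd);
    }

    public static void main(String[] args) {
        // Synthetic header.payload.signature token; payload holds the email claim
        String payload = Base64.getUrlEncoder().withoutPadding()
                .encodeToString("{\"sub\":\"123\",\"email\":\"user@example.com\"}"
                        .getBytes(StandardCharsets.UTF_8));
        String token = "eyJhbGciOiJub25lIn0." + payload + ".sig";
        System.out.println(extractEmail(token)); // user@example.com
    }
}
```

Note this decodes without verifying the signature, which matches the stated design: the nginx `auth_request` layer is trusted to have validated the token before it reaches this code.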

View file

@@ -0,0 +1,16 @@
package gc.mda.signal_batch.global.exception;
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.ResponseStatus;
/**
 * Exception thrown when the memory budget is exceeded (503 Service Unavailable)
 *
 * Raised when a single query exceeds its cap, a queue wait times out, or the total query budget runs out.
 */
@ResponseStatus(HttpStatus.SERVICE_UNAVAILABLE)
public class MemoryBudgetExceededException extends RuntimeException {
public MemoryBudgetExceededException(String message) {
super(message);
}
}

View file

@@ -6,10 +6,11 @@ import lombok.RequiredArgsConstructor;
 /**
  * MDA ship-kind legend codes
  *
- * Substitutes the MDA legend code (signalKindCode) based on
- * vesselType + extraInfo from the S&P Global AIS API.
+ * Substitutes the MDA legend code (signalKindCode) based on
+ * vesselType + extraInfo + shipName from the S&P Global AIS API.
  *
- * Replaces ShipKindCodeConverter; ports the substitution logic from SNP-Batch-1.
+ * Substitution runs only once, at cache-write time (AisTargetCacheWriter);
+ * API responses use the signal_kind_code from the cache or DB directly.
  */
 @Getter
 @RequiredArgsConstructor
@@ -28,18 +29,32 @@ public enum SignalKindCode {
     private final String koreanName;

     /**
-     * vesselType + extraInfo to MDA legend code
-     *
-     * Substitution priority:
-     * 1. vesselType-only match (Cargo, Tanker, Passenger, AtoN, ...)
-     * 2. vesselType + extraInfo combination match (Vessel + Fishing, ...)
-     * 3. fallback to DEFAULT (000027)
+     * vesselType + extraInfo to MDA legend code (kept for backward compatibility)
+     * Cannot detect BUOY from shipName; prefer the 3-parameter version for cache writes.
      */
     public static SignalKindCode resolve(String vesselType, String extraInfo) {
+        return resolve(vesselType, extraInfo, null);
+    }
+
+    /**
+     * vesselType + extraInfo + shipName to MDA legend code
+     *
+     * Substitution priority:
+     * 1. shipName-based BUOY detection (two or more '.' or '_' characters means buoy/AtoN)
+     * 2. vesselType-only match (Cargo, Tanker, Passenger, ...)
+     * 3. vesselType + extraInfo combination match (Vessel + Fishing, ...)
+     * 4. fallback to DEFAULT (000027)
+     */
+    public static SignalKindCode resolve(String vesselType, String extraInfo, String shipName) {
+        // 1. shipName-based BUOY detection: two or more '.' or '_' characters
+        if (hasBuoyNamePattern(shipName)) {
+            return BUOY;
+        }
         String vt = normalizeOrEmpty(vesselType);
         String ei = normalizeOrEmpty(extraInfo);

-        // 1. vesselType-only match
+        // 2. vesselType-only match
         switch (vt) {
             case "cargo":
                 return CARGO;
@@ -48,7 +63,7 @@ public enum SignalKindCode {
             case "passenger":
                 return FERRY;
             case "aton":
-                return BUOY;
+                return DEFAULT;
             case "law enforcement":
                 return GOV;
             case "search and rescue":
@@ -60,19 +75,19 @@ public enum SignalKindCode {
         }

         // vesselType group match
-        if (matchesAny(vt, "tug", "pilot boat", "tender", "anti pollution", "medical transport")) {
+        if (matchesAny(vt, "pilot boat", "anti pollution", "medical transport")) {
             return GOV;
         }
         if (matchesAny(vt, "high speed craft", "wing in ground-effect")) {
             return FERRY;
         }

-        // 2. "Vessel" + extraInfo combination
+        // 3. "Vessel" + extraInfo combination
         if ("vessel".equals(vt)) {
             return resolveVesselExtraInfo(ei);
         }

-        // 3. "N/A" + extraInfo combination
+        // 4. "N/A" + extraInfo combination
         if ("n/a".equals(vt)) {
             if (ei.startsWith("hazardous cat")) {
                 return CARGO;
@@ -80,7 +95,7 @@ public enum SignalKindCode {
             return DEFAULT;
         }

-        // 4. fallback
+        // 5. fallback
         return DEFAULT;
     }
@@ -91,18 +106,32 @@ public enum SignalKindCode {
         if ("military operations".equals(extraInfo)) {
             return GOV;
         }
-        if (matchesAny(extraInfo, "towing", "towing (large)", "dredging/underwater ops", "diving operations")) {
-            return GOV;
-        }
-        if (matchesAny(extraInfo, "pleasure craft", "sailing", "n/a")) {
-            return FISHING;
-        }
         if (extraInfo.startsWith("hazardous cat")) {
             return CARGO;
         }
         return DEFAULT;
     }

+    /**
+     * Treat the ship as a buoy/AtoN when shipName contains two or more '.' or '_' characters
+     */
+    static boolean hasBuoyNamePattern(String shipName) {
+        if (shipName == null || shipName.isBlank()) {
+            return false;
+        }
+        int count = 0;
+        for (int i = 0; i < shipName.length(); i++) {
+            char c = shipName.charAt(i);
+            if (c == '.' || c == '_') {
+                count++;
+                if (count >= 2) {
+                    return true;
+                }
+            }
+        }
+        return false;
+    }
     private static boolean matchesAny(String value, String... candidates) {
         for (String candidate : candidates) {
             if (candidate.equals(value)) {
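The new buoy heuristic counts `.` and `_` characters in the reported ship name and short-circuits as soon as it sees two. A standalone sketch with sample names (the names themselves are illustrative test values, not from real data):

```java
public class BuoyNameSketch {
    // Names with two or more '.' or '_' characters are treated as buoys/AtoN
    static boolean hasBuoyNamePattern(String shipName) {
        if (shipName == null || shipName.isBlank()) return false;
        int count = 0;
        for (int i = 0; i < shipName.length(); i++) {
            char c = shipName.charAt(i);
            if (c == '.' || c == '_') {
                count++;
                if (count >= 2) return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(hasBuoyNamePattern("BUSAN_N.1"));    // true  (one '_' plus one '.')
        System.out.println(hasBuoyNamePattern("SAM SUNG 2HO")); // false (no separators)
        System.out.println(hasBuoyNamePattern("ST.PAUL"));      // false (a single '.' is not enough)
    }
}
```

Requiring two separators rather than one is what keeps ordinary vessel names containing a single period (abbreviations, initials) from being misclassified as buoys.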

View file

@@ -0,0 +1,45 @@
package gc.mda.signal_batch.global.util;
import gc.mda.signal_batch.domain.vessel.dto.CompactVesselTrack;
import lombok.experimental.UtilityClass;
import java.util.List;
/**
 * Estimates the heap footprint of CompactVesselTrack in bytes
 *
 * Per-point memory rationale:
 * - double[2]: 32B (16B header + 16B data) + 8B ArrayList entry = 40B
 * - String timestamp: ~48B (16B object + ~24B char[] + 8B ref)
 * - Double speed: 24B (16B object + 8B double)
 * - Total: ~112B per point
 */
@UtilityClass
public class TrackMemoryEstimator {
private static final long BYTES_PER_POINT = 112L;
private static final long OBJECT_OVERHEAD = 300L;
public static long estimateTrackBytes(CompactVesselTrack track) {
if (track == null) return 0;
int points = track.getPointCount();
return OBJECT_OVERHEAD + (long) points * BYTES_PER_POINT;
}
public static long estimateListBytes(List<CompactVesselTrack> tracks) {
if (tracks == null || tracks.isEmpty()) return 0;
long total = 0;
for (CompactVesselTrack track : tracks) {
total += estimateTrackBytes(track);
}
return total;
}
/**
 * A-priori estimate, assuming an average of 500 points:
 * days × vessels × 500 × 112B
 */
public static long estimateQueryBytes(int days, int estimatedVessels) {
return (long) days * estimatedVessels * 500 * BYTES_PER_POINT;
}
}
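The per-point arithmetic documented above can be sanity-checked with a standalone sketch. The class and method names below are illustrative only, mirroring the estimator's constants rather than importing the production class:

```java
// Standalone mirror of the per-point heap arithmetic documented above.
// TrackMemoryEstimateSketch is a hypothetical name, not the production API.
public class TrackMemoryEstimateSketch {
    static final long BYTES_PER_POINT = 112L;  // 40B coords + 48B timestamp + 24B speed
    static final long OBJECT_OVERHEAD = 300L;  // fixed per-track overhead

    static long estimateTrackBytes(int pointCount) {
        return OBJECT_OVERHEAD + (long) pointCount * BYTES_PER_POINT;
    }

    // A-priori query estimate: days x vessels x 500 points x 112B
    static long estimateQueryBytes(int days, int vessels) {
        return (long) days * vessels * 500 * BYTES_PER_POINT;
    }

    public static void main(String[] args) {
        // 7 days x 1,000 vessels of raw points, in MB (integer division)
        System.out.println(estimateQueryBytes(7, 1000) / (1024 * 1024) + " MB"); // prints 373 MB
    }
}
```

At these constants a 7-day preload for tens of thousands of vessels reaches the multi-GB range, which motivates the cache/query memory partitioning introduced elsewhere in this changeset.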


@@ -122,16 +122,18 @@ public class VesselTrackToCompactConverter {
int pointCount = geometry.size();
double avgSpeed = pointCount > 0 ? totalDistance / Math.max(1, pointCount) * 60 : 0;
// Set vessel info (use the signalKindCode already substituted in the cache)
String shipName = null;
String shipType = null;
String shipKindCode = null;
if (vesselInfo != null) {
shipName = vesselInfo.getName();
shipType = vesselInfo.getVesselType();
shipKindCode = vesselInfo.getSignalKindCode() != null
? vesselInfo.getSignalKindCode()
: SignalKindCode.DEFAULT.getCode();
} else {
shipKindCode = SignalKindCode.DEFAULT.getCode();
}
String nationalCode = mmsi.length() >= 3 ? mmsi.substring(0, 3) : mmsi;


@@ -71,6 +71,19 @@ public class StompTrackController {
}
};
// Extract CLIENT_IP and CLIENT_ID from the session attributes
String clientIp = null;
String clientId = null;
Map<String, Object> sessionAttrs = headerAccessor.getSessionAttributes();
if (sessionAttrs != null) {
if (sessionAttrs.containsKey("CLIENT_IP")) {
clientIp = (String) sessionAttrs.get("CLIENT_IP");
}
if (sessionAttrs.containsKey("CLIENT_ID")) {
clientId = (String) sessionAttrs.get("CLIENT_ID");
}
}
// Start async streaming - check chunked mode
if (request.isChunkedMode()) {
chunkedTrackStreamingService.streamChunkedTracks(
@@ -78,7 +91,9 @@ public class StompTrackController {
queryId,
sessionId,
chunk -> sendChunkedDataToUser(userId, chunk),
statusCallback,
clientIp,
clientId
);
} else {
trackStreamingService.streamTracks(
@@ -113,10 +128,9 @@ public class StompTrackController {
trackStreamingService.cancelQuery(queryId);
chunkedTrackStreamingService.cancelQuery(queryId);
activeSessions.remove(sessionId);
}
// Return cancel success even when no session exists (idempotent: query already completed/cancelled)
return QueryResponse.cancelled(queryId);
}
/**


@@ -316,4 +316,11 @@ public class ActiveQueryManager {
public int getMaxConcurrentGlobal() {
return maxConcurrentGlobal;
}
/**
 * Queue wait timeout (seconds)
 */
public int getQueueTimeoutSeconds() {
return queueTimeoutSeconds;
}
}


@@ -117,6 +117,9 @@ public class CacheTrackSimplifier {
track.setPointCount(afterZoom);
// Recalculate speeds after simplification (based on point distance/time)
recalculateSpeeds(track);
// Detailed log for the first 5 vessels (debug level)
if (simplifiedCount < 5) {
log.debug("[CacheSimplify] vessel={} original={} -> DP={} -> distTime={} -> zoom={} (avg={} kn)",
@@ -139,6 +142,43 @@ public class CacheTrackSimplifier {
return tracks;
}
// For L3 cache storage: DP-only pre-simplification
/**
 * Pre-simplification applying only DP (Douglas-Peucker), for L3 cache storage.
 * Preserves direction changes so fishing-operation patterns (circles, zigzags) are kept.
 * Distance/time filters are not applied, so only straight segments are removed.
 */
public void simplifyDpOnly(List<CompactVesselTrack> tracks, double dpTolerance) {
if (tracks == null || tracks.isEmpty()) return;
long startTime = System.currentTimeMillis();
int totalOriginal = 0;
int totalAfter = 0;
int simplifiedCount = 0;
for (CompactVesselTrack track : tracks) {
if (track.getGeometry() == null || track.getGeometry().size() <= 2) continue;
int before = track.getGeometry().size();
totalOriginal += before;
applyDouglasPeucker(track, dpTolerance);
recalculateSpeeds(track);
track.setPointCount(track.getGeometry().size());
totalAfter += track.getGeometry().size();
simplifiedCount++;
}
long elapsed = System.currentTimeMillis() - startTime;
if (simplifiedCount > 0) {
double reduction = (1 - (double) totalAfter / totalOriginal) * 100;
log.info("[DpPreSimplify] {} tracks, {} -> {} pts ({}% reduction), {}ms",
simplifiedCount, totalOriginal, totalAfter, Math.round(reduction), elapsed);
}
}
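The body of `applyDouglasPeucker` is not shown in this diff; for reference, a minimal recursive Douglas-Peucker over `[lon, lat]` points (planar perpendicular distance, tolerance in degrees) looks roughly like this (a sketch, not the production method):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal recursive Douglas-Peucker sketch over [lon, lat] points.
// Keeps the point farthest from the chord when it exceeds the tolerance,
// which is what preserves direction changes (e.g. fishing patterns).
public class DouglasPeuckerSketch {
    public static List<double[]> simplify(List<double[]> pts, double tol) {
        if (pts.size() <= 2) return new ArrayList<>(pts);
        double[] a = pts.get(0), b = pts.get(pts.size() - 1);
        int idx = -1;
        double maxDist = 0;
        for (int i = 1; i < pts.size() - 1; i++) {
            double d = perpendicularDistance(pts.get(i), a, b);
            if (d > maxDist) { maxDist = d; idx = i; }
        }
        if (maxDist <= tol) {
            // All intermediate points are within tolerance of the chord: drop them
            List<double[]> out = new ArrayList<>();
            out.add(a); out.add(b);
            return out;
        }
        // Split at the farthest point and recurse on both halves
        List<double[]> left = simplify(pts.subList(0, idx + 1), tol);
        List<double[]> right = simplify(pts.subList(idx, pts.size()), tol);
        List<double[]> out = new ArrayList<>(left);
        out.addAll(right.subList(1, right.size())); // avoid duplicating the split point
        return out;
    }

    static double perpendicularDistance(double[] p, double[] a, double[] b) {
        double dx = b[0] - a[0], dy = b[1] - a[1];
        double len = Math.hypot(dx, dy);
        if (len == 0) return Math.hypot(p[0] - a[0], p[1] - a[1]);
        return Math.abs(dy * p[0] - dx * p[1] + b[0] * a[1] - b[1] * a[0]) / len;
    }
}
```

A tolerance of 0.001 degrees corresponds to roughly 100 m of latitude, matching the L3_DP_TOLERANCE comment elsewhere in this changeset.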
// Step 1: Douglas-Peucker (replaces ST_Simplify)
private void applyDouglasPeucker(CompactVesselTrack track, double tolerance) {
@@ -412,6 +452,55 @@ public class CacheTrackSimplifier {
if (sampledSpd != null) track.setSpeeds(sampledSpd);
}
// Recalculate speeds after simplification
/**
 * Recalculate speeds for the simplified points.
 * For the points remaining after simplification, compute from adjacent-coordinate Haversine distance / time delta.
 */
private void recalculateSpeeds(CompactVesselTrack track) {
List<double[]> geometry = track.getGeometry();
List<String> timestamps = track.getTimestamps();
if (geometry == null || geometry.size() < 2 ||
timestamps == null || timestamps.size() != geometry.size()) {
return;
}
int size = geometry.size();
List<Double> speeds = new ArrayList<>(size);
speeds.add(0.0); // first point has no previous point, so 0
for (int i = 1; i < size; i++) {
double[] prev = geometry.get(i - 1);
double[] curr = geometry.get(i);
try {
long prevTs = parseEpochSeconds(timestamps.get(i - 1));
long currTs = parseEpochSeconds(timestamps.get(i));
double timeDiffHours = (currTs - prevTs) / 3600.0;
if (timeDiffHours > 0) {
double distNm = calculateDistance(prev[1], prev[0], curr[1], curr[0]);
speeds.add(distNm / timeDiffHours); // knots
} else {
speeds.add(0.0);
}
} catch (Exception e) {
speeds.add(0.0);
}
}
track.setSpeeds(speeds);
}
private long parseEpochSeconds(String tsStr) {
if (tsStr == null) throw new IllegalArgumentException("null timestamp");
if (tsStr.matches("\\d{10,}")) {
return Long.parseLong(tsStr);
}
return LocalDateTime.parse(tsStr, TIMESTAMP_FORMATTER)
.atZone(java.time.ZoneId.systemDefault())
.toEpochSecond();
}
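The knots computation above divides Haversine nautical miles by elapsed hours. The body of `calculateDistance` is clipped from this view, but a standard Haversine in nautical miles (an assumed equivalent, using a mean Earth radius of 3440.065 NM) is:

```java
// Haversine great-circle distance in nautical miles: a sketch of what
// calculateDistance likely computes (its body is clipped from this diff view).
public class HaversineSketch {
    static final double EARTH_RADIUS_NM = 3440.065; // mean Earth radius in NM

    public static double distanceNm(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_NM * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
    }

    // Speed in knots = nautical miles / hours, as in recalculateSpeeds above
    public static double knots(double distNm, double hours) {
        return hours > 0 ? distNm / hours : 0.0;
    }

    public static void main(String[] args) {
        System.out.println(distanceNm(0, 0, 1, 0)); // about 60 NM per degree of latitude
    }
}
```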
// Distance calculation (Haversine, in nautical miles)
private double calculateDistance(double lat1, double lon1, double lat2, double lon2) {

Changes for this file are not shown because the file is too large.


@@ -40,8 +40,13 @@ public class DailyTrackCacheManager {
NOT_STARTED, LOADING, PARTIAL, READY, DISABLED
}
/** DP tolerance for L3 pre-simplification (~100m): removes only straight segments while preserving track shape */
private static final double L3_DP_TOLERANCE = 0.001;
private final DataSource queryDataSource;
private final DailyTrackCacheProperties cacheProperties;
private final TrackMemoryBudgetManager memoryBudgetManager;
private final CacheTrackSimplifier cacheTrackSimplifier;
// Per-date cache (D-1 ~ D-N)
private final ConcurrentHashMap<LocalDate, DailyTrackData> cache = new ConcurrentHashMap<>();
@@ -54,9 +59,13 @@ public class DailyTrackCacheManager {
public DailyTrackCacheManager(
@Qualifier("queryDataSource") DataSource queryDataSource,
DailyTrackCacheProperties cacheProperties,
TrackMemoryBudgetManager memoryBudgetManager,
CacheTrackSimplifier cacheTrackSimplifier) {
this.queryDataSource = queryDataSource;
this.cacheProperties = cacheProperties;
this.memoryBudgetManager = memoryBudgetManager;
this.cacheTrackSimplifier = cacheTrackSimplifier;
}
/**
@@ -165,13 +174,19 @@ public class DailyTrackCacheManager {
DailyTrackData data = loadDay(targetDate);
if (data != null && data.getVesselCount() > 0) {
// Memory limit check (DailyTrackCacheProperties' own limit)
if (totalMemory + data.getMemorySizeBytes() > maxMemoryBytes) {
log.warn("Cache memory limit reached: {}GB / {}GB. Stopping at D-{}",
totalMemory / (1024 * 1024 * 1024), cacheProperties.getMaxMemoryGb(), daysBack);
break;
}
// Register with the memory budget manager
if (!memoryBudgetManager.registerCacheMemory(targetDate, data.getMemorySizeBytes())) {
log.warn("[MemoryBudget] Cache budget exceeded; stopping load at D-{} ({})", daysBack, targetDate);
break;
}
cache.put(targetDate, data);
totalMemory += data.getMemorySizeBytes();
loadedCount++;
@@ -301,8 +316,9 @@ public class DailyTrackCacheManager {
double avgSpeed = acc.pointCount > 0 ? acc.totalDistance / Math.max(1, acc.pointCount) * 60 : 0;
// shipKindCode: use the value substituted at cache-store time (with DB fallback)
String shipKindCode = acc.signalKindCode != null
? acc.signalKindCode : SignalKindCode.DEFAULT.getCode();
// Compute nationalCode (first 3 digits of MMSI = MID)
String nationalCode = acc.mmsi.length() >= 3 ? acc.mmsi.substring(0, 3) : acc.mmsi;
@@ -327,6 +343,23 @@ public class DailyTrackCacheManager {
estimatedMemory += tracks.size() * 200L; // object overhead
// DP pre-simplification: removes only straight segments, preserving direction changes (fishing operation patterns)
long memoryBeforeDp = estimatedMemory;
List<CompactVesselTrack> trackList = new ArrayList<>(tracks.values());
cacheTrackSimplifier.simplifyDpOnly(trackList, L3_DP_TOLERANCE);
// Re-estimate memory after simplification
estimatedMemory = trackList.stream()
.mapToLong(t -> t.getPointCount() * 40L)
.sum();
estimatedMemory += tracks.size() * 200L; // object overhead
if (memoryBeforeDp > 0) {
long reduction = Math.round((1 - (double) estimatedMemory / memoryBeforeDp) * 100);
log.info("[DailyLoadDay] {} DP pre-simplification: {}MB -> {}MB ({}% reduction, tolerance={})",
date, memoryBeforeDp / (1024 * 1024), estimatedMemory / (1024 * 1024), reduction, L3_DP_TOLERANCE);
}
// Build the STRtree spatial index
STRtree spatialIndex = buildSpatialIndex(tracks);
estimatedMemory += tracks.size() * 100L; // index overhead
@@ -421,6 +454,76 @@ public class DailyTrackCacheManager {
.collect(Collectors.toList());
}
/**
 * Direct O(1) lookup via dayTracks.get(mmsi) for the requested MMSI keys.
 * Large performance gain over the full scan in getCachedTracksMultipleDays().
 * e.g. 7 days × 100 MMSI = 700 get() calls vs 7 days × 50K vessels = 350K entries scanned
 */
public List<CompactVesselTrack> getCachedTracksForVessels(
List<LocalDate> dates, Set<String> mmsiKeys) {
if (mmsiKeys == null || mmsiKeys.isEmpty()) {
return Collections.emptyList();
}
Map<String, CompactVesselTrack.CompactVesselTrackBuilder> merged = new HashMap<>();
int lookupCount = 0;
int hitCount = 0;
for (LocalDate date : dates) {
DailyTrackData data = cache.get(date);
if (data == null) continue;
Map<String, CompactVesselTrack> dayTracks = data.getTracks();
for (String mmsi : mmsiKeys) {
CompactVesselTrack track = dayTracks.get(mmsi);
lookupCount++;
if (track == null) continue;
hitCount++;
CompactVesselTrack.CompactVesselTrackBuilder builder = merged.get(mmsi);
if (builder == null) {
builder = CompactVesselTrack.builder()
.vesselId(mmsi)
.nationalCode(track.getNationalCode())
.shipName(track.getShipName())
.shipType(track.getShipType())
.shipKindCode(track.getShipKindCode())
.geometry(new ArrayList<>(track.getGeometry()))
.timestamps(new ArrayList<>(track.getTimestamps()))
.speeds(new ArrayList<>(track.getSpeeds()))
.totalDistance(track.getTotalDistance())
.avgSpeed(track.getAvgSpeed())
.maxSpeed(track.getMaxSpeed())
.pointCount(track.getPointCount());
merged.put(mmsi, builder);
} else {
CompactVesselTrack existing = builder.build();
List<double[]> geo = new ArrayList<>(existing.getGeometry());
geo.addAll(track.getGeometry());
List<String> ts = new ArrayList<>(existing.getTimestamps());
ts.addAll(track.getTimestamps());
List<Double> sp = new ArrayList<>(existing.getSpeeds());
sp.addAll(track.getSpeeds());
builder.geometry(geo)
.timestamps(ts)
.speeds(sp)
.totalDistance(existing.getTotalDistance() + track.getTotalDistance())
.maxSpeed(Math.max(existing.getMaxSpeed(), track.getMaxSpeed()))
.pointCount(existing.getPointCount() + track.getPointCount());
}
}
}
log.info("[CACHE-MONITOR] L3.getCachedTracksForVessels: dates={}, requestedMmsi={}, lookups={}, hits={}, resultVessels={}",
dates.size(), mmsiKeys.size(), lookupCount, hitCount, merged.size());
return merged.values().stream()
.map(CompactVesselTrack.CompactVesselTrackBuilder::build)
.collect(Collectors.toList());
}
/**
 * Split the requested range into cache segments / DB segments
 */
@@ -533,6 +636,7 @@ public class DailyTrackCacheManager {
try {
DailyTrackData data = loadDay(yesterday);
if (data != null && data.getVesselCount() > 0) {
memoryBudgetManager.registerCacheMemory(yesterday, data.getMemorySizeBytes());
cache.put(yesterday, data);
log.info("Cache refreshed for {}: {} vessels, {} MB",
yesterday, data.getVesselCount(), data.getMemorySizeBytes() / (1024 * 1024));
@@ -550,6 +654,7 @@ public class DailyTrackCacheManager {
for (LocalDate d : toRemove) {
DailyTrackData removed = cache.remove(d);
if (removed != null) {
memoryBudgetManager.releaseCacheMemory(d);
log.info("Evicted cache for {}: {} vessels, {} MB",
d, removed.getVesselCount(), removed.getMemorySizeBytes() / (1024 * 1024));
}
@@ -642,7 +747,7 @@ public class DailyTrackCacheManager {
try (Connection conn = queryDataSource.getConnection()) {
String placeholders = batch.stream().map(id -> "?").collect(Collectors.joining(","));
String sql = "SELECT mmsi, name as ship_nm, vessel_type as ship_ty, signal_kind_code " +
"FROM signal.t_ais_position " +
"WHERE mmsi IN (" + placeholders + ")";
@@ -658,6 +763,7 @@ public class DailyTrackCacheManager {
if (acc != null) {
acc.shipName = rs.getString("ship_nm");
acc.shipType = rs.getString("ship_ty");
acc.signalKindCode = rs.getString("signal_kind_code");
enriched++;
}
}
@@ -701,6 +807,7 @@ public class DailyTrackCacheManager {
String mmsi;
String shipName;
String shipType;
String signalKindCode;
List<double[]> geometry = new ArrayList<>(500);
List<String> timestamps = new ArrayList<>(500);
List<Double> speeds = new ArrayList<>(500);


@@ -0,0 +1,300 @@
package gc.mda.signal_batch.global.websocket.service;
import gc.mda.signal_batch.global.config.TrackMemoryBudgetProperties;
import gc.mda.signal_batch.global.exception.MemoryBudgetExceededException;
import jakarta.annotation.PostConstruct;
import lombok.Getter;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;
import java.time.LocalDate;
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;
/**
 * Memory budget manager for vessel track data
 *
 * Logically partitions memory between the cache region and the query region,
 * preventing large queries from squeezing batch jobs and caches.
 *
 * Query budget: FIFO waiting based on ReentrantLock(fair=true) + Condition.
 * Cache budget: immediate register/release based on AtomicLong.
 */
@Slf4j
@Service
public class TrackMemoryBudgetManager {
@Getter
private final TrackMemoryBudgetProperties properties;
// Cache memory tracking
private final AtomicLong cacheUsedBytes = new AtomicLong(0);
private final ConcurrentHashMap<String, Long> cacheAllocations = new ConcurrentHashMap<>();
// Query memory tracking
private final AtomicLong queryUsedBytes = new AtomicLong(0);
private final ConcurrentHashMap<String, Long> queryAllocations = new ConcurrentHashMap<>();
private final AtomicInteger waitingQueryCount = new AtomicInteger(0);
// FIFO wait mechanism
private final ReentrantLock queryLock = new ReentrantLock(true); // fair=true
private final Condition queryBudgetAvailable = queryLock.newCondition();
// Throttle repeated pressure logs
private volatile long lastPressureLogTime = 0;
public TrackMemoryBudgetManager(TrackMemoryBudgetProperties properties) {
this.properties = properties;
}
@PostConstruct
public void init() {
log.info("TrackMemoryBudgetManager initialized. total: {}GB, cache: {}GB, query: {}GB, maxSingleQuery: {}GB, correctionFactor: {}",
properties.getTotalBudgetGb(), properties.getCacheBudgetGb(),
properties.getQueryBudgetGb(), properties.getMaxSingleQueryGb(),
properties.getEstimationCorrectionFactor());
}
// Cache memory management
/**
 * Register cache memory (date-keyed L3 DailyTrackCache)
 * @return true: registered within budget, false: budget exceeded
 */
public boolean registerCacheMemory(LocalDate date, long bytes) {
return registerCacheMemory("daily::" + date, bytes);
}
/**
 * Register cache memory (key-based L1/L2 Caffeine buckets)
 */
public boolean registerCacheMemory(String key, long bytes) {
long budgetBytes = (long) properties.getCacheBudgetGb() * 1024 * 1024 * 1024;
long currentUsed = cacheUsedBytes.get();
if (currentUsed + bytes > budgetBytes) {
log.warn("[MemoryBudget] Cache budget exceeded: key={}, requested={}MB, used={}MB, budget={}MB",
key, bytes / (1024 * 1024), currentUsed / (1024 * 1024), budgetBytes / (1024 * 1024));
return false;
}
Long previous = cacheAllocations.put(key, bytes);
if (previous != null) {
cacheUsedBytes.addAndGet(bytes - previous);
} else {
cacheUsedBytes.addAndGet(bytes);
}
return true;
}
/**
 * Release cache memory (date-keyed)
 */
public void releaseCacheMemory(LocalDate date) {
releaseCacheMemory("daily::" + date);
}
/**
 * Release cache memory (key-based)
 */
public void releaseCacheMemory(String key) {
Long released = cacheAllocations.remove(key);
if (released != null) {
cacheUsedBytes.addAndGet(-released);
}
}
public long getAvailableCacheBudget() {
long budgetBytes = (long) properties.getCacheBudgetGb() * 1024 * 1024 * 1024;
return Math.max(0, budgetBytes - cacheUsedBytes.get());
}
// Query memory management (with FIFO waiting)
/**
 * Reserve query memory; waits in FIFO order when the budget is short
 *
 * @param queryId query identifier
 * @param estimatedBytes estimated memory (raw value, before the correction factor)
 * @param maxWaitMs maximum wait time (milliseconds)
 * @throws MemoryBudgetExceededException single-query cap exceeded, or wait timed out
 */
public void reserveQueryMemory(String queryId, long estimatedBytes, long maxWaitMs) {
long correctedBytes = applyCorrection(estimatedBytes);
long maxSingleBytes = (long) properties.getMaxSingleQueryGb() * 1024 * 1024 * 1024;
// Single-query cap check
if (correctedBytes > maxSingleBytes) {
throw new MemoryBudgetExceededException(
String.format("Single-query memory cap exceeded: estimated=%dMB, max=%dMB",
correctedBytes / (1024 * 1024), maxSingleBytes / (1024 * 1024)));
}
queryLock.lock();
try {
// Try to reserve immediately
if (canReserveQuery(correctedBytes)) {
doReserve(queryId, correctedBytes);
return;
}
// Enter the wait queue
waitingQueryCount.incrementAndGet();
long deadline = System.nanoTime() + maxWaitMs * 1_000_000L;
try {
while (!canReserveQuery(correctedBytes)) {
long remainingNanos = deadline - System.nanoTime();
if (remainingNanos <= 0) {
throw new MemoryBudgetExceededException(
String.format("Query memory wait timed out: %dms, queryUsed=%dMB, budget=%dMB",
maxWaitMs, queryUsedBytes.get() / (1024 * 1024),
(long) properties.getQueryBudgetGb() * 1024));
}
queryBudgetAvailable.awaitNanos(remainingNanos);
}
doReserve(queryId, correctedBytes);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
throw new MemoryBudgetExceededException("Interrupted while waiting for query memory");
} finally {
waitingQueryCount.decrementAndGet();
}
} finally {
queryLock.unlock();
}
}
/**
 * Release query memory and signal waiting queries
 */
public void releaseQueryMemory(String queryId) {
Long released = queryAllocations.remove(queryId);
if (released != null) {
queryUsedBytes.addAndGet(-released);
queryLock.lock();
try {
queryBudgetAvailable.signalAll();
} finally {
queryLock.unlock();
}
log.debug("[MemoryBudget] Query memory released: queryId={}, released={}MB, remaining={}MB",
queryId, released / (1024 * 1024), queryUsedBytes.get() / (1024 * 1024));
}
}
/**
 * Mid-flight update of query memory (when actual usage differs from the estimate)
 */
public void updateQueryMemory(String queryId, long actualBytes) {
long corrected = applyCorrection(actualBytes);
Long previous = queryAllocations.put(queryId, corrected);
if (previous != null) {
queryUsedBytes.addAndGet(corrected - previous);
}
}
// Monitoring
/**
 * Budget status (for the monitoring API)
 */
public Map<String, Object> getBudgetStatus() {
Map<String, Object> status = new LinkedHashMap<>();
long cacheUsed = cacheUsedBytes.get();
long queryUsed = queryUsedBytes.get();
long totalUsed = cacheUsed + queryUsed;
long cacheBudget = (long) properties.getCacheBudgetGb() * 1024 * 1024 * 1024;
long queryBudget = (long) properties.getQueryBudgetGb() * 1024 * 1024 * 1024;
// Total
Map<String, Object> total = new LinkedHashMap<>();
total.put("totalGb", properties.getTotalBudgetGb());
total.put("usedMb", totalUsed / (1024 * 1024));
total.put("usagePercent", String.format("%.1f", totalUsed * 100.0 / ((long) properties.getTotalBudgetGb() * 1024 * 1024 * 1024)));
total.put("status", getUsageStatus(totalUsed, (long) properties.getTotalBudgetGb() * 1024 * 1024 * 1024));
status.put("totalBudget", total);
// Cache
Map<String, Object> cacheInfo = new LinkedHashMap<>();
cacheInfo.put("budgetGb", properties.getCacheBudgetGb());
cacheInfo.put("usedMb", cacheUsed / (1024 * 1024));
cacheInfo.put("usagePercent", cacheBudget > 0 ? String.format("%.1f", cacheUsed * 100.0 / cacheBudget) : "0.0");
cacheInfo.put("allocations", cacheAllocations.size());
status.put("cacheBudget", cacheInfo);
// Query
Map<String, Object> queryInfo = new LinkedHashMap<>();
queryInfo.put("budgetGb", properties.getQueryBudgetGb());
queryInfo.put("usedMb", queryUsed / (1024 * 1024));
queryInfo.put("usagePercent", queryBudget > 0 ? String.format("%.1f", queryUsed * 100.0 / queryBudget) : "0.0");
queryInfo.put("activeReservations", queryAllocations.size());
queryInfo.put("waitingCount", waitingQueryCount.get());
status.put("queryBudget", queryInfo);
// JVM
Runtime runtime = Runtime.getRuntime();
long usedHeap = runtime.totalMemory() - runtime.freeMemory();
long maxHeap = runtime.maxMemory();
Map<String, Object> heap = new LinkedHashMap<>();
heap.put("usedMb", usedHeap / (1024 * 1024));
heap.put("maxMb", maxHeap / (1024 * 1024));
heap.put("usagePercent", String.format("%.1f", usedHeap * 100.0 / maxHeap));
status.put("heapInfo", heap);
return status;
}
public boolean isBudgetPressureHigh() {
long totalUsed = cacheUsedBytes.get() + queryUsedBytes.get();
long totalBudget = (long) properties.getTotalBudgetGb() * 1024 * 1024 * 1024;
double ratio = (double) totalUsed / totalBudget;
if (ratio >= properties.getWarningThreshold()) {
logBudgetPressure(ratio);
return true;
}
return false;
}
// Internal methods
private boolean canReserveQuery(long bytes) {
long budgetBytes = (long) properties.getQueryBudgetGb() * 1024 * 1024 * 1024;
return queryUsedBytes.get() + bytes <= budgetBytes;
}
private void doReserve(String queryId, long correctedBytes) {
queryAllocations.put(queryId, correctedBytes);
queryUsedBytes.addAndGet(correctedBytes);
log.debug("[MemoryBudget] Query memory reserved: queryId={}, reserved={}MB, queryTotal={}MB",
queryId, correctedBytes / (1024 * 1024), queryUsedBytes.get() / (1024 * 1024));
}
private long applyCorrection(long rawEstimate) {
return (long) (rawEstimate * properties.getEstimationCorrectionFactor());
}
private String getUsageStatus(long used, long total) {
if (total == 0) return "UNKNOWN";
double ratio = (double) used / total;
if (ratio >= properties.getCriticalThreshold()) return "CRITICAL";
if (ratio >= properties.getWarningThreshold()) return "WARNING";
return "NORMAL";
}
private void logBudgetPressure(double ratio) {
long now = System.currentTimeMillis();
if (now - lastPressureLogTime > 5000) {
lastPressureLogTime = now;
log.warn("[MemoryBudget] Budget pressure: usage={}, cache={}MB, query={}MB, waiting={}",
String.format("%.1f%%", ratio * 100),
cacheUsedBytes.get() / (1024 * 1024),
queryUsedBytes.get() / (1024 * 1024),
waitingQueryCount.get());
}
}
}
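The reserve/release contract above (fair lock, Condition, deadline-bounded wait) can be reduced to a small self-contained sketch. `BudgetSketch` is hypothetical; unlike the production class it returns false on timeout or interrupt instead of throwing `MemoryBudgetExceededException`:

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Minimal sketch of the fair-lock + Condition budget pattern:
// reserve() blocks (approximately FIFO via the fair lock) until release() frees bytes.
public class BudgetSketch {
    private final long capacity;
    private long used = 0;
    private final ReentrantLock lock = new ReentrantLock(true); // fair=true
    private final Condition available = lock.newCondition();

    public BudgetSketch(long capacity) { this.capacity = capacity; }

    public boolean reserve(long bytes, long maxWaitMs) {
        lock.lock();
        try {
            long deadline = System.nanoTime() + maxWaitMs * 1_000_000L;
            while (used + bytes > capacity) {
                long remaining = deadline - System.nanoTime();
                if (remaining <= 0) return false; // timed out waiting for budget
                try {
                    available.awaitNanos(remaining);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
            used += bytes;
            return true;
        } finally {
            lock.unlock();
        }
    }

    public void release(long bytes) {
        lock.lock();
        try {
            used -= bytes;
            available.signalAll(); // wake waiters to re-check the budget
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        BudgetSketch b = new BudgetSketch(100);
        System.out.println(b.reserve(60, 10)); // true
        System.out.println(b.reserve(60, 10)); // false: 120 > 100, wait times out
        b.release(60);
        System.out.println(b.reserve(60, 10)); // true again
    }
}
```

Note that a fair ReentrantLock gives FIFO lock acquisition; waiters released by `signalAll` re-enter that queue, which is what makes the wait order approximately first-come-first-served.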


@@ -5,6 +5,7 @@ import gc.mda.signal_batch.batch.reader.FiveMinTrackCache;
import gc.mda.signal_batch.batch.reader.HourlyTrackCache;
import gc.mda.signal_batch.domain.vessel.service.VesselLatestPositionCache;
import gc.mda.signal_batch.global.websocket.service.DailyTrackCacheManager;
import gc.mda.signal_batch.global.websocket.service.TrackMemoryBudgetManager;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.tags.Tag;
import lombok.extern.slf4j.Slf4j;
@@ -45,6 +46,9 @@ public class CacheMonitoringController {
@Autowired(required = false)
private VesselLatestPositionCache latestPositionCache;
@Autowired
private TrackMemoryBudgetManager memoryBudgetManager;
/**
 * Cache statistics lookup (aggregate over all caches, for dashboard display)
 */
@@ -189,4 +193,13 @@ public class CacheMonitoringController {
health.put("latestPosition", latestPositionCache != null ? "UP" : "DISABLED");
return ResponseEntity.ok(health);
}
/**
 * Memory budget status (cache + query partitioning + JVM heap)
 */
@GetMapping("/budget")
@Operation(summary = "Memory budget status", description = "Returns cache/query memory budget usage, wait queue, and JVM heap info")
public ResponseEntity<Map<String, Object>> getMemoryBudgetStatus() {
return ResponseEntity.ok(memoryBudgetManager.getBudgetStatus());
}
}


@@ -0,0 +1,210 @@
package gc.mda.signal_batch.monitoring.controller;
import gc.mda.signal_batch.monitoring.service.QueryMetricsService;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.Parameter;
import io.swagger.v3.oas.annotations.tags.Tag;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.http.ResponseEntity;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.web.bind.annotation.*;
import java.util.*;
/**
 * Query metrics lookup API
 *
 * Provides WebSocket/REST query execution history and performance statistics.
 * Data source for the ApiMetrics frontend page.
 */
@RestController
@RequestMapping("/api/monitoring/query-metrics")
@Tag(name = "Query Metrics", description = "Query execution metrics API")
public class QueryMetricsController {
private final QueryMetricsService queryMetricsService;
private final JdbcTemplate queryJdbcTemplate;
private static final Set<String> ALLOWED_SORT_COLUMNS = Set.of(
"created_at", "elapsed_ms", "unique_vessels", "total_points"
);
public QueryMetricsController(
QueryMetricsService queryMetricsService,
@Qualifier("queryJdbcTemplate") JdbcTemplate queryJdbcTemplate) {
this.queryMetricsService = queryMetricsService;
this.queryJdbcTemplate = queryJdbcTemplate;
}
@GetMapping
@Operation(summary = "Recent query metrics", description = "Returns the most recent N query execution metrics")
public ResponseEntity<List<Map<String, Object>>> getRecentMetrics(
@RequestParam(defaultValue = "50") int limit) {
return ResponseEntity.ok(queryMetricsService.getRecentMetrics(Math.min(limit, 200)));
}
@GetMapping("/stats")
@Operation(summary = "Query metric statistics", description = "Per-period query performance statistics (average latency, cache ratio, slow queries, etc.)")
public ResponseEntity<Map<String, Object>> getStats(
@RequestParam(defaultValue = "7") int days) {
return ResponseEntity.ok(queryMetricsService.getStats(Math.min(days, 90)));
}
@GetMapping("/history")
@Operation(summary = "Query history (paginated)", description = "Filters + server-side pagination")
public Map<String, Object> getQueryHistory(
@Parameter(description = "Query type (WEBSOCKET, REST_V2)") @RequestParam(required = false) String queryType,
@Parameter(description = "Data path (CACHE, DB, HYBRID)") @RequestParam(required = false) String dataPath,
@Parameter(description = "Status (COMPLETED, CANCELLED, ERROR, TIMEOUT)") @RequestParam(required = false) String status,
@Parameter(description = "Minimum elapsed time (ms)") @RequestParam(required = false) Integer elapsedMsMin,
@Parameter(description = "Maximum elapsed time (ms)") @RequestParam(required = false) Integer elapsedMsMax,
@Parameter(description = "Page number (0-based)") @RequestParam(defaultValue = "0") int page,
@Parameter(description = "Page size") @RequestParam(defaultValue = "20") int size,
@Parameter(description = "Sort column") @RequestParam(defaultValue = "created_at") String sortBy,
@Parameter(description = "Sort direction (asc, desc)") @RequestParam(defaultValue = "desc") String sortDir) {
if (!ALLOWED_SORT_COLUMNS.contains(sortBy)) {
sortBy = "created_at";
}
String direction = "asc".equalsIgnoreCase(sortDir) ? "ASC" : "DESC";
size = Math.min(size, 100);
StringBuilder where = new StringBuilder("WHERE 1=1");
List<Object> params = new ArrayList<>();
if (queryType != null && !queryType.isEmpty()) {
where.append(" AND query_type = ?");
params.add(queryType);
}
if (dataPath != null && !dataPath.isEmpty()) {
where.append(" AND data_path = ?");
params.add(dataPath);
}
if (status != null && !status.isEmpty()) {
where.append(" AND status = ?");
params.add(status);
}
if (elapsedMsMin != null) {
where.append(" AND elapsed_ms >= ?");
params.add(elapsedMsMin);
}
if (elapsedMsMax != null) {
where.append(" AND elapsed_ms <= ?");
params.add(elapsedMsMax);
}
String whereClause = where.toString();
// COUNT query
String countSql = "SELECT COUNT(*) FROM signal.t_query_metrics " + whereClause;
Integer totalElements = queryJdbcTemplate.queryForObject(countSql, Integer.class, params.toArray());
if (totalElements == null) totalElements = 0;
// Data query
String dataSql = """
SELECT id, query_id, query_type, created_at, data_path, status,
zoom_level, requested_mmsi, unique_vessels, total_tracks,
total_points, points_after_simplify, total_chunks,
response_bytes, elapsed_ms, db_query_ms, simplify_ms,
cache_hit_days, db_query_days, client_ip, client_id
FROM signal.t_query_metrics
""" + whereClause +
" ORDER BY " + sortBy + " " + direction +
" LIMIT ? OFFSET ?";
List<Object> dataParams = new ArrayList<>(params);
dataParams.add(size);
dataParams.add(page * size);
List<Map<String, Object>> content = queryJdbcTemplate.queryForList(dataSql, dataParams.toArray());
Map<String, Object> result = new LinkedHashMap<>();
result.put("content", content);
result.put("totalElements", totalElements);
result.put("totalPages", (int) Math.ceil((double) totalElements / size));
result.put("currentPage", page);
result.put("pageSize", size);
return result;
}
@GetMapping("/summary")
@Operation(summary = "Query metrics summary", description = "Summary statistics for the last N hours (including P95)")
public Map<String, Object> getSummary(
@Parameter(description = "Lookback window (hours)") @RequestParam(defaultValue = "24") int hours) {
String sql = """
SELECT
COUNT(*) as total_queries,
COALESCE(AVG(elapsed_ms), 0) as avg_elapsed_ms,
COALESCE(PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY elapsed_ms), 0) as p95_elapsed_ms,
COALESCE(MAX(elapsed_ms), 0) as max_elapsed_ms,
COUNT(CASE WHEN query_type = 'WEBSOCKET' THEN 1 END) as ws_count,
COUNT(CASE WHEN query_type LIKE 'REST%%' THEN 1 END) as rest_count,
COUNT(CASE WHEN data_path = 'CACHE' THEN 1 END) as cache_only_count,
COUNT(CASE WHEN data_path = 'DB' THEN 1 END) as db_only_count,
COUNT(CASE WHEN data_path = 'HYBRID' THEN 1 END) as hybrid_count,
COUNT(CASE WHEN status = 'COMPLETED' THEN 1 END) as completed_count,
COUNT(CASE WHEN status != 'COMPLETED' THEN 1 END) as failed_count,
COALESCE(AVG(unique_vessels), 0) as avg_vessels,
COALESCE(AVG(total_points), 0) as avg_points_before,
COALESCE(AVG(points_after_simplify), 0) as avg_points_after,
COALESCE(AVG(response_bytes), 0) as avg_response_size_bytes
FROM signal.t_query_metrics
WHERE created_at >= NOW() - INTERVAL '%d hours'
""".formatted(Math.min(hours, 720));
return queryJdbcTemplate.queryForMap(sql);
}
@GetMapping("/timeseries")
@Operation(summary = "Query metrics time series", description = "Hourly/daily bucket aggregation + top 10 clients")
public Map<String, Object> getTimeSeries(
@Parameter(description = "Lookback window (days)") @RequestParam(defaultValue = "7") int days,
@Parameter(description = "Top-client grouping key (ip | id)") @RequestParam(defaultValue = "ip") String groupBy) {
days = Math.min(days, 90);
String granularity = days <= 7 ? "HOURLY" : "DAILY";
String bucketExpr = days <= 7 ? "DATE_TRUNC('hour', created_at)" : "DATE(created_at)";
String bucketSql = """
SELECT %s AS bucket,
COUNT(*) AS query_count,
COALESCE(AVG(elapsed_ms), 0) AS avg_elapsed_ms,
COALESCE(MAX(elapsed_ms), 0) AS max_elapsed_ms,
COALESCE(AVG(response_bytes), 0) AS avg_response_bytes,
COUNT(CASE WHEN query_type = 'WEBSOCKET' THEN 1 END) AS ws_count,
COUNT(CASE WHEN query_type LIKE 'REST%%' THEN 1 END) AS rest_count,
COUNT(CASE WHEN data_path = 'CACHE' THEN 1 END) AS cache_count,
COUNT(CASE WHEN data_path = 'DB' THEN 1 END) AS db_count,
COUNT(CASE WHEN data_path = 'HYBRID' THEN 1 END) AS hybrid_count
FROM signal.t_query_metrics
WHERE created_at >= NOW() - INTERVAL '%d days'
GROUP BY bucket ORDER BY bucket
""".formatted(bucketExpr, days);
List<Map<String, Object>> buckets = queryJdbcTemplate.queryForList(bucketSql);
boolean groupById = "id".equalsIgnoreCase(groupBy);
String clientColumn = groupById ? "client_id" : "client_ip";
String topClientsSql = """
SELECT %s AS client, COUNT(*) AS query_count,
COALESCE(AVG(elapsed_ms), 0) AS avg_elapsed_ms
FROM signal.t_query_metrics
WHERE created_at >= NOW() - INTERVAL '%d days'
AND %s IS NOT NULL
GROUP BY %s
ORDER BY query_count DESC LIMIT 10
""".formatted(clientColumn, days, clientColumn, clientColumn);
List<Map<String, Object>> topClients = queryJdbcTemplate.queryForList(topClientsSql);
Map<String, Object> result = new LinkedHashMap<>();
result.put("buckets", buckets);
result.put("topClients", topClients);
result.put("granularity", granularity);
result.put("groupBy", groupById ? "id" : "ip");
return result;
}
}
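Since `sortBy` is concatenated directly into the ORDER BY clause above, the `ALLOWED_SORT_COLUMNS` whitelist is the only injection barrier. A standalone sketch of that guard (the set contents and class name here are illustrative, not the actual controller code):

```java
import java.util.Set;

// Sketch of the whitelist guard: unvalidated input never reaches the SQL
// string; anything outside the whitelist falls back to the default column.
public class SortWhitelistSketch {
    // Assumed column set for illustration (the real constant is defined elsewhere).
    static final Set<String> ALLOWED_SORT_COLUMNS =
            Set.of("created_at", "elapsed_ms", "response_bytes", "total_points");

    static String orderByClause(String sortBy, String sortDir) {
        String column = ALLOWED_SORT_COLUMNS.contains(sortBy) ? sortBy : "created_at";
        String direction = "asc".equalsIgnoreCase(sortDir) ? "ASC" : "DESC";
        return " ORDER BY " + column + " " + direction;
    }

    public static void main(String[] args) {
        System.out.println(orderByClause("elapsed_ms", "asc"));
        // malicious input falls back to the default column
        System.out.println(orderByClause("1; DROP TABLE x--", "desc"));
    }
}
```

Only the column name needs this treatment; all filter values already go through `?` placeholders.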


@@ -0,0 +1,153 @@
package gc.mda.signal_batch.monitoring.service;
import jakarta.annotation.PostConstruct;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Service;
import java.sql.Timestamp;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
/**
 * Bulk-INSERT buffer service for query metrics.
 *
 * Collects metrics lock-free via ConcurrentLinkedQueue and flushes them with batchUpdate every 10 seconds.
 * Guarantees 1 request = 1 record: WebSocket enqueues once per completed query, REST once per call.
 */
@Slf4j
@Service
public class QueryMetricsBufferService {
private static final int MAX_FLUSH_SIZE = 500;
private static final String INSERT_SQL = """
INSERT INTO signal.t_query_metrics (
query_id, session_id, query_type, created_at,
start_time, end_time, zoom_level, viewport_bounds, requested_mmsi,
data_path, cache_hit_days, db_query_days, db_conn_total,
unique_vessels, total_tracks, total_points, points_after_simplify,
total_chunks, response_bytes,
elapsed_ms, db_query_ms, simplify_ms, backpressure_events,
status, client_ip, client_id
) VALUES (
?, ?, ?, now(),
?, ?, ?, ?, ?,
?, ?, ?, ?,
?, ?, ?, ?,
?, ?,
?, ?, ?, ?,
?, ?, ?
)
""";
private final JdbcTemplate queryJdbcTemplate;
private final ConcurrentLinkedQueue<QueryMetricsService.QueryMetric> buffer = new ConcurrentLinkedQueue<>();
public QueryMetricsBufferService(
@Qualifier("queryJdbcTemplate") JdbcTemplate queryJdbcTemplate) {
this.queryJdbcTemplate = queryJdbcTemplate;
}
@PostConstruct
void ensureClientIpColumn() {
try {
queryJdbcTemplate.execute("""
DO $$
BEGIN
IF NOT EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_schema = 'signal' AND table_name = 't_query_metrics' AND column_name = 'client_ip'
) THEN
ALTER TABLE signal.t_query_metrics ADD COLUMN client_ip VARCHAR(45);
CREATE INDEX IF NOT EXISTS idx_query_metrics_client_ip ON signal.t_query_metrics(client_ip, created_at);
END IF;
END $$
""");
log.info("t_query_metrics client_ip column ensured");
} catch (Exception e) {
log.warn("Failed to ensure client_ip column: {}", e.getMessage());
}
}
@PostConstruct
void ensureClientIdColumn() {
try {
queryJdbcTemplate.execute("""
DO $$
BEGIN
IF NOT EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_schema = 'signal' AND table_name = 't_query_metrics' AND column_name = 'client_id'
) THEN
ALTER TABLE signal.t_query_metrics ADD COLUMN client_id VARCHAR(100);
CREATE INDEX IF NOT EXISTS idx_query_metrics_client_id ON signal.t_query_metrics(client_id, created_at);
END IF;
END $$
""");
log.info("t_query_metrics client_id column ensured");
} catch (Exception e) {
log.warn("Failed to ensure client_id column: {}", e.getMessage());
}
}
/**
 * Adds a metric record to the buffer (lock-free).
 */
public void enqueue(QueryMetricsService.QueryMetric metric) {
if (metric == null) return;
buffer.offer(metric);
}
/**
 * Flushes the buffer with batchUpdate every 10 seconds.
 */
@Scheduled(fixedDelay = 10_000)
public void flush() {
if (buffer.isEmpty()) return;
List<QueryMetricsService.QueryMetric> batch = new ArrayList<>(MAX_FLUSH_SIZE);
QueryMetricsService.QueryMetric metric;
while (batch.size() < MAX_FLUSH_SIZE && (metric = buffer.poll()) != null) {
batch.add(metric);
}
if (batch.isEmpty()) return;
try {
List<Object[]> args = batch.stream()
.map(this::toArgs)
.toList();
queryJdbcTemplate.batchUpdate(INSERT_SQL, args);
log.debug("Flushed {} query metrics to DB (remaining: {})", batch.size(), buffer.size());
} catch (Exception e) {
log.warn("Failed to flush query metrics ({} records): {}", batch.size(), e.getMessage());
}
}
private Object[] toArgs(QueryMetricsService.QueryMetric m) {
return new Object[]{
m.getQueryId(), m.getSessionId(), m.getQueryType(),
m.getStartTime() != null ? Timestamp.valueOf(m.getStartTime()) : null,
m.getEndTime() != null ? Timestamp.valueOf(m.getEndTime()) : null,
m.getZoomLevel(), m.getViewportBounds(), m.getRequestedMmsi(),
m.getDataPath(), m.getCacheHitDays(), m.getDbQueryDays(), m.getDbConnTotal(),
m.getUniqueVessels(), m.getTotalTracks(), m.getTotalPoints(), m.getPointsAfterSimplify(),
m.getTotalChunks(), m.getResponseBytes(),
m.getElapsedMs(), m.getDbQueryMs(), m.getSimplifyMs(), m.getBackpressureEvents(),
m.getStatus(), m.getClientIp(), m.getClientId()
};
}
/**
 * Current buffer size (for monitoring).
 */
public int getBufferSize() {
return buffer.size();
}
}
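The `flush()` loop above drains at most `MAX_FLUSH_SIZE` entries per cycle, so a burst of metrics never monopolizes the scheduler thread; the remainder waits for the next 10-second tick. A minimal, framework-free sketch of the same drain pattern (class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;

// Poll up to `max` items from a lock-free queue into a batch.
// Concurrent producers can keep offering while the drain runs.
public class DrainSketch {
    static <T> List<T> drain(ConcurrentLinkedQueue<T> queue, int max) {
        List<T> batch = new ArrayList<>(max);
        T item;
        while (batch.size() < max && (item = queue.poll()) != null) {
            batch.add(item);
        }
        return batch;
    }

    public static void main(String[] args) {
        ConcurrentLinkedQueue<Integer> q = new ConcurrentLinkedQueue<>();
        for (int i = 0; i < 1200; i++) q.offer(i);
        System.out.println(drain(q, 500).size()); // 500
        System.out.println(q.size());             // 700 left for the next tick
    }
}
```

Capping the batch also bounds the size of each `batchUpdate`, keeping individual INSERT round-trips predictable.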


@@ -0,0 +1,136 @@
package gc.mda.signal_batch.monitoring.service;
import lombok.Builder;
import lombok.Getter;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import java.time.LocalDateTime;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
/**
 * Read service for query execution metrics.
 *
 * Ingestion is handled by QueryMetricsBufferService (ConcurrentLinkedQueue + 10-second batch flush).
 * This service is read-only and defines the QueryMetric DTO.
 */
@Slf4j
@Service
public class QueryMetricsService {
private final JdbcTemplate queryJdbcTemplate;
public QueryMetricsService(@Qualifier("queryJdbcTemplate") JdbcTemplate queryJdbcTemplate) {
this.queryJdbcTemplate = queryJdbcTemplate;
}
/**
 * Fetches recent query metrics.
 */
public List<Map<String, Object>> getRecentMetrics(int limit) {
return queryJdbcTemplate.queryForList("""
SELECT query_id, session_id, query_type, created_at,
start_time, end_time, zoom_level, viewport_bounds,
data_path, cache_hit_days, db_query_days, db_conn_total,
unique_vessels, total_tracks, total_points, points_after_simplify,
total_chunks, response_bytes,
elapsed_ms, db_query_ms, simplify_ms, backpressure_events, status
FROM signal.t_query_metrics
ORDER BY created_at DESC
LIMIT ?
""", limit);
}
/**
 * Query metric statistics for a period.
 */
public Map<String, Object> getStats(int days) {
Map<String, Object> stats = new LinkedHashMap<>();
// overall summary
Map<String, Object> summary = queryJdbcTemplate.queryForMap("""
SELECT
COUNT(*) AS total_queries,
ROUND(AVG(elapsed_ms)) AS avg_elapsed_ms,
MAX(elapsed_ms) AS max_elapsed_ms,
ROUND(AVG(unique_vessels)) AS avg_vessels,
ROUND(AVG(total_points)) AS avg_points,
SUM(CASE WHEN data_path = 'CACHE' THEN 1 ELSE 0 END) AS cache_only,
SUM(CASE WHEN data_path = 'HYBRID' THEN 1 ELSE 0 END) AS hybrid,
SUM(CASE WHEN data_path = 'DB' THEN 1 ELSE 0 END) AS db_only,
SUM(CASE WHEN status = 'COMPLETED' THEN 1 ELSE 0 END) AS completed,
SUM(CASE WHEN status = 'CANCELLED' THEN 1 ELSE 0 END) AS cancelled,
SUM(CASE WHEN status = 'ERROR' THEN 1 ELSE 0 END) AS errors,
SUM(CASE WHEN status = 'TIMEOUT' THEN 1 ELSE 0 END) AS timeouts
FROM signal.t_query_metrics
WHERE created_at >= now() - INTERVAL '%d days'
""".formatted(days));
stats.put("summary", summary);
// daily trend
List<Map<String, Object>> daily = queryJdbcTemplate.queryForList("""
SELECT
DATE(created_at) AS date,
COUNT(*) AS query_count,
ROUND(AVG(elapsed_ms)) AS avg_elapsed_ms,
ROUND(AVG(unique_vessels)) AS avg_vessels,
SUM(CASE WHEN status = 'COMPLETED' THEN 1 ELSE 0 END) AS completed,
SUM(CASE WHEN status != 'COMPLETED' THEN 1 ELSE 0 END) AS failed
FROM signal.t_query_metrics
WHERE created_at >= now() - INTERVAL '%d days'
GROUP BY DATE(created_at)
ORDER BY date DESC
""".formatted(days));
stats.put("dailyTrend", daily);
// top 10 slow queries
List<Map<String, Object>> slowQueries = queryJdbcTemplate.queryForList("""
SELECT query_id, created_at, elapsed_ms, unique_vessels, total_points,
data_path, db_conn_total, zoom_level, status
FROM signal.t_query_metrics
WHERE created_at >= now() - INTERVAL '%d days'
ORDER BY elapsed_ms DESC
LIMIT 10
""".formatted(days));
stats.put("slowQueries", slowQueries);
return stats;
}
/**
 * Query metric data class.
 */
@Getter
@Builder
public static class QueryMetric {
private final String queryId;
private final String sessionId;
private final String queryType;
private final LocalDateTime startTime;
private final LocalDateTime endTime;
private final Integer zoomLevel;
private final String viewportBounds;
private final int requestedMmsi;
private final String dataPath;
private final int cacheHitDays;
private final int dbQueryDays;
private final int dbConnTotal;
private final int uniqueVessels;
private final int totalTracks;
private final int totalPoints;
private final int pointsAfterSimplify;
private final int totalChunks;
private final long responseBytes;
private final long elapsedMs;
private final long dbQueryMs;
private final long simplifyMs;
private final int backpressureEvents;
private final String status;
private final String clientIp;
private final String clientId;
}
}


@@ -48,7 +48,7 @@ spring:
       validation-timeout: 5000
       leak-detection-threshold: 60000  # connection leak detection (60s)
       # explicitly add the public schema to search_path for PostGIS functions
-      connection-init-sql: "SET TIME ZONE 'Asia/Seoul'; SET search_path TO signal, public, pg_catalog;"
+      connection-init-sql: "SET TIME ZONE 'Asia/Seoul'; SET search_path TO signal, public, pg_catalog; SET work_mem = '256MB'; SET synchronous_commit = 'off';"
       statement-cache-size: 250
       data-source-properties:
         prepareThreshold: 3
@@ -68,7 +68,7 @@ spring:
       idle-timeout: 600000
       max-lifetime: 1800000
       leak-detection-threshold: 60000  # connection leak detection (60s)
-      connection-init-sql: "SET TIME ZONE 'Asia/Seoul'; SET search_path TO signal, public;"
+      connection-init-sql: "SET TIME ZONE 'Asia/Seoul'; SET search_path TO signal, public; SET synchronous_commit = 'off';"

   # Request size settings
   servlet:
@@ -87,19 +87,12 @@ spring:
 logging:
   level:
     root: INFO
-    gc.mda.signal_batch: DEBUG
-    gc.mda.signal_batch.global.util: INFO
-    gc.mda.signal_batch.global.websocket.service: INFO
-    gc.mda.signal_batch.batch.writer: INFO
-    gc.mda.signal_batch.batch.reader: INFO
-    gc.mda.signal_batch.batch.processor: INFO
-    gc.mda.signal_batch.domain: INFO
-    gc.mda.signal_batch.monitoring: DEBUG
-    gc.mda.signal_batch.monitoring.controller: INFO
-    org.springframework.batch: INFO
+    gc.mda.signal_batch: INFO
+    gc.mda.signal_batch.monitoring: INFO
+    org.springframework.batch: WARN
     org.springframework.jdbc: WARN
     org.postgresql: WARN
-    com.zaxxer.hikari: INFO
+    com.zaxxer.hikari: WARN

 # Batch settings for the dev environment (performance tuning)
 vessel:  # top level, not under spring
@@ -180,6 +173,7 @@ vessel:  # top level, not under spring
     # Track anomaly detection settings
     track:
+      include-abnormal-in-tracks: true  # also store abnormal tracks in normal tables + caches (RL training data collection)
       abnormal-detection:
         large-gap-threshold-hours: 4  # gaps longer than this are not connected
         extreme-speed-threshold: 1000  # speeds above this are always abnormal (knots)
@@ -211,6 +205,9 @@ vessel:  # top level, not under spring
       max-size: 60000  # up to 60,000 vessels
       refresh-interval-minutes: 2  # query 2 minutes of data (allowing for ingest lag)

+    # L2 HourlyTrackCache simplification (enabled in production)
+    hourly-simplification:
+      enabled: true  # production: enabled
     # Abnormal track detection settings (improved)
     abnormal-detection:
@@ -264,8 +261,10 @@ vessel:  # top level, not under spring
         retention-days: 60  # per-zone vessel tracks: 60 days
       t_grid_vessel_tracks:
         retention-days: 30  # per-grid vessel tracks: 30 days
+      t_vessel_tracks_daily:
+        retention-months: 0  # daily tracks: kept permanently
       t_abnormal_tracks:
-        retention-months: 0  # abnormal tracks: kept indefinitely
+        retention-months: 0  # abnormal tracks: kept permanently

 # S&P AIS API cache TTL (production: 120 min)
 app:
@@ -273,17 +272,29 @@ app:
   ais-target:
     ttl-minutes: 120
   ais-api:
-    username: 7cc0517d-5ed6-452e-a06f-5bbfd6ab6ade
-    password: 2LLzSJNqtxWVD8zC
+    username: 86b30c84-5d17-41ac-8c4f-2aa20d791114
+    password: KHZQVc2tMBGtNxvG

   # In-memory cache for daily track data
   cache:
     daily-track:
       enabled: true
-      retention-days: 7   # D-1 ~ D-7 (excluding today)
-      max-memory-gb: 6    # max 6GB (daily avg ~720MB × 7 days = ~5GB)
+      retention-days: 14  # D-1 ~ D-14 (2 weeks; DP simplification cuts memory)
+      max-memory-gb: 10   # max 10GB (post-DP ~400MB/day × 14 days ≈ 6GB + headroom)
       warmup-async: true  # async warmup (does not block server startup)

+  # Memory budget for track data (64GB JVM baseline)
+  track:
+    memory-budget:
+      total-budget-gb: 64                # total JVM heap
+      cache-budget-gb: 35                # L1+L2+L3 caches (L3 5GB + L2 ~14GB + L1 ~3GB + 13GB headroom)
+      query-budget-gb: 20                # concurrent REST/WebSocket queries (60 concurrent × ~300MB)
+      max-single-query-gb: 5             # single-query cap
+      estimation-correction-factor: 1.8  # correction factor based on measurements
+      queue-timeout-seconds: 60
+      warning-threshold: 0.8
+      critical-threshold: 0.95

 # WebSocket load-control settings
 websocket:
   query:


@@ -159,6 +159,8 @@ vessel:
     page-size: ${BATCH_PAGE_SIZE:10000}
     partition-size: ${BATCH_PARTITION_SIZE:24}
     skip-limit: 100
+  track:
+    include-abnormal-in-tracks: false  # true: also store abnormal tracks in normal tables + caches (RL training data collection)
     retry-limit: 3
     # Reader settings
     use-cursor-reader: true  # whether to use the cursor reader
@@ -272,6 +274,13 @@ vessel:
       ttl-minutes: 120  # cache TTL: 120 min (satellite AIS arrives at 30–60 min intervals)
       max-size: 100000  # up to 100,000 vessels (2 hours of accumulation)

+    # L2 HourlyTrackCache simplification settings
+    hourly-simplification:
+      enabled: false              # default: disabled (enabled per profile)
+      cron: "0 30 6,12,18 * * *"  # runs at 06:30, 12:30, 18:30
+      hours-ago: 6                # targets entries older than 6 hours
+      sample-rate: 2              # keep every 2nd point (~50% reduction)
+
 # ==================== S&P Global AIS API settings ====================
 app:
   ais-api:
@@ -284,7 +293,7 @@ app:
   cache:
     ais-target:
       ttl-minutes: 120  # default TTL (overridden per profile)
-      max-size: 300000  # max cache size (300K entries)
+      max-size: 500000  # max cache size (500K entries)

     five-min-track:
       ttl-minutes: 75  # TTL 75 min (1 hour + 15 min slack)
@@ -301,6 +310,18 @@ app:
       warmup-enabled: true
       warmup-days: 7

+  # Memory budget for track data (logical partitioning)
+  track:
+    memory-budget:
+      total-budget-gb: 64                # total JVM heap budget
+      cache-budget-gb: 35                # L1/L2/L3 caches (55%)
+      query-budget-gb: 20                # concurrent REST/WebSocket queries (31%)
+      max-single-query-gb: 5             # single-query cap
+      estimation-correction-factor: 1.8  # memory-estimation correction factor
+      queue-timeout-seconds: 60          # query wait-queue timeout
+      warning-threshold: 0.8             # budget warning threshold (80%)
+      critical-threshold: 0.95           # budget critical threshold (95%)
 # Swagger/OpenAPI settings
 springdoc:
   api-docs:
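As a sanity check on the budget numbers above: cache (35GB) + query (20GB) = 55GB of the 64GB heap, and the thresholds fire at 80% / 95% of a given budget. A small arithmetic sketch (values copied from the config; class and method names here are illustrative, not the actual budget code):

```java
// Classify current usage against a budget using the configured thresholds.
public class BudgetSketch {
    static final double WARNING_THRESHOLD = 0.8;   // warning-threshold: 0.8
    static final double CRITICAL_THRESHOLD = 0.95; // critical-threshold: 0.95

    static String level(double usedGb, double budgetGb) {
        double ratio = usedGb / budgetGb;
        if (ratio >= CRITICAL_THRESHOLD) return "CRITICAL";
        if (ratio >= WARNING_THRESHOLD) return "WARNING";
        return "OK";
    }

    public static void main(String[] args) {
        double cacheBudgetGb = 35.0;               // cache-budget-gb: 35
        System.out.println(level(20.0, cacheBudgetGb)); // OK       (~57%)
        System.out.println(level(29.0, cacheBudgetGb)); // WARNING  (~83%)
        System.out.println(level(34.0, cacheBudgetGb)); // CRITICAL (~97%)
    }
}
```

The `estimation-correction-factor: 1.8` suggests estimated sizes are multiplied by 1.8 before being compared against these budgets, so the effective query headroom is smaller than the raw 20GB figure.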


@@ -0,0 +1,54 @@
-- Query execution metrics table
-- Records performance indicators of WebSocket/REST queries for display on the ApiMetrics page
CREATE TABLE IF NOT EXISTS signal.t_query_metrics (
id BIGSERIAL PRIMARY KEY,
query_id VARCHAR(64) NOT NULL,
session_id VARCHAR(64),
query_type VARCHAR(20) NOT NULL, -- 'WEBSOCKET' | 'REST_V1' | 'REST_V2'
created_at TIMESTAMP NOT NULL DEFAULT now(),
-- request parameters
start_time TIMESTAMP,
end_time TIMESTAMP,
zoom_level INTEGER,
viewport_bounds VARCHAR(200), -- "minLon,minLat,maxLon,maxLat"
requested_mmsi INTEGER DEFAULT 0,
-- processing path
data_path VARCHAR(10), -- 'CACHE' | 'DB' | 'HYBRID'
cache_hit_days INTEGER DEFAULT 0,
db_query_days INTEGER DEFAULT 0,
db_conn_total INTEGER DEFAULT 0,
-- result statistics
unique_vessels INTEGER DEFAULT 0,
total_tracks INTEGER DEFAULT 0,
total_points INTEGER DEFAULT 0,
points_after_simplify INTEGER DEFAULT 0,
total_chunks INTEGER DEFAULT 0,
response_bytes BIGINT DEFAULT 0,
-- performance
elapsed_ms BIGINT DEFAULT 0,
db_query_ms BIGINT DEFAULT 0,
simplify_ms BIGINT DEFAULT 0,
backpressure_events INTEGER DEFAULT 0,
-- result status
status VARCHAR(20) DEFAULT 'COMPLETED' -- 'COMPLETED' | 'CANCELLED' | 'ERROR' | 'TIMEOUT'
);
-- add client_ip column (idempotent)
DO $$
BEGIN
IF NOT EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_schema = 'signal' AND table_name = 't_query_metrics' AND column_name = 'client_ip'
) THEN
ALTER TABLE signal.t_query_metrics ADD COLUMN client_ip VARCHAR(45);
END IF;
END $$;
CREATE INDEX IF NOT EXISTS idx_query_metrics_created ON signal.t_query_metrics(created_at);
CREATE INDEX IF NOT EXISTS idx_query_metrics_type ON signal.t_query_metrics(query_type, created_at);
CREATE INDEX IF NOT EXISTS idx_query_metrics_client_ip ON signal.t_query_metrics(client_ip, created_at);


@@ -12,6 +12,25 @@ import static org.junit.jupiter.api.Assertions.*;
 class SignalKindCodeTest {

+    @Nested
+    @DisplayName("shipName-based BUOY detection")
+    class ShipNameBuoy {
+
+        @Test
+        @DisplayName("two or more '.' or '_' → BUOY (vesselType ignored)")
+        void resolve_buoyByName() {
+            assertEquals("000028", SignalKindCode.resolve("Cargo", null, "BUOY_01_23").getCode());
+            assertEquals("000028", SignalKindCode.resolve("Tanker", null, "AIS.BUOY.01").getCode());
+        }
+
+        @Test
+        @DisplayName("one or fewer '.' or '_' → vesselType applies")
+        void resolve_notBuoyByName() {
+            assertEquals("000023", SignalKindCode.resolve("Cargo", null, "M.V CARGO").getCode());
+            assertEquals("000024", SignalKindCode.resolve("Tanker", null, "OIL_TANKER").getCode());
+        }
+    }
+
     @Nested
     @DisplayName("vesselType direct matching")
     class VesselTypeDirect {
@@ -21,7 +40,7 @@ class SignalKindCodeTest {
             "Cargo, 000023",
             "Tanker, 000024",
             "Passenger, 000022",
-            "AtoN, 000028",
+            "AtoN, 000027",
             "Law Enforcement, 000025",
             "Search and Rescue, 000021",
             "Local Vessel, 000020"
@@ -38,11 +57,11 @@ class SignalKindCodeTest {
         @ParameterizedTest
         @CsvSource({
-            "Tug, 000025",
             "Pilot Boat, 000025",
-            "Tender, 000025",
             "Anti Pollution, 000025",
             "Medical Transport, 000025",
+            "Tug, 000027",
+            "Tender, 000027",
             "High Speed Craft, 000022",
             "Wing in Ground-effect, 000022"
         })
@@ -60,13 +79,13 @@ class SignalKindCodeTest {
         @CsvSource({
             "Vessel, Fishing, 000020",
             "Vessel, Military Operations, 000025",
-            "Vessel, Towing, 000025",
-            "Vessel, Towing (Large), 000025",
-            "Vessel, Dredging/Underwater Ops, 000025",
-            "Vessel, Diving Operations, 000025",
-            "Vessel, Pleasure Craft, 000020",
-            "Vessel, Sailing, 000020",
-            "Vessel, N/A, 000020",
+            "Vessel, Towing, 000027",
+            "Vessel, Towing (Large), 000027",
+            "Vessel, Dredging/Underwater Ops, 000027",
+            "Vessel, Diving Operations, 000027",
+            "Vessel, Pleasure Craft, 000027",
+            "Vessel, Sailing, 000027",
+            "Vessel, N/A, 000027",
             "Vessel, Hazardous Cat A, 000023",
             "Vessel, Hazardous Cat B, 000023",
             "Vessel, Unknown, 000027"


@@ -14,6 +14,34 @@ import static org.assertj.core.api.Assertions.assertThat;
 @DisplayName("SignalKindCode - MDA vessel-kind legend code substitution")
 class SignalKindCodeTest {

+    @Nested
+    @DisplayName("shipName-based BUOY detection (highest priority)")
+    class ShipNameBuoy {
+
+        @ParameterizedTest(name = "shipName={0} → BUOY")
+        @ValueSource(strings = {"BUOY_01_23", "AIS.BUOY.01", "LIGHT__HOUSE", "A.B.C"})
+        @DisplayName("two or more '.' or '_' → BUOY")
+        void resolve_buoyByName(String shipName) {
+            assertThat(SignalKindCode.resolve("Cargo", null, shipName))
+                    .isEqualTo(SignalKindCode.BUOY);
+        }
+
+        @ParameterizedTest(name = "shipName={0} → vesselType applies")
+        @ValueSource(strings = {"M.V CARGO", "SHIP_ONE", "NORMAL SHIP", "ABC"})
+        @DisplayName("one or fewer '.' or '_' → shipName ignored, vesselType applies")
+        void resolve_notBuoyByName(String shipName) {
+            assertThat(SignalKindCode.resolve("Cargo", null, shipName))
+                    .isEqualTo(SignalKindCode.CARGO);
+        }
+
+        @Test
+        @DisplayName("null shipName → vesselType applies")
+        void resolve_nullShipName() {
+            assertThat(SignalKindCode.resolve("Cargo", null, null))
+                    .isEqualTo(SignalKindCode.CARGO);
+        }
+    }
+
     @Nested
     @DisplayName("vesselType direct matching")
     class VesselTypeDirect {
@@ -23,7 +51,6 @@ class SignalKindCodeTest {
             "cargo, CARGO",
             "tanker, TANKER",
             "passenger, FERRY",
-            "aton, BUOY",
             "law enforcement, GOV",
             "search and rescue, KCGV",
             "local vessel, FISHING"
@@ -33,6 +60,12 @@ class SignalKindCodeTest {
             SignalKindCode result = SignalKindCode.resolve(vesselType, null);
             assertThat(result.name()).isEqualTo(expectedName);
         }
+
+        @Test
+        @DisplayName("aton → DEFAULT (generic equipment, not a buoy)")
+        void resolve_aton() {
+            assertThat(SignalKindCode.resolve("aton", null)).isEqualTo(SignalKindCode.DEFAULT);
+        }
     }

     @Nested
@@ -40,12 +73,19 @@ class SignalKindCodeTest {
     class VesselTypeGroup {

         @ParameterizedTest(name = "vesselType={0} → GOV")
-        @ValueSource(strings = {"tug", "pilot boat", "tender", "anti pollution", "medical transport"})
+        @ValueSource(strings = {"pilot boat", "anti pollution", "medical transport"})
         @DisplayName("GOV group matching")
         void resolve_govGroup(String vesselType) {
             assertThat(SignalKindCode.resolve(vesselType, null)).isEqualTo(SignalKindCode.GOV);
         }

+        @ParameterizedTest(name = "vesselType={0} → DEFAULT")
+        @ValueSource(strings = {"tug", "tender"})
+        @DisplayName("tug, tender → DEFAULT")
+        void resolve_tugTenderDefault(String vesselType) {
+            assertThat(SignalKindCode.resolve(vesselType, null)).isEqualTo(SignalKindCode.DEFAULT);
+        }
+
         @ParameterizedTest(name = "vesselType={0} → FERRY")
         @ValueSource(strings = {"high speed craft", "wing in ground-effect"})
         @DisplayName("FERRY group matching")
@@ -70,18 +110,18 @@ class SignalKindCodeTest {
             assertThat(SignalKindCode.resolve("Vessel", "Military Operations")).isEqualTo(SignalKindCode.GOV);
         }

-        @ParameterizedTest(name = "Vessel + {0} → GOV")
+        @ParameterizedTest(name = "Vessel + {0} → DEFAULT")
         @ValueSource(strings = {"towing", "towing (large)", "dredging/underwater ops", "diving operations"})
-        @DisplayName("Vessel + marine operations → GOV")
+        @DisplayName("Vessel + marine operations → DEFAULT")
         void resolve_vesselMarineOps(String extraInfo) {
-            assertThat(SignalKindCode.resolve("Vessel", extraInfo)).isEqualTo(SignalKindCode.GOV);
+            assertThat(SignalKindCode.resolve("Vessel", extraInfo)).isEqualTo(SignalKindCode.DEFAULT);
         }

-        @ParameterizedTest(name = "Vessel + {0} → FISHING")
+        @ParameterizedTest(name = "Vessel + {0} → DEFAULT")
         @ValueSource(strings = {"pleasure craft", "sailing", "n/a"})
-        @DisplayName("Vessel + leisure/other → FISHING")
+        @DisplayName("Vessel + leisure/other → DEFAULT")
         void resolve_vesselLeisure(String extraInfo) {
-            assertThat(SignalKindCode.resolve("Vessel", extraInfo)).isEqualTo(SignalKindCode.FISHING);
+            assertThat(SignalKindCode.resolve("Vessel", extraInfo)).isEqualTo(SignalKindCode.DEFAULT);
         }

         @Test
@@ -164,4 +204,32 @@ class SignalKindCodeTest {
             assertThat(SignalKindCode.BUOY.getCode()).isEqualTo("000028");
         }
     }
+
+    @Nested
+    @DisplayName("shipName BUOY decision (resolve 3-param integration check)")
+    class BuoyNamePattern {
+
+        @ParameterizedTest(name = "{0} → BUOY")
+        @ValueSource(strings = {"A.B.C", "BUOY_01_02", "._", "A.B_C"})
+        @DisplayName("two or more special characters → BUOY")
+        void resolve_buoyPattern(String name) {
+            // substituted with BUOY regardless of vesselType
+            assertThat(SignalKindCode.resolve(null, null, name)).isEqualTo(SignalKindCode.BUOY);
+        }
+
+        @ParameterizedTest(name = "{0} → not BUOY")
+        @ValueSource(strings = {"ABC", "A.B", "A_B", "NORMAL"})
+        @DisplayName("one or fewer special characters → shipName ignored")
+        void resolve_notBuoyPattern(String name) {
+            assertThat(SignalKindCode.resolve(null, null, name)).isEqualTo(SignalKindCode.DEFAULT);
+        }
+
+        @Test
+        @DisplayName("null/blank shipName → vesselType applies")
+        void resolve_nullBlankName() {
+            assertThat(SignalKindCode.resolve("Cargo", null, null)).isEqualTo(SignalKindCode.CARGO);
+            assertThat(SignalKindCode.resolve("Cargo", null, "")).isEqualTo(SignalKindCode.CARGO);
+            assertThat(SignalKindCode.resolve("Cargo", null, " ")).isEqualTo(SignalKindCode.CARGO);
+        }
+    }
 }
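The BUOY cases in these tests all hinge on counting '.' and '_' in `shipName`. A hedged sketch of the rule the test data implies — two or more of those characters combined means BUOY — inferred from the cases above; the real `SignalKindCode.resolve` may differ in details:

```java
// Heuristic inferred from the test cases: AIS buoys are often named with
// dotted/underscored patterns like "BUOY_01_23" or "AIS.BUOY.01", while
// ordinary ship names rarely contain more than one such character.
public class BuoyNameSketch {
    static boolean looksLikeBuoy(String shipName) {
        if (shipName == null || shipName.isBlank()) return false;
        long specials = shipName.chars().filter(c -> c == '.' || c == '_').count();
        return specials >= 2;
    }

    public static void main(String[] args) {
        System.out.println(looksLikeBuoy("BUOY_01_23")); // true  (two '_')
        System.out.println(looksLikeBuoy("M.V CARGO"));  // false (one '.')
        System.out.println(looksLikeBuoy("A.B_C"));      // true  (one '.' + one '_')
    }
}
```

This explains why "M.V CARGO" and "OIL_TANKER" fall back to their vesselType while "A.B_C" does not.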