Compare commits

..

No commits in common. "main" and "feature/dashboard-phase-1" have entirely different histories.

68 changed files with 1,446 additions and 4,384 deletions

View file

@@ -44,7 +44,21 @@
- `@Builder` allowed
- `@Data` prohibited (use only the annotations that are explicitly needed)
- Do not use `@AllArgsConstructor` alone (pair it with `@Builder`)
- Use the `@Slf4j` logger
## Logging
- Use the `@Slf4j` (Lombok) logger
- Do not use printf-style formats in SLF4J `{}` placeholders (`{:.1f}`, `{:d}`, `{%s}`, etc.)
- When numeric formatting is needed, convert with `String.format()` and pass the result
```java
// Wrong
log.info("Processing rate: {:.1f}%", rate);
// Correct
log.info("Processing rate: {}%", String.format("%.1f", rate));
```
- When logging an exception, pass the exception object as the last argument (no placeholder needed)
```java
log.error("Processing failed: {}", id, exception);
```
## Exception handling
- Define custom Exception classes for business exceptions

View file

@@ -1,7 +1,7 @@
{
"$schema": "https://json.schemastore.org/claude-code-settings.json",
"env": {
"CLAUDE_BOT_TOKEN": "ac15488ad66463bd5c4e3be1fa6dd5b2743813c5"
"CLAUDE_BOT_TOKEN": "4804f9f63e799e25d9a8b381e89c8bff11471b7a"
},
"permissions": {
"allow": [

View file

@@ -46,94 +46,72 @@ curl -sf "${GITEA_URL}/gc/template-react-ts/raw/branch/develop/.editorconfig"
### 3. Configure the .claude/ directory
Skip if the team-standard files already exist. Otherwise download them from Gitea using the URL pattern above:
- `.claude/settings.json` — standard per-type permission settings + env (CLAUDE_BOT_TOKEN, etc.) + hooks section (see step 4)
- `.claude/rules/` — team rule files (team-policy, git-workflow, code-style, naming, testing)
- `.claude/skills/` — team skills (create-mr, fix-issue, sync-team-workflow, init-project)
⚠️ Team rules (.claude/rules/), agents (.claude/agents/), the 6 skills, and the scripts are downloaded automatically in step 12 (sync-team-workflow). Only settings.json is set up here.
### 4. Create hook scripts
Create the `.claude/scripts/` directory and the following script files (chmod +x):
### 3.5. Gitea token setup
**CLAUDE_BOT_TOKEN** (team-shared): already included in the `env` field of `settings.json` (set in step 3). No further action needed.
**GITEA_TOKEN** (personal): the personal token required by Git skills such as `/push`, `/mr`, and `/release`.
- `.claude/scripts/on-pre-compact.sh`:
```bash
# Check whether GITEA_TOKEN is currently set
if [ -z "$GITEA_TOKEN" ]; then
echo "GITEA_TOKEN is not set"
#!/bin/bash
# PreCompact hook: only systemMessage is supported (hookSpecificOutput unavailable)
INPUT=$(cat)
cat <<RESP
{
"systemMessage": "Context compaction is starting. Be sure to do the following:\n\n1. memory/MEMORY.md - update the core work state (under 200 lines)\n2. memory/project-snapshot.md - update changed package/type info\n3. memory/project-history.md - append this session's changes\n4. memory/api-types.md - update if API interfaces changed\n5. Record any unfinished work in TodoWrite and in memory"
}
RESP
```
- `.claude/scripts/on-post-compact.sh`:
```bash
#!/bin/bash
INPUT=$(cat)
CWD=$(echo "$INPUT" | python3 -c "import sys,json;print(json.load(sys.stdin).get('cwd',''))" 2>/dev/null || echo "")
if [ -z "$CWD" ]; then
CWD=$(pwd)
fi
PROJECT_HASH=$(echo "$CWD" | sed 's|/|-|g')
MEMORY_DIR="$HOME/.claude/projects/$PROJECT_HASH/memory"
CONTEXT=""
if [ -f "$MEMORY_DIR/MEMORY.md" ]; then
SUMMARY=$(head -100 "$MEMORY_DIR/MEMORY.md" | python3 -c "import sys;print(sys.stdin.read().replace('\\\\','\\\\\\\\').replace('\"','\\\\\"').replace('\n','\\\\n'))" 2>/dev/null)
CONTEXT="The context has been compacted.\\n\\n[Session summary]\\n${SUMMARY}"
fi
if [ -f "$MEMORY_DIR/project-snapshot.md" ]; then
SNAP=$(head -50 "$MEMORY_DIR/project-snapshot.md" | python3 -c "import sys;print(sys.stdin.read().replace('\\\\','\\\\\\\\').replace('\"','\\\\\"').replace('\n','\\\\n'))" 2>/dev/null)
CONTEXT="${CONTEXT}\\n\\n[Latest project state]\\n${SNAP}"
fi
if [ -n "$CONTEXT" ]; then
CONTEXT="${CONTEXT}\\n\\nContinue the work based on the above. See each file in the memory/ directory for details."
echo "{\"hookSpecificOutput\":{\"additionalContext\":\"${CONTEXT}\"}}"
else
echo "{\"hookSpecificOutput\":{\"additionalContext\":\"The context has been compacted. No memory files were found, so ask the user about the previous work.\"}}"
fi
```
**If GITEA_TOKEN is missing**, show the following via **AskUserQuestion**:
- `.claude/scripts/on-commit.sh`:
**Question**: "GITEA_TOKEN is not set. Would you like to create a Gitea personal token?"
- Option 1: Show the token-creation guide (recommended)
- Option 2: Already have one (enter token)
- Option 3: Do it later
**If the token-creation guide is selected**, display:
```
📋 How to create a Gitea token:
1. Open in a browser:
https://gitea.gc-si.dev/user/settings/applications
2. In the "Manage Access Tokens" section, click "Generate New Token"
3. Enter:
- Token Name: "claude-code" (any name)
- Repository and Organization Access: ✅ All (public, private, and limited)
4. Select permissions (set only the 4 below; leave the rest at No Access):
┌─────────────────┬──────────────────┬──────────────────────────────────┐
│ Item            │ Permission       │ Purpose                          │
├─────────────────┼──────────────────┼──────────────────────────────────┤
│ issue           │ Read and Write   │ /fix-issue issue lookup/comments │
│ organization    │ Read             │ access to gc org repos           │
│ repository      │ Read and Write   │ /push, /mr, /release API calls   │
│ user            │ Read             │ API user authentication check    │
└─────────────────┴──────────────────┴──────────────────────────────────┘
5. Click "Generate Token" → ⚠️ The token is shown only once! Be sure to copy it.
```
After displaying it, ask via **AskUserQuestion**: "Enter the generated token"
- Option 1: Enter token (via Other)
- Option 2: Do it later
**When a token is entered**:
1. Validate it with the Gitea API:
```bash
curl -sf "https://gitea.gc-si.dev/api/v1/user" \
-H "Authorization: token <entered token>"
```
- Success: print `✅ <login> (<full_name>) authenticated`
- Failure: print `❌ The token is invalid. Please check it again.` → ask for re-entry
2. Save it to `.claude/settings.local.json` (this file is in .gitignore and is not committed to the repo):
```json
#!/bin/bash
INPUT=$(cat)
COMMAND=$(echo "$INPUT" | python3 -c "import sys,json;print(json.load(sys.stdin).get('tool_input',{}).get('command',''))" 2>/dev/null || echo "")
if echo "$COMMAND" | grep -qE 'git commit'; then
cat <<RESP
{
"env": {
"GITEA_TOKEN": "<entered token>"
"hookSpecificOutput": {
"additionalContext": "A commit was detected. Do the following:\n1. Add the change to docs/CHANGELOG.md\n2. Update the changed parts of memory/project-snapshot.md\n3. Append this change to memory/project-history.md\n4. Update memory/api-types.md if API interfaces changed\n5. If the project has lint configured, check the lint results and fix any issues"
}
}
RESP
else
echo '{}'
fi
```
If a `settings.local.json` already exists, only add/update `env.GITEA_TOKEN`.
**If "Do it later" is selected**: show a warning and continue to the next step:
```
⚠️ Without GITEA_TOKEN, the /push, /mr, and /release skills cannot be used.
When you create a token later, add the following to .claude/settings.local.json:
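As a sketch, the add/update step above might look like the following (python3 is assumed to be available, as elsewhere in this guide; the token value is a placeholder):

```shell
#!/bin/bash
# Merge env.GITEA_TOKEN into .claude/settings.local.json without
# touching any other keys the user may have set.
TOKEN="your-token-here"            # placeholder for the entered token
FILE=".claude/settings.local.json"
mkdir -p "$(dirname "$FILE")"
python3 - "$FILE" "$TOKEN" <<'PY'
import json, os, sys
path, token = sys.argv[1], sys.argv[2]
data = {}
if os.path.exists(path):
    with open(path) as f:
        data = json.load(f)
data.setdefault("env", {})["GITEA_TOKEN"] = token
with open(path, "w") as f:
    json.dump(data, f, indent=2)
PY
```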
{ "env": { "GITEA_TOKEN": "your-token-here" } }
```
### 4. Hook script setup
⚠️ The `.claude/scripts/` script files are downloaded automatically from the server in step 12 (sync-team-workflow).
Only the hooks section of `settings.json` is configured here.
If `settings.json` has no hooks section, add one (merged into the existing settings.json contents):
```json
@@ -221,20 +199,6 @@ chmod +x .githooks/*
*.local
```
**Team workflow managed paths** (files created/managed by sync; do not commit them to the repo):
```
# Team workflow (managed by /sync-team-workflow)
.claude/rules/
.claude/agents/
.claude/skills/push/
.claude/skills/mr/
.claude/skills/create-mr/
.claude/skills/release/
.claude/skills/version/
.claude/skills/fix-issue/
.claude/scripts/
```
### 8. Git exclude setup
Read the `.git/info/exclude` file and append to the bottom while preserving the existing content:
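A minimal sketch of the append (the marker comment and the exact path list are illustrative assumptions, not part of the team standard):

```shell
#!/bin/bash
# Append team-managed paths to .git/info/exclude, preserving what is
# already there and skipping the append when the marker already exists.
EXCLUDE=".git/info/exclude"
MARKER="# Team workflow (managed by /sync-team-workflow)"
mkdir -p "$(dirname "$EXCLUDE")"
touch "$EXCLUDE"
if ! grep -qxF "$MARKER" "$EXCLUDE"; then
  {
    echo ""
    echo "$MARKER"
    echo ".claude/rules/"
    echo ".claude/agents/"
    echo ".claude/scripts/"
  } >>"$EXCLUDE"
fi
```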
@@ -278,14 +242,7 @@ curl -sf --max-time 5 "https://gitea.gc-si.dev/gc/template-common/raw/branch/dev
}
```
### 12. Team workflow sync
Automatically run `/sync-team-workflow` once to download the latest team files (rules, agents, the 6 skills, scripts, hooks) from the server and apply them locally.
This step creates the team-managed files such as `.claude/rules/`, `.claude/agents/`, and `.claude/skills/push/`.
(These files were added to .gitignore in step 7, so they are not committed to the repo.)
### 13. Verification and summary
### 12. Verification and summary
- Print the list of created/modified files
- Check `git config core.hooksPath`
- Verify that the build command runs

View file

@@ -30,43 +30,6 @@ CAN_PUSH=$(echo "$PERMISSIONS" | python3 -c "import sys,json; print(json.load(sy
- If `CAN_PUSH` is `False`: show "MR creation permission is required. Ask the project administrator." and stop
### 0.5. Check that the team workflow is up to date
Skip this step if `.claude/workflow-version.json` does not exist (not a team project).
```bash
# Read the local config
GITEA_URL=$(python3 -c "import json; print(json.load(open('.claude/workflow-version.json')).get('gitea_url', 'https://gitea.gc-si.dev'))" 2>/dev/null)
PROJECT_TYPE=$(python3 -c "import json; print(json.load(open('.claude/workflow-version.json')).get('project_type', ''))" 2>/dev/null)
CUSTOM_PRECOMMIT=$(python3 -c "import json; print(json.load(open('.claude/workflow-version.json')).get('custom_pre_commit', False))" 2>/dev/null)
# Fetch the server hash (use the pre-commit-excluded hash when custom_pre_commit is set)
SERVER_VER=$(curl -sf --max-time 5 "${GITEA_URL}/gc/template-common/raw/branch/develop/workflow-version.json")
if [ "$CUSTOM_PRECOMMIT" = "True" ]; then
SERVER_HASH=$(echo "$SERVER_VER" | python3 -c "import sys,json; print(json.load(sys.stdin).get('content_hashes_custom_precommit',{}).get('${PROJECT_TYPE}',''))" 2>/dev/null)
else
SERVER_HASH=$(echo "$SERVER_VER" | python3 -c "import sys,json; print(json.load(sys.stdin).get('content_hashes',{}).get('${PROJECT_TYPE}',''))" 2>/dev/null)
fi
# Compute the local hash (exclude .githooks/pre-commit when custom_pre_commit is set)
if [ "$CUSTOM_PRECOMMIT" = "True" ]; then
LOCAL_HASH=$(find .claude/rules .claude/agents .claude/scripts .githooks \
.claude/skills/push .claude/skills/mr .claude/skills/create-mr \
.claude/skills/release .claude/skills/version .claude/skills/fix-issue \
-type f ! -path '.githooks/pre-commit' 2>/dev/null | sort | xargs cat 2>/dev/null | shasum -a 256 | cut -d' ' -f1)
else
LOCAL_HASH=$(find .claude/rules .claude/agents .claude/scripts .githooks \
.claude/skills/push .claude/skills/mr .claude/skills/create-mr \
.claude/skills/release .claude/skills/version .claude/skills/fix-issue \
-type f 2>/dev/null | sort | xargs cat 2>/dev/null | shasum -a 256 | cut -d' ' -f1)
fi
```
**Handling the comparison result**:
- **Server fetch failed** (`SERVER_HASH` is empty): warn "⚠️ Cannot reach the server; skipping the workflow check" and continue
- **Match** (`LOCAL_HASH == SERVER_HASH`): continue to the next step
- **Mismatch**: print "⚠️ The team workflow is out of date. Running sync..." → **automatically run the sync-team-workflow procedure** → resume the original task when done
### 1. Preflight checks
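The three branches above can be sketched as a small helper (`SERVER_HASH` and `LOCAL_HASH` come from the snippet above; `run_sync` is a hypothetical stand-in for the sync procedure):

```shell
#!/bin/bash
# Sketch of the comparison handling. run_sync is intentionally a stub
# standing in for the sync-team-workflow procedure.
handle_workflow_check() {
  SERVER_HASH="$1"; LOCAL_HASH="$2"
  if [ -z "$SERVER_HASH" ]; then
    echo "warn: cannot reach server, skipping workflow check"
  elif [ "$LOCAL_HASH" = "$SERVER_HASH" ]; then
    echo "ok: workflow up to date"
  else
    echo "sync: team workflow out of date, running sync"
    # run_sync   # then resume the original task
  fi
}
handle_workflow_check "" "abc"     # server fetch failed
handle_workflow_check "abc" "abc"  # match
handle_workflow_check "abc" "def"  # mismatch
```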
```bash

View file

@@ -30,43 +30,6 @@ CAN_PUSH=$(echo "$PERMISSIONS" | python3 -c "import sys,json; print(json.load(sy
- If `CAN_PUSH` is `False`: show "push permission is required. Ask the project administrator." and stop
### 0.5. Check that the team workflow is up to date
Skip this step if `.claude/workflow-version.json` does not exist (not a team project).
```bash
# Read the local config
GITEA_URL=$(python3 -c "import json; print(json.load(open('.claude/workflow-version.json')).get('gitea_url', 'https://gitea.gc-si.dev'))" 2>/dev/null)
PROJECT_TYPE=$(python3 -c "import json; print(json.load(open('.claude/workflow-version.json')).get('project_type', ''))" 2>/dev/null)
CUSTOM_PRECOMMIT=$(python3 -c "import json; print(json.load(open('.claude/workflow-version.json')).get('custom_pre_commit', False))" 2>/dev/null)
# Fetch the server hash (use the pre-commit-excluded hash when custom_pre_commit is set)
SERVER_VER=$(curl -sf --max-time 5 "${GITEA_URL}/gc/template-common/raw/branch/develop/workflow-version.json")
if [ "$CUSTOM_PRECOMMIT" = "True" ]; then
SERVER_HASH=$(echo "$SERVER_VER" | python3 -c "import sys,json; print(json.load(sys.stdin).get('content_hashes_custom_precommit',{}).get('${PROJECT_TYPE}',''))" 2>/dev/null)
else
SERVER_HASH=$(echo "$SERVER_VER" | python3 -c "import sys,json; print(json.load(sys.stdin).get('content_hashes',{}).get('${PROJECT_TYPE}',''))" 2>/dev/null)
fi
# Compute the local hash (exclude .githooks/pre-commit when custom_pre_commit is set)
if [ "$CUSTOM_PRECOMMIT" = "True" ]; then
LOCAL_HASH=$(find .claude/rules .claude/agents .claude/scripts .githooks \
.claude/skills/push .claude/skills/mr .claude/skills/create-mr \
.claude/skills/release .claude/skills/version .claude/skills/fix-issue \
-type f ! -path '.githooks/pre-commit' 2>/dev/null | sort | xargs cat 2>/dev/null | shasum -a 256 | cut -d' ' -f1)
else
LOCAL_HASH=$(find .claude/rules .claude/agents .claude/scripts .githooks \
.claude/skills/push .claude/skills/mr .claude/skills/create-mr \
.claude/skills/release .claude/skills/version .claude/skills/fix-issue \
-type f 2>/dev/null | sort | xargs cat 2>/dev/null | shasum -a 256 | cut -d' ' -f1)
fi
```
**Handling the comparison result**:
- **Server fetch failed** (`SERVER_HASH` is empty): warn "⚠️ Cannot reach the server; skipping the workflow check" and continue
- **Match** (`LOCAL_HASH == SERVER_HASH`): continue to the next step
- **Mismatch**: print "⚠️ The team workflow is out of date. Running sync..." → **automatically run the sync-team-workflow procedure** → resume the original task when done
### 1. Collect the current state
```bash

View file

@@ -29,43 +29,6 @@ IS_ADMIN=$(echo "$PERMISSIONS" | python3 -c "import sys,json; print(json.load(sy
- If `IS_ADMIN` is `False`: show "Only a project administrator can run a release." and stop
### 0.5. Check that the team workflow is up to date
Skip this step if `.claude/workflow-version.json` does not exist (not a team project).
```bash
# Read the local config
GITEA_URL=$(python3 -c "import json; print(json.load(open('.claude/workflow-version.json')).get('gitea_url', 'https://gitea.gc-si.dev'))" 2>/dev/null)
PROJECT_TYPE=$(python3 -c "import json; print(json.load(open('.claude/workflow-version.json')).get('project_type', ''))" 2>/dev/null)
CUSTOM_PRECOMMIT=$(python3 -c "import json; print(json.load(open('.claude/workflow-version.json')).get('custom_pre_commit', False))" 2>/dev/null)
# Fetch the server hash (use the pre-commit-excluded hash when custom_pre_commit is set)
SERVER_VER=$(curl -sf --max-time 5 "${GITEA_URL}/gc/template-common/raw/branch/develop/workflow-version.json")
if [ "$CUSTOM_PRECOMMIT" = "True" ]; then
SERVER_HASH=$(echo "$SERVER_VER" | python3 -c "import sys,json; print(json.load(sys.stdin).get('content_hashes_custom_precommit',{}).get('${PROJECT_TYPE}',''))" 2>/dev/null)
else
SERVER_HASH=$(echo "$SERVER_VER" | python3 -c "import sys,json; print(json.load(sys.stdin).get('content_hashes',{}).get('${PROJECT_TYPE}',''))" 2>/dev/null)
fi
# Compute the local hash (exclude .githooks/pre-commit when custom_pre_commit is set)
if [ "$CUSTOM_PRECOMMIT" = "True" ]; then
LOCAL_HASH=$(find .claude/rules .claude/agents .claude/scripts .githooks \
.claude/skills/push .claude/skills/mr .claude/skills/create-mr \
.claude/skills/release .claude/skills/version .claude/skills/fix-issue \
-type f ! -path '.githooks/pre-commit' 2>/dev/null | sort | xargs cat 2>/dev/null | shasum -a 256 | cut -d' ' -f1)
else
LOCAL_HASH=$(find .claude/rules .claude/agents .claude/scripts .githooks \
.claude/skills/push .claude/skills/mr .claude/skills/create-mr \
.claude/skills/release .claude/skills/version .claude/skills/fix-issue \
-type f 2>/dev/null | sort | xargs cat 2>/dev/null | shasum -a 256 | cut -d' ' -f1)
fi
```
**Handling the comparison result**:
- **Server fetch failed** (`SERVER_HASH` is empty): warn "⚠️ Cannot reach the server; skipping the workflow check" and continue
- **Match** (`LOCAL_HASH == SERVER_HASH`): continue to the next step
- **Mismatch**: print "⚠️ The team workflow is out of date. Running sync..." → **automatically run the sync-team-workflow procedure** → resume the original task when done
### 1. Preflight checks
- Warn if there are uncommitted changes ("commit with /push first")

View file

@@ -3,163 +3,123 @@ name: sync-team-workflow
description: Synchronizes the team global workflow into the current project
---
Downloads the latest team global workflow files from the server and applies them locally.
Each invocation performs a full sync against the server (no version comparison).
Applies the latest version of the team global workflow to the current project.
## Procedure
### 1. Precondition check
Check that `.claude/workflow-version.json` exists:
- If missing → show "Run /init-project first" and stop
Read the config:
### 1. Fetch the global version
Fetch workflow-version.json from the template-common repo via the Gitea API:
```bash
GITEA_URL=$(python3 -c "import json; print(json.load(open('.claude/workflow-version.json')).get('gitea_url', 'https://gitea.gc-si.dev'))" 2>/dev/null || echo "https://gitea.gc-si.dev")
PROJECT_TYPE=$(python3 -c "import json; print(json.load(open('.claude/workflow-version.json')).get('project_type', ''))" 2>/dev/null || echo "")
curl -sf "${GITEA_URL}/gc/template-common/raw/branch/develop/workflow-version.json"
```
If the project type is empty, auto-detect it:
1. `pom.xml` → java-maven
2. `build.gradle` / `build.gradle.kts` → java-gradle
3. `package.json` + `tsconfig.json` → react-ts
4. Detection failed → ask the user to choose
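The detection order above might be sketched as:

```shell
#!/bin/bash
# Auto-detect the project type in the documented order; an empty result
# means detection failed and the user should be asked to choose.
detect_project_type() {
  if [ -f pom.xml ]; then
    echo "java-maven"
  elif [ -f build.gradle ] || [ -f build.gradle.kts ]; then
    echo "java-gradle"
  elif [ -f package.json ] && [ -f tsconfig.json ]; then
    echo "react-ts"
  else
    echo ""
  fi
}
PROJECT_TYPE=$(detect_project_type)
```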
### 2. Version comparison
Compare against the `applied_global_version` field of the local `.claude/workflow-version.json`:
- Versions match → show "Already up to date" and stop
- Versions differ → extract and display the not-yet-applied changes
### 3. Project type detection
Auto-detection order:
1. Check the `project_type` field of `.claude/workflow-version.json`
2. If absent: `pom.xml` → java-maven, `build.gradle` → java-gradle, `package.json` → react-ts
### Gitea file download URL pattern
⚠️ Gitea raw files must use the **web raw URL**:
⚠️ Gitea raw files must use the **web raw URL** (the `/api/v1/` path cannot be used):
```bash
GITEA_URL="${GITEA_URL:-https://gitea.gc-si.dev}"
# common files: ${GITEA_URL}/gc/template-common/raw/branch/develop/<file-path>
# type-specific files: ${GITEA_URL}/gc/template-${PROJECT_TYPE}/raw/branch/develop/<file-path>
# type-specific files: ${GITEA_URL}/gc/template-<type>/raw/branch/develop/<file-path>
# examples:
curl -sf "${GITEA_URL}/gc/template-common/raw/branch/develop/.claude/rules/team-policy.md"
curl -sf "${GITEA_URL}/gc/template-react-ts/raw/branch/develop/.editorconfig"
```
### 2. Prepare directories
### 4. Download and apply files
Download the type-specific + common template files using the URL pattern above:
Create any missing directories:
```bash
mkdir -p .claude/rules .claude/agents .claude/scripts
mkdir -p .claude/skills/push .claude/skills/mr .claude/skills/create-mr
mkdir -p .claude/skills/release .claude/skills/version .claude/skills/fix-issue
mkdir -p .githooks
```
### 3. Download and apply server files
Download each file with `curl -sf` and save it to the same path under the project root.
Files that fail to download are skipped with a warning.
#### 3-1. template-common files (overwrite)
**Rule files**:
#### 4-1. Rule files (overwrite)
Team rules cannot be modified locally — always replace them with the latest global versions:
```
.claude/rules/team-policy.md
.claude/rules/git-workflow.md
.claude/rules/release-notes-guide.md
.claude/rules/subagent-policy.md
.claude/rules/code-style.md (per type)
.claude/rules/naming.md (per type)
.claude/rules/testing.md (per type)
```
**Agent files**:
#### 4-1b. Agent files (overwrite)
```
.claude/agents/explorer.md
.claude/agents/implementer.md
.claude/agents/reviewer.md
```
**Skill files (6)**:
```
.claude/skills/push/SKILL.md
.claude/skills/mr/SKILL.md
.claude/skills/create-mr/SKILL.md
.claude/skills/release/SKILL.md
.claude/skills/version/SKILL.md
.claude/skills/fix-issue/SKILL.md
#### 4-2. settings.json (partial update)
⚠️ settings.json is downloaded from the **type-specific template** (it is not in template-common):
```bash
curl -sf "${GITEA_URL}/gc/template-${PROJECT_TYPE}/raw/branch/develop/.claude/settings.json"
```
**Hook scripts**:
Compare the downloaded latest settings.json with the local settings.json and update it partially:
- `env`: replace with the latest global values (team-common environment variables such as CLAUDE_BOT_TOKEN)
- `deny` list: replace with the latest global values
- `allow` list: keep existing user customizations + merge in the global defaults
- `hooks`: replace using the hooks JSON block in the init-project SKILL.md (add it if missing)
- SessionStart(compact) → on-post-compact.sh
- PreCompact → on-pre-compact.sh
- PostToolUse(Bash) → on-commit.sh
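A sketch of the partial merge, using python3 as elsewhere in this guide (the sample inputs and the exact key layout are illustrative assumptions):

```shell
#!/bin/bash
# Sketch of the partial settings.json merge: env/deny/hooks replaced
# with the server version, allow merged. Sample inputs for illustration:
mkdir -p .claude
cat > .claude/settings.json <<'EOF'
{"env":{"OLD":"1"},"permissions":{"allow":["Bash(custom:*)"],"deny":[]}}
EOF
cat > /tmp/server-settings.json <<'EOF'
{"env":{"CLAUDE_BOT_TOKEN":"<team token>"},"permissions":{"allow":["Read"],"deny":["Bash(rm:*)"]},"hooks":{}}
EOF
python3 - .claude/settings.json /tmp/server-settings.json <<'PY'
import json, sys
local = json.load(open(sys.argv[1])); server = json.load(open(sys.argv[2]))
local["env"] = server.get("env", {})                         # env: replace
p = local.setdefault("permissions", {}); sp = server.get("permissions", {})
p["deny"] = sp.get("deny", [])                               # deny: replace
p["allow"] = list(dict.fromkeys(p.get("allow", []) + sp.get("allow", [])))  # allow: merge
if "hooks" in server: local["hooks"] = server["hooks"]       # hooks: replace
json.dump(local, open(sys.argv[1], "w"), indent=2)
PY
```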
#### 4-3. Skill files (overwrite)
```
.claude/skills/create-mr/SKILL.md
.claude/skills/fix-issue/SKILL.md
.claude/skills/sync-team-workflow/SKILL.md
.claude/skills/init-project/SKILL.md
.claude/skills/push/SKILL.md
.claude/skills/mr/SKILL.md
.claude/skills/release/SKILL.md
.claude/skills/version/SKILL.md
```
#### 4-4. Git hooks (overwrite + executable bit)
`commit-msg` and `post-checkout` are **always replaced with the team standard** (team communication rules + infrastructure).
For `pre-commit`, check the `custom_pre_commit` flag in `.claude/workflow-version.json`:
- `"custom_pre_commit": true` → skip pre-commit (keep the project customization) and log "⚠️ keeping project-custom pre-commit"
- Flag missing or false → replace with the team standard
```bash
chmod +x .githooks/*
```
#### 4-5. Hook script refresh
Extract the latest scripts from the code blocks in the init-project SKILL.md and overwrite:
```
.claude/scripts/on-pre-compact.sh
.claude/scripts/on-post-compact.sh
.claude/scripts/on-commit.sh
```
Grant execute permission: `chmod +x .claude/scripts/*.sh`
**Git hooks** (commit-msg and post-checkout are always replaced):
```
.githooks/commit-msg
.githooks/post-checkout
```
Download example:
```bash
curl -sf "${GITEA_URL}/gc/template-common/raw/branch/develop/.claude/rules/team-policy.md" -o ".claude/rules/team-policy.md"
```
#### 3-2. template-{type} files (per-type overwrite)
```
.claude/rules/code-style.md
.claude/rules/naming.md
.claude/rules/testing.md
```
**pre-commit hook**:
Check the `custom_pre_commit` flag in `.claude/workflow-version.json`:
- `"custom_pre_commit": true` → skip pre-commit and log "⚠️ keeping project-custom pre-commit"
- Flag missing or false → replace `.githooks/pre-commit`
Download example:
```bash
curl -sf "${GITEA_URL}/gc/template-${PROJECT_TYPE}/raw/branch/develop/.claude/rules/code-style.md" -o ".claude/rules/code-style.md"
```
#### 3-3. Grant execute permission
```bash
chmod +x .githooks/* 2>/dev/null
chmod +x .claude/scripts/*.sh 2>/dev/null
```
### 4. settings.json partial merge
⚠️ settings.json is downloaded from the **type-specific template** (it is not in template-common):
```bash
SERVER_SETTINGS=$(curl -sf "${GITEA_URL}/gc/template-${PROJECT_TYPE}/raw/branch/develop/.claude/settings.json")
```
Compare the downloaded latest settings.json with the local `.claude/settings.json` and update it partially:
- `env`: replace with the latest server values
- `deny` list: replace with the latest server values
- `allow` list: keep existing user customizations + merge in the server defaults
- `hooks`: replace with the latest server values
### 5. Update workflow-version.json
Fetch the server's latest `workflow-version.json`:
```bash
SERVER_VER=$(curl -sf "${GITEA_URL}/gc/template-common/raw/branch/develop/workflow-version.json")
SERVER_VERSION=$(echo "$SERVER_VER" | python3 -c "import sys,json; print(json.load(sys.stdin).get('version',''))")
```
Update `.claude/workflow-version.json`:
### 5. Update the local version
Refresh `.claude/workflow-version.json`:
```json
{
"applied_global_version": "<server version>",
"applied_date": "<current date>",
"project_type": "<project type>",
"gitea_url": "<GITEA_URL>"
"applied_global_version": "new-version",
"applied_date": "today's date",
"project_type": "detected-type",
"gitea_url": "https://gitea.gc-si.dev"
}
```
Preserve existing fields (`custom_pre_commit`, etc.).
### 6. Change report
- Print the list of downloaded/updated files
- Show the most recent entries from `changes` in the server `workflow-version.json`
- Result format:
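A sketch of an update that preserves existing fields such as `custom_pre_commit` (the version string is a placeholder for the value fetched above):

```shell
#!/bin/bash
# Refresh .claude/workflow-version.json with the server version while
# keeping any fields already present (e.g. custom_pre_commit).
SERVER_VERSION="1.6.1"   # placeholder; comes from the server lookup above
mkdir -p .claude
python3 - "$SERVER_VERSION" <<'PY'
import datetime, json, os, sys
path = ".claude/workflow-version.json"
data = {}
if os.path.exists(path):
    with open(path) as f:
        data = json.load(f)   # existing fields survive the update
data["applied_global_version"] = sys.argv[1]
data["applied_date"] = datetime.date.today().isoformat()
data.setdefault("project_type", "")
data.setdefault("gitea_url", "https://gitea.gc-si.dev")
with open(path, "w") as f:
    json.dump(data, f, indent=2)
PY
```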
```
✅ Team workflow sync complete
Version: v1.6.0
Updated files: 22 (rules 7, agents 3, skills 6, scripts 3, hooks 3)
settings.json: partial update (env, deny, hooks)
```
## Required environment variables
None (Gitea raw URLs require no authentication)
- Check the changes with `git diff`
- Print the list of updated files
- Display the change log (`changes` from the global workflow-version.json)
- Note any follow-up actions (build check, dependency updates, etc.)

View file

@@ -1,6 +1,6 @@
{
"applied_global_version": "1.6.1",
"applied_date": "2026-03-08",
"applied_global_version": "1.5.0",
"applied_date": "2026-03-01",
"project_type": "java-maven",
"gitea_url": "https://gitea.gc-si.dev"
}

View file

@@ -109,8 +109,8 @@ jobs:
echo "--- Starting service ---"
systemctl start signal-batch
# Step 5: startup check (up to 180s — 64GB-heap AlwaysPreTouch + cache warm-up)
for i in $(seq 1 180); do
# Step 5: startup check (up to 90s — 64GB-heap AlwaysPreTouch)
for i in $(seq 1 90); do
if curl -sf "$BASE_URL/actuator/health/liveness" > /dev/null 2>&1; then
echo "Service started successfully (${i}s)"
curl -s "$BASE_URL/actuator/health"

View file

@@ -4,139 +4,5 @@
## [Unreleased]
## [2026-03-27.3]
### Added
- Flag to include abnormal trajectories in saved tracks (`include-abnormal-in-tracks`) — for reinforcement-learning data collection
### Fixed
- Fixed missing client_id collection on REST API paths — extracted a shared JWT cookie-parsing method
## [2026-03-27.2]
### Fixed
- Fixed the Top-client IP/ID toggle active-state distinction and display bug
- Added a user ID column to the query history (metrics page)
## [2026-03-27]
### Added
- Integrated the L1/L2 cache into WebSocket replay queries — removed DB dependency for HOURLY/5MIN ranges; same-day queries served 100% from cache
- Query-metric user ID collection — extracts the authenticated user's email from the GC_SESSION JWT
- Dashboard Top-client IP/ID toggle — switches between IP and user-ID grouping via the groupBy parameter
### Fixed
- Fixed vessel-info SQL column name (ship_nm → name) — resolves "bad SQL grammar" failures on vessel-info lookups
## [2026-03-19]
### Changed
- Extended the CI/CD deployment health-check wait from 90s to 180s — accommodates 64GB-heap startup timeouts
### Misc
- Changed the AIS API access account
## [2026-03-18]
### Fixed
- Moved the AIS Import Job schedule from :15s to :45s — copes with frequent empty (0-row) responses after the API server changed its load timing
## [2026-03-17]
### Added
- Recent vessel position detail API (`POST /api/v1/vessels/recent-positions-detail`) — spatial filter (polygon/circle) + AIS detail fields (callSign, status, destination, eta, draught, length, width)
### Changed
- Expanded the AIS API WebClient buffer from 50MB to 100MB — avoids DataBufferLimitException at peak
## [2026-03-13]
### Added
- Multi-area/STS API optimization — unified AreaSearch/VesselContact concurrency and memory management, dynamic N-area (2–10) sequential-transit SQL, added the chnPrmShipOnly parameter
### Changed
- Performance optimization — ArrayList pre-allocation, JTS Coordinate reuse, equirectangular distance approximation, stream → single-loop conversion
- Improved DataPipeline dashboard chart visualization
## [2026-03-10]
### Added
- Extended query-metric collection + dashboard performance charts — client IP collection (REST/WS), response-size estimation, timeseries API, 5 dashboard query-performance charts (response time, volume, cache path, response size, top clients)
- API/WS query-metric history — BufferService (batch flush) + /history and /summary APIs + frontend summary cards, filters, pagination
## [2026-03-09]
### Fixed
- Fixed a bug where queryWithCache lost single-source (DB/cache) responses — allTracks.clear() destroyed results when mergeTracksByVessel() shared references
### Changed
- Cleaned up production log levels — routine CACHE-MONITOR logs (putAll/get) moved to DEBUG; important events (removeRange/simplify) stay at INFO
- Lowered Spring Batch/HikariCP logs from INFO to WARN
### Misc
- Added permanent retention for t_vessel_tracks_daily partitions (default 3 months → unlimited)
## [2026-03-08]
### Added
- L3 daily-cache DP (Douglas-Peucker) pre-simplification — tolerance 0.001 (~100m) removes straight segments while preserving direction changes
- Extended daily-cache in-memory retention from 7 to 14 days (maxMemory 6 → 10GB)
- Haversine-based speed recalculation after simplification (recalculateSpeeds)
### Changed
- Query DataSource: work_mem 256MB + synchronous_commit off session tuning
- Batch DataSource: synchronous_commit off session tuning
### Misc
- Team workflow sync v1.5.0 → v1.6.1
## [2026-03-02]
### Added
- React 19 SPA dashboard (7 pages: Dashboard, JobMonitor, DataPipeline, AreaStats, ApiExplorer, AbnormalTracks, ApiMetrics)
- Unified multi-tier in-memory cache (L1/L2/L3) lookups + CACHE-MONITOR logs
- Migrated Ship-GIS features — recent positions / vessel tracks / viewport replay
- Migrated the multi-area transit track analysis + STS contact analysis frontend
- Area-analysis/STS report modal + image export
- Track/replay ship-type icons + raw-data panel
- Improved DataPipeline daily chart visualization — stacked bar + duration bar
- ChnPrmShip dedicated DB history + API enrichment + ShipImage V2
- Latest-position API for China-permitted vessels
- recent-positions IMO field + vessel-photo inventory API + photo enrichment
- Stale-data conversion to abnormal trajectories — preserves information when past timestamps arrive
- O(1) key-based direct lookup for L1/L2/L3 caches (replacing O(n) full scans)
- Logical partitioning of the 64GB JVM memory budget (cache 35GB / query 20GB / system 9GB)
- L2 HourlyTrackCache Nth-point simplification scheduler for entries older than 6 hours
- Memory-budget monitoring API (`GET /api/monitoring/cache/budget`)
### Fixed
- Made cancelQuery idempotent — canceling a completed query returns success instead of an error
- Added parseTimestamp failure logging; simplified the isNightTimeContact night-time detection logic
- Fixed daily merge filtering out everything due to ST_AsText WKT whitespace mismatch
- Extended the L2 warm-up range — include yesterday's data when starting before the daily job
- Fixed html2canvas oklch/oklab color parsing error
- Fixed track-query 500 error + unresponsive replay queries
- Fixed shipimg path conflict — added a numeric pattern constraint to /{imo}
- UTC timezone conversion + daily-cache partial fallback
- Added DB fallback for MMSIs missing from V2 cache lookups
- Fixed the cache maxSize config path — application.yml is the actual source
- Fixed sea-grid statistics ROUND function type-cast error
- Removed ST_Contains from sea-grid queries — simplified to a bounding-box join
- Fixed Dashboard API integration errors — cache monitoring + rendering safety
- Switched MonitoringController from legacy tile queries to AIS position/track queries
### Changed
- Improved SignalKindCode mapping rules — aton/tug/tender → DEFAULT; added shipName BUOY detection
- Single-pass signal_kind_code substitution on the response path — substitute at cache-store time; responses use DB/cache values directly
- Full optimization of ChunkedTrackStreamingService — fixed the isQueryCancelled bug, QueryContext thread safety, query-metric DB persistence, 400 lines of dead code removed, VesselInfo N+1 resolved
- API response-size optimization — gzip compression, NON_NULL, precision limits
- API response optimization + progressive rendering + sea-grid choropleth map
- Switched the hourly job to in-memory merging — removed N+1 SQL
- Optimized the daily job with in-memory caching — removed N+1 SQL
- Raised L1/L2 cache maxSize based on measurements (L2 3.5M → 7M)
- SNP API migration and full legacy code cleanup
### Misc
- Gitea Actions CI/CD pipeline + systemd service setup
- Team workflow sync v1.2.0 → v1.5.0
- Swagger UI updates — server URLs, DTO @Schema, @Parameter
- Added the CLAUDE_BOT_TOKEN environment variable to settings.json

View file

@@ -6,10 +6,6 @@ import type {
HaeguStat,
MetricsSummary,
ProcessingDelay,
QueryMetricsPage,
QueryMetricsParams,
QueryMetricsSummary,
QueryMetricsTimeSeries,
ThroughputMetrics,
} from './types.ts'
@@ -49,26 +45,4 @@ export const monitorApi = {
getHaeguStats(): Promise<Record<string, unknown>[]> {
return fetchJson('/admin/haegu/stats')
},
getQueryMetricsHistory(params: QueryMetricsParams): Promise<QueryMetricsPage> {
const qs = new URLSearchParams()
if (params.queryType) qs.set('queryType', params.queryType)
if (params.dataPath) qs.set('dataPath', params.dataPath)
if (params.status) qs.set('status', params.status)
if (params.elapsedMsMin != null) qs.set('elapsedMsMin', String(params.elapsedMsMin))
if (params.elapsedMsMax != null) qs.set('elapsedMsMax', String(params.elapsedMsMax))
qs.set('page', String(params.page ?? 0))
qs.set('size', String(params.size ?? 20))
qs.set('sortBy', params.sortBy ?? 'created_at')
qs.set('sortDir', params.sortDir ?? 'desc')
return fetchJson(`/api/monitoring/query-metrics/history?${qs}`)
},
getQueryMetricsSummary(hours = 24): Promise<QueryMetricsSummary> {
return fetchJson(`/api/monitoring/query-metrics/summary?hours=${hours}`)
},
getQueryMetricsTimeSeries(days = 7, groupBy: 'ip' | 'id' = 'ip'): Promise<QueryMetricsTimeSeries> {
return fetchJson(`/api/monitoring/query-metrics/timeseries?days=${days}&groupBy=${groupBy}`)
},
}

View file

@@ -187,97 +187,6 @@ export interface ThroughputMetrics {
partitionSizes: PartitionSize[]
}
/* Query Metrics (query history) */
export interface QueryMetricRow {
query_id: string
query_type: string
created_at: string
data_path: string
status: string
zoom_level: number | null
requested_mmsi: number
unique_vessels: number
total_points: number
points_after_simplify: number
total_chunks: number
response_bytes: number
elapsed_ms: number
db_query_ms: number
simplify_ms: number
cache_hit_days: number
db_query_days: number
client_ip: string | null
client_id: string | null
}
export interface QueryMetricsPage {
content: QueryMetricRow[]
totalElements: number
totalPages: number
currentPage: number
pageSize: number
}
export interface QueryMetricsSummary {
total_queries: number
avg_elapsed_ms: number
p95_elapsed_ms: number
max_elapsed_ms: number
ws_count: number
rest_count: number
cache_only_count: number
db_only_count: number
hybrid_count: number
completed_count: number
failed_count: number
avg_vessels: number
avg_points_before: number
avg_points_after: number
avg_response_size_bytes: number
}
/* Query Metrics TimeSeries */
export interface TimeSeriesBucket {
bucket: string
query_count: number
avg_elapsed_ms: number
max_elapsed_ms: number
avg_response_bytes: number
ws_count: number
rest_count: number
cache_count: number
db_count: number
hybrid_count: number
}
export interface TopClient {
client: string
client_ip?: string
query_count: number
avg_elapsed_ms: number
}
export interface QueryMetricsTimeSeries {
buckets: TimeSeriesBucket[]
topClients: TopClient[]
granularity: 'HOURLY' | 'DAILY'
groupBy?: 'ip' | 'id'
}
export interface QueryMetricsParams {
queryType?: string
dataPath?: string
status?: string
elapsedMsMin?: number
elapsedMsMax?: number
page?: number
size?: number
sortBy?: string
sortDir?: 'asc' | 'desc'
}
/* Monitor — Data Quality */
export interface DataQuality {

View file

@@ -21,7 +21,6 @@ interface LineChartProps {
xKey: string
height?: number
label?: string
yFormatter?: (value: number) => string
}
export default function LineChart({
@@ -30,7 +29,6 @@ export default function LineChart({
xKey,
height = 240,
label,
yFormatter,
}: LineChartProps) {
return (
<div>
@@ -48,7 +46,6 @@ export default function LineChart({
tick={{ fontSize: 12, fill: 'var(--sb-text-muted)' }}
axisLine={false}
tickLine={false}
tickFormatter={yFormatter}
/>
<Tooltip
contentStyle={{
@@ -57,7 +54,6 @@ export default function LineChart({
borderRadius: 'var(--sb-radius)',
fontSize: 12,
}}
formatter={yFormatter ? (v: number) => yFormatter(v) : undefined}
/>
{series.length > 1 && (
<Legend

View file

@@ -16,10 +16,6 @@ interface DataTableProps<T> {
onRowClick?: (row: T) => void
emptyMessage?: string
pageSize?: number
// Server-side pagination (optional)
totalElements?: number
currentPage?: number
onPageChange?: (page: number) => void
}
export default function DataTable<T>({
@@ -29,19 +25,14 @@ export default function DataTable<T>({
onRowClick,
emptyMessage,
pageSize = 20,
totalElements,
currentPage,
onPageChange,
}: DataTableProps<T>) {
const { t } = useI18n()
const [sortKey, setSortKey] = useState<string | null>(null)
const [sortAsc, setSortAsc] = useState(true)
const [page, setPage] = useState(0)
const isServerSide = totalElements != null && currentPage != null && onPageChange != null
const sorted = useMemo(() => {
if (isServerSide || !sortKey) return data
if (!sortKey) return data
return [...data].sort((a, b) => {
const av = (a as Record<string, unknown>)[sortKey]
const bv = (b as Record<string, unknown>)[sortKey]
@@ -49,12 +40,10 @@ export default function DataTable<T>({
const cmp = av < bv ? -1 : av > bv ? 1 : 0
return sortAsc ? cmp : -cmp
})
}, [data, sortKey, sortAsc, isServerSide])
}, [data, sortKey, sortAsc])
const effectivePage = isServerSide ? currentPage! : page
const total = isServerSide ? totalElements! : sorted.length
const totalPages = Math.ceil(total / pageSize)
const paged = isServerSide ? sorted : sorted.slice(effectivePage * pageSize, (effectivePage + 1) * pageSize)
const totalPages = Math.ceil(sorted.length / pageSize)
const paged = sorted.slice(page * pageSize, (page + 1) * pageSize)
const handleSort = (key: string) => {
if (sortKey === key) {
@@ -65,14 +54,6 @@ export default function DataTable<T>({
}
}
const handlePageChange = (newPage: number) => {
if (isServerSide) {
onPageChange!(newPage)
} else {
setPage(newPage)
}
}
return (
<div>
<div className="sb-table-wrapper">
@@ -86,7 +67,7 @@ export default function DataTable<T>({
style={{ textAlign: col.align ?? 'left', cursor: col.sortable !== false ? 'pointer' : 'default' }}
>
{col.label}
{sortKey === col.key && (sortAsc ? ' ▲' : ' ▼')}
{sortKey === col.key && (sortAsc ? ' \u25B2' : ' \u25BC')}
</th>
))}
</tr>
@@ -121,19 +102,19 @@ export default function DataTable<T>({
{totalPages > 1 && (
<div className="mt-3 flex items-center justify-between text-sm text-muted">
<span>
{total}{t('common.items')} {t('common.of')} {effectivePage * pageSize + 1}-{Math.min((effectivePage + 1) * pageSize, total)}
{sorted.length}{t('common.items')} {t('common.of')} {page * pageSize + 1}-{Math.min((page + 1) * pageSize, sorted.length)}
</span>
<div className="flex gap-1">
<button
onClick={() => handlePageChange(Math.max(0, effectivePage - 1))}
disabled={effectivePage === 0}
onClick={() => setPage(p => Math.max(0, p - 1))}
disabled={page === 0}
className="rounded border border-border px-2 py-1 disabled:opacity-40"
>
{t('common.prev')}
</button>
<button
onClick={() => handlePageChange(Math.min(totalPages - 1, effectivePage + 1))}
disabled={effectivePage >= totalPages - 1}
onClick={() => setPage(p => Math.min(totalPages - 1, p + 1))}
disabled={page >= totalPages - 1}
className="rounded border border-border px-2 py-1 disabled:opacity-40"
>
{t('common.next')}

@ -49,16 +49,6 @@ const en = {
'dashboard.hits': 'Hits',
'dashboard.misses': 'Misses',
'dashboard.dailyVolume': 'Daily Processing Volume',
'dashboard.queryPerformance': 'Query Performance',
'dashboard.responseTimeTrend': 'Response Time Trend',
'dashboard.queryVolume': 'Query Volume',
'dashboard.cachePathRatio': 'Cache Path Ratio',
'dashboard.responseSizeTrend': 'Response Size Trend',
'dashboard.topClients': 'Top Clients',
'dashboard.avgElapsed': 'Avg',
'dashboard.maxElapsed': 'Max',
'dashboard.queries': 'queries',
'dashboard.noChartData': 'No chart data available',
// Job Monitor
'jobs.title': 'Job Monitor',
@ -180,26 +170,8 @@ const en = {
'metrics.cacheHitSummary': 'Cache Hit Summary',
'metrics.hits': 'Hits',
'metrics.misses': 'Misses',
'metrics.queryHistory': 'Query History',
'metrics.totalQueries': 'Total Queries',
'metrics.avgElapsed': 'Avg Response',
'metrics.p95Elapsed': 'P95 Response',
'metrics.cacheHitRate': 'Cache Hit Rate',
'metrics.queryType': 'Type',
'metrics.dataPath': 'Path',
'metrics.queryStatus': 'Status',
'metrics.queryTime': 'Time',
'metrics.vessels': 'Vessels',
'metrics.pointsBefore': 'Points(Before)',
'metrics.pointsAfter': 'Points(After)',
'metrics.simplification': 'Reduction',
'metrics.chunks': 'Chunks',
'metrics.elapsed': 'Elapsed',
'metrics.allTypes': 'All',
'metrics.allPaths': 'All',
'metrics.resetFilters': 'Reset Filters',
'metrics.responseSize': 'Size',
'metrics.clientIp': 'IP',
'metrics.dbMetricsPlaceholder': 'API/WS History Metrics (Coming Soon)',
'metrics.dbMetricsDesc': 'REST/WebSocket request history, response sizes, latency DB storage + query',
// Time Range
'range.1d': '1D',

@ -49,16 +49,6 @@ const ko = {
'dashboard.hits': '히트',
'dashboard.misses': '미스',
'dashboard.dailyVolume': '일별 처리량',
'dashboard.queryPerformance': '쿼리 성능',
'dashboard.responseTimeTrend': '응답시간 추이',
'dashboard.queryVolume': '쿼리 볼륨',
'dashboard.cachePathRatio': '캐시/경로 비율',
'dashboard.responseSizeTrend': '응답 크기 추이',
'dashboard.topClients': 'Top 클라이언트',
'dashboard.avgElapsed': '평균',
'dashboard.maxElapsed': '최대',
'dashboard.queries': '건',
'dashboard.noChartData': '차트 데이터가 없습니다',
// Job Monitor
'jobs.title': 'Job 모니터',
@ -180,26 +170,8 @@ const ko = {
'metrics.cacheHitSummary': '캐시 히트 요약',
'metrics.hits': '히트',
'metrics.misses': '미스',
'metrics.queryHistory': '쿼리 이력',
'metrics.totalQueries': '총 쿼리',
'metrics.avgElapsed': '평균 응답',
'metrics.p95Elapsed': 'P95 응답',
'metrics.cacheHitRate': '캐시 적중률',
'metrics.queryType': '유형',
'metrics.dataPath': '경로',
'metrics.queryStatus': '상태',
'metrics.queryTime': '시각',
'metrics.vessels': '선박',
'metrics.pointsBefore': '포인트(전)',
'metrics.pointsAfter': '포인트(후)',
'metrics.simplification': '간소화',
'metrics.chunks': '청크',
'metrics.elapsed': '응답시간',
'metrics.allTypes': '전체',
'metrics.allPaths': '전체',
'metrics.resetFilters': '필터 초기화',
'metrics.responseSize': '응답 크기',
'metrics.clientIp': 'IP',
'metrics.dbMetricsPlaceholder': 'API/WS 이력 메트릭 (향후 구현)',
'metrics.dbMetricsDesc': 'REST/WebSocket 요청 이력, 응답 크기, 소요시간 DB 저장 + 조회',
// Time Range
'range.1d': '1일',

@ -1,22 +1,12 @@
import { useState, useCallback } from 'react'
import { usePoller } from '../hooks/usePoller.ts'
import { useCachedState } from '../hooks/useCachedState.ts'
import { useI18n } from '../hooks/useI18n.ts'
import { monitorApi } from '../api/monitorApi.ts'
import type { MetricsSummary, CacheStats, ProcessingDelay, CacheDetails, QueryMetricsPage, QueryMetricsSummary, QueryMetricsParams, QueryMetricRow } from '../api/types.ts'
import type { MetricsSummary, CacheStats, ProcessingDelay, CacheDetails } from '../api/types.ts'
import MetricCard from '../components/charts/MetricCard.tsx'
import DataTable, { type Column } from '../components/common/DataTable.tsx'
import { formatNumber, formatBytes } from '../utils/formatters.ts'
import { formatNumber } from '../utils/formatters.ts'
const POLL_INTERVAL = 10_000
const QUERY_POLL_INTERVAL = 30_000
const ELAPSED_RANGES = [
{ label: '< 1s', min: undefined, max: 999 },
{ label: '1-5s', min: 1000, max: 5000 },
{ label: '5-30s', min: 5000, max: 30000 },
{ label: '> 30s', min: 30000, max: undefined },
] as const
export default function ApiMetrics() {
const { t } = useI18n()
@ -25,13 +15,6 @@ export default function ApiMetrics() {
const [cacheDetails, setCacheDetails] = useCachedState<CacheDetails | null>('api.cacheDetail', null)
const [delay, setDelay] = useCachedState<ProcessingDelay | null>('api.delay', null)
// Query History state
const [filter, setFilter] = useState<QueryMetricsParams>({
page: 0, size: 20, sortBy: 'created_at', sortDir: 'desc',
})
const [historyData, setHistoryData] = useState<QueryMetricsPage | null>(null)
const [summaryData, setSummaryData] = useState<QueryMetricsSummary | null>(null)
usePoller(() => {
monitorApi.getMetricsSummary().then(setMetrics).catch(() => {})
monitorApi.getCacheStats().then(setCache).catch(() => {})
@ -39,109 +22,10 @@ export default function ApiMetrics() {
monitorApi.getDelay().then(setDelay).catch(() => {})
}, POLL_INTERVAL)
const fetchQueryData = useCallback(() => {
monitorApi.getQueryMetricsHistory(filter).then(setHistoryData).catch(() => {})
monitorApi.getQueryMetricsSummary(24).then(setSummaryData).catch(() => {})
}, [filter])
usePoller(fetchQueryData, QUERY_POLL_INTERVAL, [filter])
const updateFilter = (patch: Partial<QueryMetricsParams>) => {
setFilter(prev => ({ ...prev, page: 0, ...patch }))
}
const resetFilters = () => {
setFilter({ page: 0, size: 20, sortBy: 'created_at', sortDir: 'desc' })
}
const memUsed = metrics?.memory.used ?? 0
const memMax = metrics?.memory.max ?? 1
const memPct = Math.round((memUsed / memMax) * 100)
// Summary computed values
const totalQueries = summaryData?.total_queries ?? 0
const cacheHitRate = totalQueries > 0
? ((summaryData?.cache_only_count ?? 0) / totalQueries * 100).toFixed(1)
: '0.0'
const historyColumns: Column<QueryMetricRow>[] = [
{
key: 'created_at', label: t('metrics.queryTime'), sortable: false,
render: (row) => {
if (!row.created_at) return '-'
const d = new Date(row.created_at)
// UTC → KST (+9h)
const kst = new Date(d.getTime() + 9 * 60 * 60 * 1000)
const mm = String(kst.getUTCMonth() + 1).padStart(2, '0')
const dd = String(kst.getUTCDate()).padStart(2, '0')
const hh = String(kst.getUTCHours()).padStart(2, '0')
const mi = String(kst.getUTCMinutes()).padStart(2, '0')
const ss = String(kst.getUTCSeconds()).padStart(2, '0')
return `${mm}-${dd} ${hh}:${mi}:${ss}`
},
},
{
key: 'query_type', label: t('metrics.queryType'), sortable: false,
render: (row) => {
const isWs = row.query_type === 'WEBSOCKET'
return <span className={`inline-block rounded px-1.5 py-0.5 text-xs font-medium ${isWs ? 'bg-blue-100 text-blue-700 dark:bg-blue-900 dark:text-blue-300' : 'bg-emerald-100 text-emerald-700 dark:bg-emerald-900 dark:text-emerald-300'}`}>{isWs ? 'WS' : 'REST'}</span>
},
},
{
key: 'data_path', label: t('metrics.dataPath'), sortable: false,
render: (row) => {
const path = row.data_path ?? ''
const color = path === 'CACHE' ? 'bg-emerald-100 text-emerald-700 dark:bg-emerald-900 dark:text-emerald-300'
: path === 'DB' ? 'bg-amber-100 text-amber-700 dark:bg-amber-900 dark:text-amber-300'
: 'bg-violet-100 text-violet-700 dark:bg-violet-900 dark:text-violet-300'
return <span className={`inline-block rounded px-1.5 py-0.5 text-xs font-medium ${color}`}>{path}</span>
},
},
{
key: 'status', label: t('metrics.queryStatus'), sortable: false,
render: (row) => {
const ok = row.status === 'COMPLETED'
return <span className={`inline-block rounded px-1.5 py-0.5 text-xs font-medium ${ok ? 'bg-emerald-100 text-emerald-700 dark:bg-emerald-900 dark:text-emerald-300' : 'bg-red-100 text-red-700 dark:bg-red-900 dark:text-red-300'}`}>{row.status}</span>
},
},
{ key: 'unique_vessels', label: t('metrics.vessels'), align: 'right' as const, sortable: false,
render: (row) => formatNumber(row.unique_vessels) },
{ key: 'total_points', label: t('metrics.pointsBefore'), align: 'right' as const, sortable: false,
render: (row) => formatNumber(row.total_points) },
{ key: 'points_after_simplify', label: t('metrics.pointsAfter'), align: 'right' as const, sortable: false,
render: (row) => formatNumber(row.points_after_simplify) },
{
key: 'reduction', label: t('metrics.simplification'), align: 'right' as const, sortable: false,
render: (row) => {
const before = row.total_points || 0
const after = row.points_after_simplify || 0
if (before === 0) return '-'
return `${((1 - after / before) * 100).toFixed(0)}%`
},
},
{ key: 'total_chunks', label: t('metrics.chunks'), align: 'right' as const, sortable: false },
{
key: 'elapsed_ms', label: t('metrics.elapsed'), align: 'right' as const, sortable: false,
render: (row) => {
const ms = row.elapsed_ms || 0
const color = ms < 1000 ? 'text-success' : ms < 5000 ? 'text-warning' : 'text-danger'
return <span className={`font-mono font-medium ${color}`}>{ms < 1000 ? `${ms}ms` : `${(ms / 1000).toFixed(1)}s`}</span>
},
},
{
key: 'response_bytes', label: t('metrics.responseSize'), align: 'right' as const, sortable: false,
render: (row) => row.response_bytes ? formatBytes(row.response_bytes) : '-',
},
{
key: 'client_ip', label: t('metrics.clientIp'), sortable: false,
render: (row) => row.client_ip ? <span className="font-mono text-xs">{row.client_ip}</span> : '-',
},
{
key: 'client_id', label: 'ID', sortable: false,
render: (row) => row.client_id ? <span className="font-mono text-xs">{row.client_id}</span> : '-',
},
]
return (
<div className="space-y-6 fade-in">
<h1 className="text-2xl font-bold">{t('metrics.title')}</h1>
@ -294,114 +178,12 @@ export default function ApiMetrics() {
</div>
</div>
{/* Query History Section */}
<div className="sb-card">
<div className="sb-card-header">{t('metrics.queryHistory')}</div>
{/* Summary Cards */}
<div className="mb-4 grid grid-cols-2 gap-3 lg:grid-cols-4">
<MetricCard
title={t('metrics.totalQueries')}
value={summaryData ? formatNumber(totalQueries) : '-'}
subtitle={summaryData ? `WS:${summaryData.ws_count} / REST:${summaryData.rest_count}` : undefined}
/>
<MetricCard
title={t('metrics.avgElapsed')}
value={summaryData ? `${((summaryData.avg_elapsed_ms ?? 0) / 1000).toFixed(1)}s` : '-'}
/>
<MetricCard
title={t('metrics.p95Elapsed')}
value={summaryData ? `${((summaryData.p95_elapsed_ms ?? 0) / 1000).toFixed(1)}s` : '-'}
/>
<MetricCard
title={t('metrics.cacheHitRate')}
value={summaryData ? `${cacheHitRate}%` : '-'}
subtitle={summaryData ? `C:${summaryData.cache_only_count}/DB:${summaryData.db_only_count}/H:${summaryData.hybrid_count}` : undefined}
/>
{/* Placeholder for future DB-based metrics */}
<div className="sb-card border-dashed">
<div className="py-6 text-center text-sm text-muted">
<p>{t('metrics.dbMetricsPlaceholder')}</p>
<p className="mt-1 text-xs opacity-60">{t('metrics.dbMetricsDesc')}</p>
</div>
{/* Filters */}
<div className="mb-4 flex flex-wrap items-center gap-3 text-sm">
{/* Query Type toggle */}
<div className="flex items-center gap-1">
<span className="text-muted mr-1">{t('metrics.queryType')}:</span>
{[undefined, 'WEBSOCKET', 'REST_V2'].map((val) => (
<button
type="button"
key={val ?? 'all'}
onClick={() => updateFilter({ queryType: val })}
className={`rounded px-2 py-1 text-xs font-medium transition ${
filter.queryType === val
? 'bg-primary text-white'
: 'bg-surface-secondary text-muted hover:bg-surface-tertiary'
}`}
>
{val == null ? t('metrics.allTypes') : val === 'WEBSOCKET' ? 'WS' : 'REST'}
</button>
))}
</div>
{/* Data Path toggle */}
<div className="flex items-center gap-1">
<span className="text-muted mr-1">{t('metrics.dataPath')}:</span>
{[undefined, 'CACHE', 'DB', 'HYBRID'].map((val) => (
<button
type="button"
key={val ?? 'all'}
onClick={() => updateFilter({ dataPath: val })}
className={`rounded px-2 py-1 text-xs font-medium transition ${
filter.dataPath === val
? 'bg-primary text-white'
: 'bg-surface-secondary text-muted hover:bg-surface-tertiary'
}`}
>
{val ?? t('metrics.allPaths')}
</button>
))}
</div>
{/* Elapsed Time select */}
<select
title={t('metrics.elapsed')}
value={filter.elapsedMsMin != null ? `${filter.elapsedMsMin}-${filter.elapsedMsMax ?? ''}` : ''}
onChange={(e) => {
if (!e.target.value) {
updateFilter({ elapsedMsMin: undefined, elapsedMsMax: undefined })
} else {
const range = ELAPSED_RANGES.find(r =>
`${r.min ?? ''}-${r.max ?? ''}` === e.target.value
)
if (range) updateFilter({ elapsedMsMin: range.min, elapsedMsMax: range.max })
}
}}
className="rounded border border-border bg-surface px-2 py-1 text-xs"
>
<option value="">{t('metrics.elapsed')}: {t('metrics.allTypes')}</option>
{ELAPSED_RANGES.map((r) => (
<option key={r.label} value={`${r.min ?? ''}-${r.max ?? ''}`}>{r.label}</option>
))}
</select>
{/* Reset */}
<button
type="button"
onClick={resetFilters}
className="rounded border border-border px-2 py-1 text-xs text-muted hover:bg-surface-secondary"
>
{t('metrics.resetFilters')}
</button>
</div>
{/* History Table */}
<DataTable<QueryMetricRow>
columns={historyColumns}
data={historyData?.content ?? []}
keyExtractor={(row) => row.query_id}
pageSize={filter.size ?? 20}
totalElements={historyData?.totalElements}
currentPage={historyData?.currentPage}
onPageChange={(p) => setFilter(prev => ({ ...prev, page: p }))}
/>
</div>
</div>
)

@ -1,4 +1,4 @@
import { useState, useCallback } from 'react'
import { useState } from 'react'
import { usePoller } from '../hooks/usePoller.ts'
import { useCachedState } from '../hooks/useCachedState.ts'
import { useI18n } from '../hooks/useI18n.ts'
@ -10,13 +10,11 @@ import type {
DailyStats,
MetricsSummary,
ProcessingDelay,
QueryMetricsTimeSeries,
RunningJob,
} from '../api/types.ts'
import MetricCard from '../components/charts/MetricCard.tsx'
import StatusBadge from '../components/common/StatusBadge.tsx'
import BarChart from '../components/charts/BarChart.tsx'
import LineChart from '../components/charts/LineChart.tsx'
import TimeRangeSelector from '../components/common/TimeRangeSelector.tsx'
import { formatDuration, formatNumber, formatDateTime, formatPercent } from '../utils/formatters.ts'
@ -30,20 +28,7 @@ export default function Dashboard() {
const [delay, setDelay] = useCachedState<ProcessingDelay | null>('dash.delay', null)
const [daily, setDaily] = useCachedState<DailyStats | null>('dash.daily', null)
const [running, setRunning] = useCachedState<RunningJob[]>('dash.running', [])
const [queryTs, setQueryTs] = useCachedState<QueryMetricsTimeSeries | null>('dash.queryTs', null)
const [days, setDays] = useState(7)
const [clientGroupBy, setClientGroupBy] = useState<'ip' | 'id'>('ip')
const [isQueryChartsOpen, setIsQueryChartsOpen] = useState(() =>
localStorage.getItem('dashboard-query-charts') !== 'collapsed',
)
const toggleQueryCharts = useCallback(() => {
setIsQueryChartsOpen(prev => {
const next = !prev
localStorage.setItem('dashboard-query-charts', next ? 'expanded' : 'collapsed')
return next
})
}, [])
usePoller(() => {
batchApi.getStatistics(days).then(setStats).catch(() => {})
@ -52,8 +37,7 @@ export default function Dashboard() {
monitorApi.getDelay().then(setDelay).catch(() => {})
batchApi.getDailyStats().then(setDaily).catch(() => {})
batchApi.getRunningJobs().then(setRunning).catch(() => {})
monitorApi.getQueryMetricsTimeSeries(days, clientGroupBy).then(setQueryTs).catch(() => {})
}, POLL_INTERVAL, [days, clientGroupBy])
}, POLL_INTERVAL, [days])
const memUsage = metrics
? Math.round((metrics.memory.used / metrics.memory.max) * 100)
@ -230,165 +214,6 @@ export default function Dashboard() {
/>
</div>
)}
{/* Query Performance Charts */}
<div className="sb-card">
<button
type="button"
className="sb-card-header flex w-full items-center justify-between cursor-pointer"
onClick={toggleQueryCharts}
>
<span>{t('dashboard.queryPerformance')}</span>
<svg
className={`h-5 w-5 text-muted transition-transform ${isQueryChartsOpen ? 'rotate-180' : ''}`}
fill="none" viewBox="0 0 24 24" stroke="currentColor"
>
<path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M19 9l-7 7-7-7" />
</svg>
</button>
{isQueryChartsOpen && (
<div className="space-y-6 pt-2">
{queryTs && queryTs.buckets.length > 0 ? (
<>
{/* Row 1: Response Time + Query Volume */}
<div className="grid gap-4 lg:grid-cols-2">
<div>
<LineChart
label={t('dashboard.responseTimeTrend')}
data={queryTs.buckets.map(b => ({
time: formatBucket(b.bucket, queryTs.granularity),
avg: Math.round(b.avg_elapsed_ms),
max: Math.round(b.max_elapsed_ms),
}))}
series={[
{ dataKey: 'avg', color: 'var(--sb-primary)', name: t('dashboard.avgElapsed') },
{ dataKey: 'max', color: 'var(--sb-danger)', name: t('dashboard.maxElapsed') },
]}
xKey="time"
height={220}
yFormatter={v => `${v}ms`}
/>
</div>
<div>
<BarChart
label={t('dashboard.queryVolume')}
data={queryTs.buckets.map(b => ({
time: formatBucket(b.bucket, queryTs.granularity),
WS: b.ws_count,
REST: b.rest_count,
}))}
xKey="time"
height={220}
series={[
{ dataKey: 'WS', color: 'var(--sb-primary)', name: 'WebSocket', stackId: 'q' },
{ dataKey: 'REST', color: 'var(--sb-success)', name: 'REST', stackId: 'q' },
]}
/>
</div>
</div>
{/* Row 2: Cache Path + Response Size */}
<div className="grid gap-4 lg:grid-cols-2">
<div>
<BarChart
label={t('dashboard.cachePathRatio')}
data={queryTs.buckets.map(b => ({
time: formatBucket(b.bucket, queryTs.granularity),
Cache: b.cache_count,
DB: b.db_count,
Hybrid: b.hybrid_count,
}))}
xKey="time"
height={220}
series={[
{ dataKey: 'Cache', color: 'var(--sb-success)', stackId: 'p' },
{ dataKey: 'DB', color: 'var(--sb-warning)', stackId: 'p' },
{ dataKey: 'Hybrid', color: 'var(--sb-primary)', stackId: 'p' },
]}
/>
</div>
<div>
<LineChart
label={t('dashboard.responseSizeTrend')}
data={queryTs.buckets.map(b => ({
time: formatBucket(b.bucket, queryTs.granularity),
size: Math.round(b.avg_response_bytes / 1024),
}))}
series={[
{ dataKey: 'size', color: 'var(--sb-primary)', name: 'KB' },
]}
xKey="time"
height={220}
yFormatter={v => `${v}KB`}
/>
</div>
</div>
{/* Top Clients */}
<div>
<div className="mb-2 flex items-center gap-2">
<span className="text-sm font-medium text-muted">{t('dashboard.topClients')}</span>
<div className="flex overflow-hidden rounded-md border border-[var(--border-primary)] text-xs">
<button
type="button"
className={`px-2 py-0.5 transition-colors ${clientGroupBy === 'ip' ? 'bg-[var(--accent-primary)] text-white font-medium' : 'bg-[var(--bg-secondary)] text-[var(--text-secondary)] hover:bg-[var(--bg-hover)]'}`}
onClick={() => setClientGroupBy('ip')}
>IP</button>
<button
type="button"
className={`px-2 py-0.5 transition-colors ${clientGroupBy === 'id' ? 'bg-[var(--accent-primary)] text-white font-medium' : 'bg-[var(--bg-secondary)] text-[var(--text-secondary)] hover:bg-[var(--bg-hover)]'}`}
onClick={() => setClientGroupBy('id')}
>ID</button>
</div>
</div>
{queryTs.topClients.length > 0 ? (
<div className="space-y-2">
{queryTs.topClients.map((c, i) => {
const maxCount = queryTs.topClients[0].query_count
const pct = maxCount > 0 ? (c.query_count / maxCount) * 100 : 0
const label = c.client ?? c.client_ip ?? '-'
return (
<div key={label + i} className="flex items-center gap-3 text-sm">
<span className="w-40 truncate font-mono text-xs" title={label}>{label}</span>
<div className="flex-1">
<div className="h-4 rounded bg-surface-hover">
<div
className="h-4 rounded bg-primary"
style={{ width: `${pct}%` }}
/>
</div>
</div>
<span className="w-20 text-right text-xs text-muted">
{c.query_count}{t('dashboard.queries')} · {Math.round(c.avg_elapsed_ms)}ms
</span>
</div>
)
})}
</div>
) : (
<div className="py-4 text-center text-xs text-muted">
{clientGroupBy === 'id' ? '사용자 ID 데이터가 없습니다' : '클라이언트 데이터가 없습니다'}
</div>
)}
</div>
</>
) : (
<div className="py-8 text-center text-sm text-muted">{t('dashboard.noChartData')}</div>
)}
</div>
)}
</div>
</div>
)
}
function formatBucket(bucket: string, granularity: 'HOURLY' | 'DAILY'): string {
if (granularity === 'HOURLY') {
// "2026-03-10T14:00:00" → "14:00"
const timePart = bucket.includes('T') ? bucket.split('T')[1] : bucket
return timePart.slice(0, 5)
}
// "2026-03-10" → "03-10"
return bucket.slice(5, 10)
}

@ -62,9 +62,6 @@ public class DailyAggregationStepConfig {
@Value("${vessel.batch.chunk-size:5000}")
private int chunkSize;
@Value("${vessel.batch.track.include-abnormal-in-tracks:false}")
private boolean includeAbnormalInTracks;
@Bean
public Step mergeDailyTracksStep() {
log.info("Building mergeDailyTracksStep with cache-based in-memory merge");
@ -113,9 +110,7 @@ public class DailyAggregationStepConfig {
return new CompositeTrackWriter(
vesselTrackBulkWriter,
abnormalTrackWriter,
"daily",
null,
includeAbnormalInTracks
"daily"
);
}

@ -69,9 +69,6 @@ public class HourlyAggregationStepConfig {
@Value("${vessel.batch.chunk-size:5000}")
private int chunkSize;
@Value("${vessel.batch.track.include-abnormal-in-tracks:false}")
private boolean includeAbnormalInTracks;
//
// Step 1: 5-minute → hourly merge (in-memory cache based)
//
@ -125,8 +122,7 @@ public class HourlyAggregationStepConfig {
vesselTrackBulkWriter,
abnormalTrackWriter,
"hourly",
hourlyTrackCache,
includeAbnormalInTracks
hourlyTrackCache
);
}

@ -97,10 +97,10 @@ public class VesselBatchScheduler {
}
/**
* S&P AIS API ingestion (every minute at :45)
* Requested in the stable window (:45s~) after the API server finishes loading data
* S&P AIS API ingestion (every minute at :15)
* Stores latest positions in the cache for use by the 5-minute aggregation job
*/
@Scheduled(cron = "45 * * * * *")
@Scheduled(cron = "15 * * * * *")
public void runAisTargetImport() {
if (!schedulerEnabled || shutdownRequested || aisTargetImportJob == null) {
return;

@ -96,9 +96,6 @@ public class VesselTrackStepConfig {
@Value("${vessel.batch.chunk-size:1000}")
private int chunkSize;
@Value("${vessel.batch.track.include-abnormal-in-tracks:false}")
private boolean includeAbnormalInTracks;
@PostConstruct
public void init() {
// Explicitly set the name of the 5-minute job
@ -206,21 +203,18 @@ public class VesselTrackStepConfig {
log.warn("비정상 궤적 감지 [{}]: vessel={}, avg_speed={}, distance={}",
abnormalReason, track.getVesselKey(), track.getAvgSpeed(), track.getDistanceNm());
saveAbnormalTrack(track, abnormalReason);
if (includeAbnormalInTracks) {
filteredTracks.add(track); // if the flag is true, include in the normal table + cache as well
}
} else {
filteredTracks.add(track);
}
// Store the track's end position (for cache update); positions are tracked even when abnormal tracks are included
if (filteredTracks.contains(track) && track.getEndPosition() != null) {
currentBucketEndPositions.put(track.getMmsi(), VesselBucketPositionDto.builder()
.mmsi(track.getMmsi())
.endLon(track.getEndPosition().getLon())
.endLat(track.getEndPosition().getLat())
.endTime(track.getEndPosition().getTime())
.build());
// Store end positions of normal tracks (for cache update)
if (track.getEndPosition() != null) {
currentBucketEndPositions.put(track.getMmsi(), VesselBucketPositionDto.builder()
.mmsi(track.getMmsi())
.endLon(track.getEndPosition().getLon())
.endLat(track.getEndPosition().getLat())
.endTime(track.getEndPosition().getTime())
.build());
}
}
}

@ -46,7 +46,7 @@ public class AisTargetCacheManager {
@Value("${app.cache.ais-target.ttl-minutes:120}")
private long ttlMinutes;
@Value("${app.cache.ais-target.max-size:500000}")
@Value("${app.cache.ais-target.max-size:300000}")
private int maxSize;
@PostConstruct

@ -107,7 +107,7 @@ public class ChnPrmShipCacheWarmer implements ApplicationRunner {
entities.forEach(entity -> {
if (entity.getSignalKindCode() == null) {
SignalKindCode kindCode = SignalKindCode.resolve(
entity.getVesselType(), entity.getExtraInfo(), entity.getName());
entity.getVesselType(), entity.getExtraInfo());
entity.setSignalKindCode(kindCode.getCode());
}
});

@ -60,7 +60,7 @@ public class FiveMinTrackCache {
for (VesselTrack track : tracks) {
put(track);
}
log.debug("[CACHE-MONITOR] L1.putAll: input={}, cacheBefore={}, cacheAfter={}, stats=[{}]",
log.info("[CACHE-MONITOR] L1.putAll: input={}, cacheBefore={}, cacheAfter={}, stats=[{}]",
tracks.size(), beforeSize, cache.estimatedSize(), getStats());
}
@ -89,55 +89,11 @@ public class FiveMinTrackCache {
}
int totalTracks = result.values().stream().mapToInt(List::size).sum();
log.debug("[CACHE-MONITOR] L1.getTracksInRange [{}, {}): mmsi={}, tracks={}, cacheTotal={}",
log.info("[CACHE-MONITOR] L1.getTracksInRange [{}, {}): mmsi={}, tracks={}, cacheTotal={}",
start, end, result.size(), totalTracks, cache.estimatedSize());
return result;
}
/**
* Direct O(1) lookup by requested MMSI keys: calls Caffeine getIfPresent() per mmsi×5minBucket combination.
* Large performance gain over the previous full scan (O(n)) in getTracksInRange().
* e.g. 1 hour × 100 MMSIs = 1,200 get() calls vs scanning up to 1.5M entries
*/
public Map<String, List<VesselTrack>> getTracksForVessels(
LocalDateTime start, LocalDateTime end, Set<String> mmsiKeys) {
if (mmsiKeys == null || mmsiKeys.isEmpty()) {
return Collections.emptyMap();
}
Map<String, List<VesselTrack>> result = new LinkedHashMap<>();
// Align to 5-minute buckets (floor start to the nearest 5 minutes)
int startMinute = (start.getMinute() / 5) * 5;
LocalDateTime bucket = start.withMinute(startMinute).withSecond(0).withNano(0);
int lookupCount = 0;
int hitCount = 0;
while (!bucket.isAfter(end) && bucket.isBefore(end)) {
for (String mmsi : mmsiKeys) {
String key = buildKey(mmsi, bucket);
VesselTrack track = cache.getIfPresent(key);
lookupCount++;
if (track != null) {
result.computeIfAbsent(mmsi, k -> new ArrayList<>()).add(track);
hitCount++;
}
}
bucket = bucket.plusMinutes(5);
}
// Sort each MMSI's tracks chronologically
for (List<VesselTrack> tracks : result.values()) {
tracks.sort(Comparator.comparing(VesselTrack::getTimeBucket));
}
int totalTracks = result.values().stream().mapToInt(List::size).sum();
log.debug("[CACHE-MONITOR] L1.getTracksForVessels [{}, {}): requestedMmsi={}, lookups={}, hits={}, resultMmsi={}, tracks={}",
start, end, mmsiKeys.size(), lookupCount, hitCount, result.size(), totalTracks);
return result;
}
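The removed direct-lookup path above can be checked in isolation. This is a minimal sketch, assuming a plain `HashMap` in place of the Caffeine cache and `LocalDateTime.toString()` as the key's time format (the real code uses a `DateTimeFormatter`); the bucket flooring and stepping follow the removed method:

```java
import java.time.LocalDateTime;
import java.util.*;

public class BucketLookupSketch {
    // Stand-in for the Caffeine cache; key format "mmsi::bucket" follows buildKey() in the diff
    static final Map<String, String> cache = new HashMap<>();

    static String buildKey(String mmsi, LocalDateTime bucket) {
        return mmsi + "::" + bucket;
    }

    // One get() per mmsi x 5-minute bucket, instead of scanning every cache entry
    static List<String> lookup(LocalDateTime start, LocalDateTime end, Set<String> mmsiKeys) {
        int startMinute = (start.getMinute() / 5) * 5;
        LocalDateTime bucket = start.withMinute(startMinute).withSecond(0).withNano(0);
        List<String> hits = new ArrayList<>();
        while (bucket.isBefore(end)) {
            for (String mmsi : mmsiKeys) {
                String v = cache.get(buildKey(mmsi, bucket));
                if (v != null) hits.add(v);
            }
            bucket = bucket.plusMinutes(5);
        }
        return hits;
    }

    public static void main(String[] args) {
        cache.put(buildKey("123456789", LocalDateTime.of(2026, 3, 10, 10, 5)), "track@10:05");
        List<String> hits = lookup(LocalDateTime.of(2026, 3, 10, 10, 3),
                LocalDateTime.of(2026, 3, 10, 11, 0), Set.of("123456789"));
        System.out.println(hits); // prints [track@10:05]
    }
}
```

For one hour and 100 MMSIs this issues 12 × 100 = 1,200 map lookups regardless of total cache size, which is the gain the removed javadoc describes.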
/**
* Evict cache entries in the given time range (called after hourly merge completes)
*/

@ -11,7 +11,6 @@ import org.springframework.stereotype.Component;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
/**
@ -32,9 +31,6 @@ public class HourlyTrackCache {
private Cache<String, VesselTrack> cache;
// Tracks completed simplification (per hour bucket, prevents duplicate simplification)
private final Set<LocalDateTime> simplifiedBuckets = ConcurrentHashMap.newKeySet();
@Value("${app.cache.hourly-track.ttl-hours:26}")
private long ttlHours;
@ -64,7 +60,7 @@ public class HourlyTrackCache {
for (VesselTrack track : tracks) {
put(track);
}
log.debug("[CACHE-MONITOR] L2.putAll: input={}, cacheBefore={}, cacheAfter={}, stats=[{}]",
log.info("[CACHE-MONITOR] L2.putAll: input={}, cacheBefore={}, cacheAfter={}, stats=[{}]",
tracks.size(), beforeSize, cache.estimatedSize(), getStats());
}
@ -92,52 +88,11 @@ public class HourlyTrackCache {
}
int totalTracks = result.values().stream().mapToInt(List::size).sum();
log.debug("[CACHE-MONITOR] L2.getTracksInRange [{}, {}): mmsi={}, tracks={}, cacheTotal={}",
log.info("[CACHE-MONITOR] L2.getTracksInRange [{}, {}): mmsi={}, tracks={}, cacheTotal={}",
start, end, result.size(), totalTracks, cache.estimatedSize());
return result;
}
/**
* Direct O(1) lookup by requested MMSI keys: calls Caffeine getIfPresent() per mmsi×hourBucket combination.
* Large performance gain over the previous full scan (O(n)) in getTracksInRange().
* e.g. 24 hours × 100 MMSIs = 2,400 get() calls vs scanning up to 7M entries
*/
public Map<String, List<VesselTrack>> getTracksForVessels(
LocalDateTime start, LocalDateTime end, Set<String> mmsiKeys) {
if (mmsiKeys == null || mmsiKeys.isEmpty()) {
return Collections.emptyMap();
}
Map<String, List<VesselTrack>> result = new LinkedHashMap<>();
LocalDateTime bucket = start.withMinute(0).withSecond(0).withNano(0);
int lookupCount = 0;
int hitCount = 0;
while (!bucket.isAfter(end) && bucket.isBefore(end)) {
for (String mmsi : mmsiKeys) {
String key = buildKey(mmsi, bucket);
VesselTrack track = cache.getIfPresent(key);
lookupCount++;
if (track != null) {
result.computeIfAbsent(mmsi, k -> new ArrayList<>()).add(track);
hitCount++;
}
}
bucket = bucket.plusHours(1);
}
// Sort each MMSI's tracks chronologically
for (List<VesselTrack> tracks : result.values()) {
tracks.sort(Comparator.comparing(VesselTrack::getTimeBucket));
}
int totalTracks = result.values().stream().mapToInt(List::size).sum();
log.debug("[CACHE-MONITOR] L2.getTracksForVessels [{}, {}): requestedMmsi={}, lookups={}, hits={}, resultMmsi={}, tracks={}",
start, end, mmsiKeys.size(), lookupCount, hitCount, result.size(), totalTracks);
return result;
}
/**
* Evict cache entries in the given time range (called after daily merge completes)
*/
@ -154,74 +109,6 @@ public class HourlyTrackCache {
start, end, before - after, before, after, getStats());
}
/**
* Simplifies the WKT LineStringM of cache entries older than the threshold (default 6 hours).
* Keeps only every sampleRate-th point (first/last always preserved).
* Skips hour buckets that were already simplified, preventing duplicate work.
*
* @param hoursAgo   age threshold in hours for simplification targets
* @param sampleRate sampling rate (2 = keep every 2nd point, ~50% reduction)
* @return number of simplified entries
*/
public int simplifyOlderThan(int hoursAgo, int sampleRate) {
LocalDateTime threshold = LocalDateTime.now().minusHours(hoursAgo);
int simplified = 0;
int totalOriginal = 0;
int totalAfter = 0;
int skipped = 0;
for (Map.Entry<String, VesselTrack> entry : cache.asMap().entrySet()) {
VesselTrack track = entry.getValue();
if (track.getTimeBucket() == null || !track.getTimeBucket().isBefore(threshold)) {
continue;
}
// Skip if this hour bucket was already simplified
if (simplifiedBuckets.contains(track.getTimeBucket())) {
skipped++;
continue;
}
String wkt = track.getTrackGeom();
if (wkt == null || track.getPointCount() == null || track.getPointCount() <= 3) {
continue;
}
int originalCount = track.getPointCount();
String simplifiedWkt = simplifyLineStringM(wkt, sampleRate);
if (simplifiedWkt != null && !simplifiedWkt.equals(wkt)) {
track.setTrackGeom(simplifiedWkt);
int newCount = countWktPoints(simplifiedWkt);
totalOriginal += originalCount;
totalAfter += newCount;
track.setPointCount(newCount);
simplified++;
}
}
// Record simplified hour buckets (every on-the-hour bucket before the threshold)
LocalDateTime bucket = threshold.withMinute(0).withSecond(0).withNano(0);
LocalDateTime oldest = LocalDateTime.now().minusHours(ttlHours + 1);
while (!bucket.isBefore(oldest)) {
simplifiedBuckets.add(bucket);
bucket = bucket.minusHours(1);
}
// Clean up tracking entries for expired buckets
simplifiedBuckets.removeIf(b -> b.isBefore(oldest));
if (simplified > 0) {
double reduction = totalOriginal > 0 ? (1 - (double) totalAfter / totalOriginal) * 100 : 0;
log.info("[CACHE-SIMPLIFY] L2 간소화: entries={}, skipped={}, points {} -> {} ({}% 감소), threshold={}h",
simplified, skipped, totalOriginal, totalAfter,
String.format("%.1f", reduction), hoursAgo);
} else {
log.debug("[CACHE-SIMPLIFY] L2 간소화 대상 없음: skipped={}, threshold={}h", skipped, hoursAgo);
}
return simplified;
}
public long size() {
return cache.estimatedSize();
}
@ -249,48 +136,4 @@ public class HourlyTrackCache {
private String buildKey(String mmsi, LocalDateTime timeBucket) {
return mmsi + "::" + timeBucket.format(KEY_FORMATTER);
}
/**
* Keeps only every sampleRate-th point of a WKT LineStringM.
* The first and last points are always preserved.
*
* Input format: "LINESTRING M(lon1 lat1 m1,lon2 lat2 m2,...)"
* or "LINESTRINGM(lon1 lat1 m1,lon2 lat2 m2,...)"
*/
static String simplifyLineStringM(String wkt, int sampleRate) {
if (wkt == null || sampleRate <= 1) return wkt;
int openParen = wkt.indexOf('(');
int closeParen = wkt.lastIndexOf(')');
if (openParen < 0 || closeParen < 0 || closeParen <= openParen + 1) return wkt;
String prefix = wkt.substring(0, openParen + 1);
String coords = wkt.substring(openParen + 1, closeParen);
String[] points = coords.split(",");
if (points.length <= 3) return wkt;
StringBuilder sb = new StringBuilder(prefix);
for (int i = 0; i < points.length; i++) {
if (i == 0 || i == points.length - 1 || i % sampleRate == 0) {
if (sb.length() > prefix.length()) {
sb.append(',');
}
sb.append(points[i]);
}
}
sb.append(')');
return sb.toString();
}
static int countWktPoints(String wkt) {
if (wkt == null) return 0;
int openParen = wkt.indexOf('(');
int closeParen = wkt.lastIndexOf(')');
if (openParen < 0 || closeParen < 0 || closeParen <= openParen + 1) return 0;
String coords = wkt.substring(openParen + 1, closeParen);
if (coords.isBlank()) return 0;
return coords.split(",").length;
}
}
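The Nth-point rule above (keep index 0, the last index, and every `sampleRate`-th index) can be exercised standalone. A minimal sketch mirroring that rule; the class and method names here are illustrative, not part of the codebase:

```java
// Standalone sketch of the Nth-point WKT sampling used by simplifyLineStringM.
// Keeps the first point, the last point, and every sampleRate-th point.
public class WktSamplerSketch {
    public static String simplify(String wkt, int sampleRate) {
        if (wkt == null || sampleRate <= 1) return wkt;
        int open = wkt.indexOf('(');
        int close = wkt.lastIndexOf(')');
        if (open < 0 || close <= open + 1) return wkt;
        String prefix = wkt.substring(0, open + 1);
        String[] points = wkt.substring(open + 1, close).split(",");
        if (points.length <= 3) return wkt; // too few points to bother
        StringBuilder sb = new StringBuilder(prefix);
        for (int i = 0; i < points.length; i++) {
            if (i == 0 || i == points.length - 1 || i % sampleRate == 0) {
                if (sb.length() > prefix.length()) sb.append(',');
                sb.append(points[i]);
            }
        }
        return sb.append(')').toString();
    }
}
```

With `sampleRate=2`, a 5-point line keeps indices 0, 2, 4 — the ~50% reduction the scheduler log reports.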


@ -1,46 +0,0 @@
package gc.mda.signal_batch.batch.reader;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
/**
* L2 HourlyTrackCache 간소화 스케줄러
*
* 6시간 이상 경과한 캐시 엔트리의 WKT LineStringM을 Nth-point 샘플링으로 간소화.
* 기본 스케줄: 06:30, 12:30, 18:30 (1일 3회)
*
 * 간소화 효과: sampleRate=2 기준 ~50% 포인트 감소 → L2 메모리 절약
*/
@Slf4j
@Component
@ConditionalOnProperty(name = "vessel.batch.cache.hourly-simplification.enabled", havingValue = "true")
public class HourlyTrackSimplifier {
private final HourlyTrackCache hourlyTrackCache;
@Value("${vessel.batch.cache.hourly-simplification.hours-ago:6}")
private int hoursAgo;
@Value("${vessel.batch.cache.hourly-simplification.sample-rate:2}")
private int sampleRate;
public HourlyTrackSimplifier(HourlyTrackCache hourlyTrackCache) {
this.hourlyTrackCache = hourlyTrackCache;
}
@Scheduled(cron = "${vessel.batch.cache.hourly-simplification.cron:0 30 6,12,18 * * *}")
public void scheduledSimplification() {
log.info("[HourlySimplifier] 스케줄 간소화 시작 — hoursAgo={}, sampleRate={}, cacheSize={}",
hoursAgo, sampleRate, hourlyTrackCache.size());
long start = System.currentTimeMillis();
int simplified = hourlyTrackCache.simplifyOlderThan(hoursAgo, sampleRate);
long elapsed = System.currentTimeMillis() - start;
log.info("[HourlySimplifier] 스케줄 간소화 완료 — simplified={}, elapsed={}ms, cacheSize={}",
simplified, elapsed, hourlyTrackCache.size());
}
}
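The scheduler is driven entirely by the `vessel.batch.cache.hourly-simplification.*` properties read above. An `application.yml` sketch with the defaults from the `@Value` and `@Scheduled` annotations (values shown are the code's fallback defaults):

```yaml
vessel:
  batch:
    cache:
      hourly-simplification:
        enabled: true                 # @ConditionalOnProperty gate; bean absent unless "true"
        hours-ago: 6                  # simplify entries older than this
        sample-rate: 2                # keep every 2nd point (~50% reduction)
        cron: "0 30 6,12,18 * * *"    # 06:30, 12:30, 18:30 daily
```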


@ -35,10 +35,9 @@ public class AisTargetCacheWriter implements ItemWriter<AisTargetEntity> {
List<? extends AisTargetEntity> items = chunk.getItems();
log.debug("AIS Target 캐시 업데이트 시작: {} 건", items.size());
// 1. SignalKindCode 치환 (vesselType + extraInfo + shipName 기반, 캐시 저장 1회만)
// 1. SignalKindCode 치환
items.forEach(item -> {
SignalKindCode kindCode = SignalKindCode.resolve(
item.getVesselType(), item.getExtraInfo(), item.getName());
SignalKindCode kindCode = SignalKindCode.resolve(item.getVesselType(), item.getExtraInfo());
item.setSignalKindCode(kindCode.getCode());
});


@ -25,24 +25,21 @@ public class CompositeTrackWriter implements ItemWriter<AbnormalDetectionResult>
private final AbnormalTrackWriter abnormalTrackWriter;
private final String targetTable;
private final HourlyTrackCache hourlyTrackCache; // nullable (daily writer는 미사용)
private final boolean includeAbnormalInTracks;
public CompositeTrackWriter(VesselTrackBulkWriter vesselTrackBulkWriter,
AbnormalTrackWriter abnormalTrackWriter,
String targetTable,
HourlyTrackCache hourlyTrackCache,
boolean includeAbnormalInTracks) {
HourlyTrackCache hourlyTrackCache) {
this.vesselTrackBulkWriter = vesselTrackBulkWriter;
this.abnormalTrackWriter = abnormalTrackWriter;
this.targetTable = targetTable;
this.hourlyTrackCache = hourlyTrackCache;
this.includeAbnormalInTracks = includeAbnormalInTracks;
}
public CompositeTrackWriter(VesselTrackBulkWriter vesselTrackBulkWriter,
AbnormalTrackWriter abnormalTrackWriter,
String targetTable) {
this(vesselTrackBulkWriter, abnormalTrackWriter, targetTable, null, false);
this(vesselTrackBulkWriter, abnormalTrackWriter, targetTable, null);
}
@BeforeStep
@ -69,11 +66,9 @@ public class CompositeTrackWriter implements ItemWriter<AbnormalDetectionResult>
abnormalResults.add(result);
// 정정된 궤적이 있으면 정상 궤적으로 저장
// null이면 전체 궤적이 비정상이므로 제외 (플래그 true면 원본 포함)
// null이면 전체 궤적이 비정상이므로 제외
if (result.getCorrectedTrack() != null) {
normalTracks.add(result.getCorrectedTrack());
} else if (includeAbnormalInTracks) {
normalTracks.add(result.getOriginalTrack());
} else {
log.debug("비정상 궤적 전체 제외: vessel={}",
result.getOriginalTrack().getVesselKey());
@ -91,7 +86,7 @@ public class CompositeTrackWriter implements ItemWriter<AbnormalDetectionResult>
if (hourlyTrackCache != null) {
long l2Before = hourlyTrackCache.size();
hourlyTrackCache.putAll(normalTracks);
log.debug("[CACHE-MONITOR] CompositeTrackWriter → L2.putAll: tracks={}, L2 before={}, after={}",
log.info("[CACHE-MONITOR] CompositeTrackWriter → L2.putAll: tracks={}, L2 before={}, after={}",
normalTracks.size(), l2Before, hourlyTrackCache.size());
}
} else if ("daily".equals(targetTable)) {


@ -6,7 +6,6 @@ import gc.mda.signal_batch.domain.gis.dto.VesselContactRequest;
import gc.mda.signal_batch.domain.gis.dto.VesselContactResponse;
import gc.mda.signal_batch.domain.gis.service.AreaSearchService;
import gc.mda.signal_batch.domain.gis.service.VesselContactService;
import gc.mda.signal_batch.global.exception.QueryTimeoutException;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.media.Content;
import io.swagger.v3.oas.annotations.media.ExampleObject;
@ -220,11 +219,4 @@ public class AreaSearchController {
return ResponseEntity.status(HttpStatus.SERVICE_UNAVAILABLE)
.body(Map.of("error", e.getMessage()));
}
@ExceptionHandler(QueryTimeoutException.class)
public ResponseEntity<Map<String, String>> handleQueryTimeout(QueryTimeoutException e) {
log.warn("Area search query timeout: {}", e.getMessage());
return ResponseEntity.status(HttpStatus.SERVICE_UNAVAILABLE)
.body(Map.of("error", e.getMessage()));
}
}


@ -6,11 +6,8 @@ import gc.mda.signal_batch.domain.vessel.dto.TrackResponse;
import gc.mda.signal_batch.domain.vessel.dto.VesselTracksRequest;
import gc.mda.signal_batch.domain.vessel.dto.CompactVesselTrack;
import gc.mda.signal_batch.domain.vessel.dto.RecentVesselPositionDto;
import gc.mda.signal_batch.domain.vessel.dto.RecentPositionDetailRequest;
import gc.mda.signal_batch.domain.vessel.dto.RecentPositionDetailResponse;
import gc.mda.signal_batch.domain.gis.service.GisService;
import gc.mda.signal_batch.domain.vessel.service.VesselPositionService;
import gc.mda.signal_batch.domain.vessel.service.VesselPositionDetailService;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.Parameter;
import io.swagger.v3.oas.annotations.tags.Tag;
@ -31,7 +28,6 @@ public class GisController {
private final GisService gisService;
private final VesselPositionService vesselPositionService;
private final VesselPositionDetailService vesselPositionDetailService;
@GetMapping("/haegu/boundaries")
@Operation(summary = "해구 경계 조회", description = "모든 해구의 경계 정보를 GeoJSON 형식으로 반환")
@ -101,20 +97,4 @@ public class GisController {
return vesselPositionService.getRecentVesselPositions(minutes);
}
@PostMapping("/vessels/recent-positions-detail")
@Operation(
summary = "최근 위치 상세 조회 (공간 필터 지원)",
description = "AIS 캐시에서 지정 시간 내 선박의 상세 정보를 공간 필터(폴리곤/원)와 함께 조회합니다. "
+ "coordinates(폴리곤)와 center+radiusNm(원) 중 하나를 지정하거나, 둘 다 생략하면 전체 조회합니다."
)
public List<RecentPositionDetailResponse> getRecentPositionsDetail(
@RequestBody RecentPositionDetailRequest request) {
if (request.getMinutes() <= 0 || request.getMinutes() > 1440) {
throw new IllegalArgumentException("Minutes must be between 1 and 1440");
}
return vesselPositionDetailService.getRecentPositionsDetail(request);
}
}


@ -18,7 +18,6 @@ import io.swagger.v3.oas.annotations.media.Schema;
import io.swagger.v3.oas.annotations.responses.ApiResponse;
import io.swagger.v3.oas.annotations.responses.ApiResponses;
import io.swagger.v3.oas.annotations.tags.Tag;
import jakarta.servlet.http.HttpServletRequest;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.web.bind.annotation.*;
@ -189,22 +188,8 @@ public class GisControllerV2 {
required = true,
content = @Content(schema = @Schema(implementation = VesselTracksRequest.class))
)
@RequestBody VesselTracksRequest request,
HttpServletRequest httpRequest) {
return gisServiceV2.getVesselTracksV2(request, getClientIp(httpRequest), getClientId(httpRequest));
}
private String getClientId(HttpServletRequest request) {
return gc.mda.signal_batch.global.config.WebSocketStompConfig.extractClientIdFromRequest(request);
}
private String getClientIp(HttpServletRequest request) {
String[] headers = {"X-Forwarded-For", "X-Original-Forwarded-For", "X-Real-IP"};
for (String header : headers) {
String ip = request.getHeader(header);
if (ip != null && !ip.isBlank()) return ip.split(",")[0].trim();
}
return request.getRemoteAddr();
@RequestBody VesselTracksRequest request) {
return gisServiceV2.getVesselTracksV2(request);
}
@GetMapping("/vessels/recent-positions")


@ -42,10 +42,6 @@ public class AreaSearchRequest {
@Schema(description = "탐색 대상 폴리곤 영역 목록 (1~10개)", requiredMode = Schema.RequiredMode.REQUIRED)
private List<SearchPolygon> polygons;
@Schema(description = "true 시 중국허가선박(~1,400척)만 분석 대상으로 필터링", example = "false")
@Builder.Default
private boolean chnPrmShipOnly = false;
@Schema(description = "검색 모드 (폴리곤이 2개 이상일 때 적용)")
public enum SearchMode {
@Schema(description = "합집합: 어느 한 영역이라도 통과한 선박")


@ -47,10 +47,6 @@ public class VesselContactRequest {
@Schema(description = "최대 접촉 판정 거리 (미터, 50~5000)", example = "1000", requiredMode = Schema.RequiredMode.REQUIRED)
private Double maxContactDistanceMeters;
@Schema(description = "true 시 중국허가선박만 대상으로 접촉 분석", example = "false")
@Builder.Default
private boolean chnPrmShipOnly = false;
@Data
@Builder
@NoArgsConstructor


@ -16,10 +16,10 @@ import java.util.List;
@Schema(description = "비정상 접촉 선박 탐색 응답")
public class VesselContactResponse {
@Schema(description = "접촉 선박 쌍 목록 — 동일 선박 쌍이 시간 갭(20분 이상)으로 분리된 여러 접촉 세그먼트를 가질 수 있음")
@Schema(description = "접촉 선박 쌍 목록")
private List<VesselContactPair> contacts;
@Schema(description = "관련 선박의 전체 기간 항적 — 선박당 1건으로 중복 제거됨 (CompactVesselTrack)")
@Schema(description = "관련 선박의 전체 기간 항적 (CompactVesselTrack)")
private List<CompactVesselTrack> tracks;
@Schema(description = "탐색 요약 정보")


@ -6,14 +6,9 @@ import gc.mda.signal_batch.domain.gis.dto.AreaSearchRequest.SearchPolygon;
import gc.mda.signal_batch.domain.gis.dto.AreaSearchResponse;
import gc.mda.signal_batch.domain.gis.dto.AreaSearchResponse.AreaSearchSummary;
import gc.mda.signal_batch.domain.gis.dto.AreaSearchResponse.PolygonHitDetail;
import gc.mda.signal_batch.batch.reader.ChnPrmShipProperties;
import gc.mda.signal_batch.domain.vessel.dto.CompactVesselTrack;
import gc.mda.signal_batch.global.exception.QueryTimeoutException;
import gc.mda.signal_batch.global.util.TrackMemoryEstimator;
import gc.mda.signal_batch.global.websocket.service.ActiveQueryManager;
import gc.mda.signal_batch.global.websocket.service.DailyTrackCacheManager;
import gc.mda.signal_batch.global.websocket.service.DailyTrackCacheManager.DailyTrackData;
import gc.mda.signal_batch.global.websocket.service.TrackMemoryBudgetManager;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.locationtech.jts.geom.*;
@ -33,9 +28,6 @@ import java.util.stream.Collectors;
public class AreaSearchService {
private final DailyTrackCacheManager cacheManager;
private final ActiveQueryManager activeQueryManager;
private final TrackMemoryBudgetManager memoryBudgetManager;
private final ChnPrmShipProperties chnPrmShipProperties;
private static final GeometryFactory GEOMETRY_FACTORY = new GeometryFactory();
/**
@ -53,115 +45,82 @@ public class AreaSearchService {
return buildEmptyResponse(request, startMs);
}
// 3. 동시성·메모리 관리 (데이터 로딩 슬롯/예산 확보)
String queryId = "area-search-" + Long.toHexString(System.nanoTime());
boolean slotAcquired = false, memoryReserved = false;
try {
if (!activeQueryManager.tryAcquireQuerySlotImmediate(queryId)) {
if (!activeQueryManager.tryAcquireQuerySlot(queryId)) {
throw new QueryTimeoutException("서버 과부하: area-search 슬롯 대기 타임아웃");
}
}
slotAcquired = true;
long estimatedBytes = TrackMemoryEstimator.estimateQueryBytes(targetDates.size(), 2000);
memoryBudgetManager.reserveQueryMemory(queryId, estimatedBytes, 30_000L);
memoryReserved = true;
// 4. 다일 데이터 → 선박별 단일 트랙 병합
Map<String, CompactVesselTrack> mergedTracks = mergeMultipleDays(targetDates);
if (mergedTracks.isEmpty()) {
return buildEmptyResponse(request, startMs);
}
// 4-1. ChnPrmShip 필터링
if (request.isChnPrmShipOnly()) {
int totalBefore = mergedTracks.size();
Set<String> chnPrmMmsiSet = chnPrmShipProperties.getMmsiSet();
mergedTracks.entrySet().removeIf(e -> !chnPrmMmsiSet.contains(e.getKey()));
log.debug("ChnPrmShip 필터 적용: {} → {} 선박", totalBefore, mergedTracks.size());
if (mergedTracks.isEmpty()) {
return buildEmptyResponse(request, startMs);
}
}
// 5. 좌표 → JTS Polygon 변환
List<Polygon> jtsPolygons = convertToJtsPolygons(request.getPolygons());
// 6. 병합된 트랙으로 STRtree 빌드
STRtree spatialIndex = buildSpatialIndex(mergedTracks);
// 7. 폴리곤별 히트 선박 + 개별 방문(trip) 수집
List<Map<String, List<PolygonHitDetail>>> perPolygonHits = new ArrayList<>();
for (int i = 0; i < jtsPolygons.size(); i++) {
Polygon polygon = jtsPolygons.get(i);
SearchPolygon searchPolygon = request.getPolygons().get(i);
Map<String, List<PolygonHitDetail>> hits = findHitsForPolygon(
polygon, searchPolygon, mergedTracks, spatialIndex);
perPolygonHits.add(hits);
}
// 8. 모드별 결과 합산
SearchMode mode = request.getPolygons().size() == 1 ? SearchMode.ANY : request.getMode();
Map<String, List<PolygonHitDetail>> resultHits;
switch (mode) {
case ALL:
resultHits = processAllMode(perPolygonHits);
break;
case SEQUENTIAL:
resultHits = processSequentialMode(perPolygonHits);
break;
default:
resultHits = processAnyMode(perPolygonHits);
break;
}
// 9. 결과 선박의 전체 기간 트랙 + 히트 메타 반환
List<CompactVesselTrack> resultTracks = resultHits.keySet().stream()
.map(mergedTracks::get)
.filter(Objects::nonNull)
.collect(Collectors.toList());
long totalPoints = resultHits.values().stream()
.flatMap(Collection::stream)
.mapToLong(h -> h.getHitPointCount() != null ? h.getHitPointCount() : 0)
.sum();
int totalCachedVessels = targetDates.stream()
.mapToInt(d -> {
DailyTrackData data = cacheManager.getDailyTrackData(d);
return data != null ? data.getVesselCount() : 0;
})
.sum();
long elapsedMs = System.currentTimeMillis() - startMs;
log.info("Area search completed: mode={}, polygons={}, hitVessels={}, totalPoints={}, chnPrmOnly={}, elapsed={}ms",
mode, request.getPolygons().size(), resultHits.size(), totalPoints, request.isChnPrmShipOnly(), elapsedMs);
return AreaSearchResponse.builder()
.tracks(resultTracks)
.hitDetails(resultHits)
.summary(AreaSearchSummary.builder()
.totalVessels(resultHits.size())
.totalPoints(totalPoints)
.mode(mode)
.polygonIds(request.getPolygons().stream()
.map(SearchPolygon::getId)
.collect(Collectors.toList()))
.processingTimeMs(elapsedMs)
.cachedDates(targetDates.stream()
.map(LocalDate::toString)
.collect(Collectors.toList()))
.totalCachedVessels(totalCachedVessels)
.build())
.build();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
throw new QueryTimeoutException("area-search 슬롯 대기 중 인터럽트");
} finally {
if (memoryReserved) memoryBudgetManager.releaseQueryMemory(queryId);
if (slotAcquired) activeQueryManager.releaseQuerySlot(queryId);
// 3. 다일 데이터 → 선박별 단일 트랙 병합
Map<String, CompactVesselTrack> mergedTracks = mergeMultipleDays(targetDates);
if (mergedTracks.isEmpty()) {
return buildEmptyResponse(request, startMs);
}
// 4. 좌표 → JTS Polygon 변환
List<Polygon> jtsPolygons = convertToJtsPolygons(request.getPolygons());
// 5. 병합된 트랙으로 STRtree 빌드
STRtree spatialIndex = buildSpatialIndex(mergedTracks);
// 6. 폴리곤별 히트 선박 + 개별 방문(trip) 수집
List<Map<String, List<PolygonHitDetail>>> perPolygonHits = new ArrayList<>();
for (int i = 0; i < jtsPolygons.size(); i++) {
Polygon polygon = jtsPolygons.get(i);
SearchPolygon searchPolygon = request.getPolygons().get(i);
Map<String, List<PolygonHitDetail>> hits = findHitsForPolygon(
polygon, searchPolygon, mergedTracks, spatialIndex);
perPolygonHits.add(hits);
}
// 7. 모드별 결과 합산
SearchMode mode = request.getPolygons().size() == 1 ? SearchMode.ANY : request.getMode();
Map<String, List<PolygonHitDetail>> resultHits;
switch (mode) {
case ALL:
resultHits = processAllMode(perPolygonHits);
break;
case SEQUENTIAL:
resultHits = processSequentialMode(perPolygonHits);
break;
default:
resultHits = processAnyMode(perPolygonHits);
break;
}
// 8. 결과 선박의 전체 기간 트랙 + 히트 메타 반환
List<CompactVesselTrack> resultTracks = resultHits.keySet().stream()
.map(mergedTracks::get)
.filter(Objects::nonNull)
.collect(Collectors.toList());
long totalPoints = resultHits.values().stream()
.flatMap(Collection::stream)
.mapToLong(h -> h.getHitPointCount() != null ? h.getHitPointCount() : 0)
.sum();
int totalCachedVessels = targetDates.stream()
.mapToInt(d -> {
DailyTrackData data = cacheManager.getDailyTrackData(d);
return data != null ? data.getVesselCount() : 0;
})
.sum();
long elapsedMs = System.currentTimeMillis() - startMs;
log.info("Area search completed: mode={}, polygons={}, hitVessels={}, totalPoints={}, elapsed={}ms",
mode, request.getPolygons().size(), resultHits.size(), totalPoints, elapsedMs);
return AreaSearchResponse.builder()
.tracks(resultTracks)
.hitDetails(resultHits)
.summary(AreaSearchSummary.builder()
.totalVessels(resultHits.size())
.totalPoints(totalPoints)
.mode(mode)
.polygonIds(request.getPolygons().stream()
.map(SearchPolygon::getId)
.collect(Collectors.toList()))
.processingTimeMs(elapsedMs)
.cachedDates(targetDates.stream()
.map(LocalDate::toString)
.collect(Collectors.toList()))
.totalCachedVessels(totalCachedVessels)
.build())
.build();
}
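Steps 7–8 above collect per-polygon hit sets and then combine them by mode. A plain-set sketch of the combination, under the assumption that ANY is the union of vessels hit by at least one polygon and ALL is the intersection of vessels hit by every polygon; SEQUENTIAL (which additionally checks visit ordering) is omitted here, and the names are illustrative:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of the ANY/ALL aggregation over per-polygon hit vessel sets.
public class HitAggregation {
    // ANY: vessels that passed through at least one polygon (union).
    public static Set<String> anyMode(List<Set<String>> perPolygonHits) {
        Set<String> union = new HashSet<>();
        perPolygonHits.forEach(union::addAll);
        return union;
    }

    // ALL: vessels that passed through every polygon (intersection).
    public static Set<String> allMode(List<Set<String>> perPolygonHits) {
        if (perPolygonHits.isEmpty()) return Set.of();
        Set<String> inter = new HashSet<>(perPolygonHits.get(0));
        for (int i = 1; i < perPolygonHits.size(); i++) {
            inter.retainAll(perPolygonHits.get(i));
        }
        return inter;
    }
}
```

This also explains why a single-polygon request is forced to ANY: with one set, union, intersection, and sequence all coincide.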
// 입력 검증
@ -285,11 +244,9 @@ public class AreaSearchService {
// 여러 날짜 병합
CompactVesselTrack first = trackList.get(0);
int totalPoints = trackList.stream()
.mapToInt(t -> t.getPointCount() != null ? t.getPointCount() : 0).sum();
List<double[]> geo = new ArrayList<>(totalPoints);
List<String> ts = new ArrayList<>(totalPoints);
List<Double> sp = new ArrayList<>(totalPoints);
List<double[]> geo = new ArrayList<>();
List<String> ts = new ArrayList<>();
List<Double> sp = new ArrayList<>();
double totalDist = 0;
double maxSpeed = 0;
int pointCount = 0;
@ -390,13 +347,10 @@ public class AreaSearchService {
long currentExit = 0;
int currentHitCount = 0;
int visitIndex = 0;
Coordinate reusable = new Coordinate();
for (int i = 0; i < geometry.size(); i++) {
double[] coord = geometry.get(i);
reusable.x = coord[0];
reusable.y = coord[1];
Point point = GEOMETRY_FACTORY.createPoint(reusable);
Point point = GEOMETRY_FACTORY.createPoint(new Coordinate(coord[0], coord[1]));
boolean isInside = prepared.contains(point);
if (isInside) {
@ -484,7 +438,6 @@ public class AreaSearchService {
try {
return Long.parseLong(timestamps.get(index));
} catch (NumberFormatException e) {
log.warn("Invalid timestamp at index {}: {}", index, timestamps.get(index));
return 0L;
}
}
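The inside/outside transitions above drive the per-polygon visit ("trip") collection: an entry opens a visit, an exit closes it. A self-contained sketch of that segmentation on a boolean containment sequence; the types and names are illustrative, and the time-gap splitting of the real service is not modeled:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of entry/exit visit segmentation over point-in-polygon flags.
public class VisitSegmentation {
    public record Visit(int startIndex, int endIndex) {}

    public static List<Visit> segment(boolean[] inside) {
        List<Visit> visits = new ArrayList<>();
        int start = -1;
        for (int i = 0; i < inside.length; i++) {
            if (inside[i] && start < 0) start = i;      // entry: open a visit
            if (!inside[i] && start >= 0) {             // exit: close the visit
                visits.add(new Visit(start, i - 1));
                start = -1;
            }
        }
        if (start >= 0) visits.add(new Visit(start, inside.length - 1)); // track ends inside
        return visits;
    }
}
```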


@ -5,6 +5,7 @@ import gc.mda.signal_batch.domain.vessel.dto.TrackResponse;
import gc.mda.signal_batch.domain.vessel.dto.VesselStatsResponse;
import gc.mda.signal_batch.domain.vessel.dto.VesselTracksRequest;
import gc.mda.signal_batch.domain.vessel.dto.CompactVesselTrack;
import gc.mda.signal_batch.global.util.SignalKindCode;
import gc.mda.signal_batch.global.util.TrackSimplificationUtils;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Qualifier;
@ -603,11 +604,9 @@ public class GisService {
Map<String, String> vesselInfo = getVesselInfo(mmsi);
String shipName = vesselInfo.get("ship_name");
String shipType = vesselInfo.get("ship_type");
String signalKindCode = vesselInfo.get("signal_kind_code");
String nationalCode = (mmsi != null && mmsi.length() >= 3) ? mmsi.substring(0, 3) : null;
String shipKindCode = (signalKindCode != null && !signalKindCode.isEmpty())
? signalKindCode : "000027";
String shipKindCode = SignalKindCode.resolve(shipType, null).getCode();
return CompactVesselTrack.builder()
.vesselId(mmsi)
@ -629,7 +628,7 @@ public class GisService {
JdbcTemplate jdbcTemplate = new JdbcTemplate(queryDataSource);
try {
String sql = """
SELECT ship_nm as ship_name, vessel_type as ship_type, signal_kind_code
SELECT ship_nm as ship_name, vessel_type as ship_type
FROM signal.t_ais_position
WHERE mmsi = ?
LIMIT 1


@ -9,17 +9,12 @@ import gc.mda.signal_batch.domain.vessel.model.AisTargetEntity;
import gc.mda.signal_batch.domain.vessel.dto.TrackResponse;
import gc.mda.signal_batch.domain.vessel.dto.VesselTracksRequest;
import gc.mda.signal_batch.domain.vessel.model.VesselTrack;
import gc.mda.signal_batch.global.exception.MemoryBudgetExceededException;
import gc.mda.signal_batch.global.exception.QueryTimeoutException;
import gc.mda.signal_batch.global.util.TrackConverter;
import gc.mda.signal_batch.global.util.TrackMemoryEstimator;
import gc.mda.signal_batch.global.util.VesselTrackToCompactConverter;
import gc.mda.signal_batch.global.websocket.service.ActiveQueryManager;
import gc.mda.signal_batch.global.websocket.service.CacheTrackSimplifier;
import gc.mda.signal_batch.global.websocket.service.DailyTrackCacheManager;
import gc.mda.signal_batch.global.websocket.service.TrackMemoryBudgetManager;
import gc.mda.signal_batch.monitoring.service.QueryMetricsBufferService;
import gc.mda.signal_batch.monitoring.service.QueryMetricsService;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
@ -57,8 +52,6 @@ public class GisServiceV2 {
private final VesselTrackToCompactConverter vesselTrackToCompactConverter;
private final ChnPrmShipCacheManager chnPrmShipCacheManager;
private final ChnPrmShipProperties chnPrmShipProperties;
private final TrackMemoryBudgetManager memoryBudgetManager;
private final QueryMetricsBufferService queryMetricsBufferService;
@Value("${rest.v2.query.timeout-seconds:30}")
private int restQueryTimeout;
@ -79,9 +72,7 @@ public class GisServiceV2 {
FiveMinTrackCache fiveMinTrackCache,
VesselTrackToCompactConverter vesselTrackToCompactConverter,
ChnPrmShipCacheManager chnPrmShipCacheManager,
ChnPrmShipProperties chnPrmShipProperties,
TrackMemoryBudgetManager memoryBudgetManager,
QueryMetricsBufferService queryMetricsBufferService) {
ChnPrmShipProperties chnPrmShipProperties) {
this.queryDataSource = queryDataSource;
this.activeQueryManager = activeQueryManager;
this.dailyTrackCacheManager = dailyTrackCacheManager;
@ -92,8 +83,6 @@ public class GisServiceV2 {
this.vesselTrackToCompactConverter = vesselTrackToCompactConverter;
this.chnPrmShipCacheManager = chnPrmShipCacheManager;
this.chnPrmShipProperties = chnPrmShipProperties;
this.memoryBudgetManager = memoryBudgetManager;
this.queryMetricsBufferService = queryMetricsBufferService;
}
/**
@ -285,28 +274,13 @@ public class GisServiceV2 {
/**
* 선박별 항적 조회 V2 (캐시 + Semaphore + 간소화 + ChnPrmShip enrichment)
*/
public List<CompactVesselTrack> getVesselTracksV2(VesselTracksRequest request, String clientIp, String clientId) {
public List<CompactVesselTrack> getVesselTracksV2(VesselTracksRequest request) {
String queryId = "rest-vessels-" + UUID.randomUUID().toString().substring(0, 8);
long startMs = System.currentTimeMillis();
boolean slotAcquired = false;
boolean memoryReserved = false;
try {
slotAcquired = acquireSlotWithWait(queryId);
// 쿼리 메모리 사전 예약
int days = (int) java.time.Duration.between(request.getStartTime(), request.getEndTime()).toDays() + 1;
long estimatedBytes = TrackMemoryEstimator.estimateQueryBytes(days, request.getVessels().size());
try {
memoryBudgetManager.reserveQueryMemory(queryId, estimatedBytes,
memoryBudgetManager.getProperties().getQueueTimeoutSeconds() * 1000L);
memoryReserved = true;
} catch (MemoryBudgetExceededException e) {
log.warn("[MemoryBudget] REST 쿼리 메모리 예약 실패: queryId={}, estimated={}MB — {}",
queryId, estimatedBytes / (1024 * 1024), e.getMessage());
throw e;
}
List<CompactVesselTrack> result;
if (dailyTrackCacheManager.isEnabled() &&
@ -329,14 +303,9 @@ public class GisServiceV2 {
result.size(), request.getVessels().size(),
dailyTrackCacheManager.isEnabled(), request.isIncludeChnPrmShip());
enqueueRestMetric(queryId, request, result, startMs, clientIp, clientId);
return result;
} finally {
if (memoryReserved) {
memoryBudgetManager.releaseQueryMemory(queryId);
}
if (slotAcquired) {
activeQueryManager.releaseQuerySlot(queryId);
if (activeQueryManager.isHeapPressureHigh()) {
@ -346,34 +315,6 @@ public class GisServiceV2 {
}
}
private void enqueueRestMetric(String queryId, VesselTracksRequest request,
List<CompactVesselTrack> result, long startMs, String clientIp, String clientId) {
try {
int totalPoints = result.stream().mapToInt(CompactVesselTrack::getPointCount).sum();
long responseBytes = (long) result.size() * 200 + (long) totalPoints * 40;
queryMetricsBufferService.enqueue(QueryMetricsService.QueryMetric.builder()
.queryId(queryId)
.queryType("REST_V2")
.startTime(request.getStartTime())
.endTime(request.getEndTime())
.requestedMmsi(request.getVessels().size())
.dataPath(dailyTrackCacheManager.isEnabled() ? "HYBRID" : "DB")
.uniqueVessels(result.size())
.totalTracks(result.size())
.totalPoints(totalPoints)
.pointsAfterSimplify(totalPoints)
.totalChunks(1)
.responseBytes(responseBytes)
.elapsedMs(System.currentTimeMillis() - startMs)
.status("COMPLETED")
.clientIp(clientIp)
.clientId(clientId)
.build());
} catch (Exception e) {
log.debug("Failed to enqueue REST metric: {}", e.getMessage());
}
}
// 캐시 조회 로직
private List<CompactVesselTrack> queryWithCache(VesselTracksRequest request) {
@ -387,16 +328,24 @@ public class GisServiceV2 {
Set<String> requestedMmsis = new HashSet<>(request.getVessels());
// 1. L3 캐시에서 요청 MMSI만 O(1) 직접 조회 + 누락 MMSI 부분 DB fallback
// 1. 캐시에서 조회 (캐시된 날짜) + 누락 MMSI 부분 DB fallback
if (split.hasCachedData()) {
List<CompactVesselTrack> filteredCached =
dailyTrackCacheManager.getCachedTracksForVessels(split.getCachedDates(), requestedMmsis);
List<CompactVesselTrack> cachedTracks =
dailyTrackCacheManager.getCachedTracksMultipleDays(split.getCachedDates());
int totalCachedCount = cachedTracks.size();
List<CompactVesselTrack> filteredCached = cachedTracks.stream()
.filter(t -> requestedMmsis.contains(t.getVesselId()))
.map(t -> t.toBuilder().build())
.collect(Collectors.toList());
cachedTracks.clear();
allTracks.addAll(filteredCached);
log.debug("[CacheQuery] cached {} days -> {} tracks (key-based lookup, {} MMSI requested)",
split.getCachedDates().size(), filteredCached.size(), requestedMmsis.size());
log.debug("[CacheQuery] cached {} days -> {} tracks (filtered from {})",
split.getCachedDates().size(), filteredCached.size(), totalCachedCount);
// Daily 캐시에 없는 MMSI → DB fallback
// Daily 캐시에 없는 MMSI → DB fallback (hourly/5min 계층 조회)
Set<String> cachedMmsis = filteredCached.stream()
.map(CompactVesselTrack::getVesselId)
.collect(Collectors.toSet());
@ -434,22 +383,23 @@ public class GisServiceV2 {
}
}
// 3-a. hourly 범위 L2 캐시 O(1) 기반 조회 → DB fallback (누락 MMSI)
// 3-a. hourly 범위 L2 캐시 → DB fallback (누락 MMSI 부분 fallback 포함)
if (split.hasHourlyRange()) {
DailyTrackCacheManager.DateRange hr = split.getHourlyRange();
Map<String, List<VesselTrack>> hourlyTracks =
hourlyTrackCache.getTracksForVessels(hr.getStart(), hr.getEnd(), requestedMmsis);
hourlyTrackCache.getTracksInRange(hr.getStart(), hr.getEnd());
if (!hourlyTracks.isEmpty()) {
List<CompactVesselTrack> converted = vesselTrackToCompactConverter.convert(hourlyTracks);
Map<String, List<VesselTrack>> filtered = filterByMmsi(hourlyTracks, requestedMmsis);
List<CompactVesselTrack> converted = vesselTrackToCompactConverter.convert(filtered);
allTracks.addAll(converted);
int totalPts = converted.stream().mapToInt(CompactVesselTrack::getPointCount).sum();
log.info("[CACHE-MONITOR] queryWithCache L2 HIT [{}, {}): resultVessels={}, compactTracks={}, points={}",
hr.getStart(), hr.getEnd(), hourlyTracks.size(), converted.size(), totalPts);
log.info("[CACHE-MONITOR] queryWithCache L2 HIT [{}, {}): cacheVessels={}, filteredVessels={}, compactTracks={}, points={}",
hr.getStart(), hr.getEnd(), hourlyTracks.size(), filtered.size(), converted.size(), totalPts);
// 캐시에 없는 MMSI → DB fallback
Set<String> missingMmsis = new HashSet<>(requestedMmsis);
missingMmsis.removeAll(hourlyTracks.keySet());
missingMmsis.removeAll(filtered.keySet());
if (!missingMmsis.isEmpty()) {
VesselTracksRequest fallbackReq = VesselTracksRequest.builder()
.startTime(hr.getStart()).endTime(hr.getEnd())
@ -457,7 +407,7 @@ public class GisServiceV2 {
List<CompactVesselTrack> dbResult = gisService.getVesselTracks(fallbackReq);
allTracks.addAll(dbResult);
log.info("[CACHE-MONITOR] queryWithCache L2 PARTIAL → DB fallback: cacheHit={}, cacheMiss={}, dbTracks={}",
hourlyTracks.size(), missingMmsis.size(), dbResult.size());
filtered.size(), missingMmsis.size(), dbResult.size());
}
} else {
VesselTracksRequest hourlyReq = VesselTracksRequest.builder()
@ -470,22 +420,23 @@ public class GisServiceV2 {
}
}
// 3-b. 5min 범위 L1 캐시 O(1) 기반 조회 → DB fallback (누락 MMSI)
// 3-b. 5min 범위 L1 캐시 → DB fallback (누락 MMSI 부분 fallback 포함)
if (split.hasFiveMinRange()) {
DailyTrackCacheManager.DateRange fr = split.getFiveMinRange();
Map<String, List<VesselTrack>> fiveMinTracks =
fiveMinTrackCache.getTracksForVessels(fr.getStart(), fr.getEnd(), requestedMmsis);
fiveMinTrackCache.getTracksInRange(fr.getStart(), fr.getEnd());
if (!fiveMinTracks.isEmpty()) {
List<CompactVesselTrack> converted = vesselTrackToCompactConverter.convert(fiveMinTracks);
Map<String, List<VesselTrack>> filtered = filterByMmsi(fiveMinTracks, requestedMmsis);
List<CompactVesselTrack> converted = vesselTrackToCompactConverter.convert(filtered);
allTracks.addAll(converted);
int totalPts = converted.stream().mapToInt(CompactVesselTrack::getPointCount).sum();
log.info("[CACHE-MONITOR] queryWithCache L1 HIT [{}, {}): resultVessels={}, compactTracks={}, points={}",
fr.getStart(), fr.getEnd(), fiveMinTracks.size(), converted.size(), totalPts);
log.info("[CACHE-MONITOR] queryWithCache L1 HIT [{}, {}): cacheVessels={}, filteredVessels={}, compactTracks={}, points={}",
fr.getStart(), fr.getEnd(), fiveMinTracks.size(), filtered.size(), converted.size(), totalPts);
// 캐시에 없는 MMSI → DB fallback
Set<String> missingMmsis = new HashSet<>(requestedMmsis);
missingMmsis.removeAll(fiveMinTracks.keySet());
missingMmsis.removeAll(filtered.keySet());
if (!missingMmsis.isEmpty()) {
VesselTracksRequest fallbackReq = VesselTracksRequest.builder()
.startTime(fr.getStart()).endTime(fr.getEnd())
@ -493,7 +444,7 @@ public class GisServiceV2 {
List<CompactVesselTrack> dbResult = gisService.getVesselTracks(fallbackReq);
allTracks.addAll(dbResult);
log.info("[CACHE-MONITOR] queryWithCache L1 PARTIAL → DB fallback: cacheHit={}, cacheMiss={}, dbTracks={}",
fiveMinTracks.size(), missingMmsis.size(), dbResult.size());
filtered.size(), missingMmsis.size(), dbResult.size());
}
} else {
VesselTracksRequest fiveMinReq = VesselTracksRequest.builder()
@ -508,6 +459,7 @@ public class GisServiceV2 {
// 4. 동일 선박 병합 (캐시 + DB 결과)
List<CompactVesselTrack> merged = mergeTracksByVessel(allTracks);
allTracks.clear();
return merged;
}
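Each cache tier in `queryWithCache` follows the same shape: look up only the requested keys, compute the set the cache did not have, and send just that remainder to the DB fallback. A generic sketch of the pattern with plain maps and sets; the `fetch` name and `Function` parameters are illustrative, not the service's API:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;

// Sketch of the "cache hit + missing-key DB fallback" pattern.
public class PartialFallback {
    public static <K, V> Map<K, V> fetch(Set<K> requested,
                                         Function<Set<K>, Map<K, V>> cacheLookup,
                                         Function<Set<K>, Map<K, V>> dbLookup) {
        Map<K, V> result = new HashMap<>(cacheLookup.apply(requested)); // keys the cache held
        Set<K> missing = new HashSet<>(requested);
        missing.removeAll(result.keySet());                             // keys it did not
        if (!missing.isEmpty()) {
            result.putAll(dbLookup.apply(missing));                     // fallback only for those
        }
        return result;
    }
}
```

The payoff is the same as in the logs above: a PARTIAL hit costs one narrow DB query instead of re-reading the whole range.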


@ -1,15 +1,10 @@
package gc.mda.signal_batch.domain.gis.service;
import gc.mda.signal_batch.batch.reader.ChnPrmShipProperties;
import gc.mda.signal_batch.domain.gis.dto.VesselContactRequest;
import gc.mda.signal_batch.domain.gis.dto.VesselContactResponse;
import gc.mda.signal_batch.domain.gis.dto.VesselContactResponse.*;
import gc.mda.signal_batch.domain.vessel.dto.CompactVesselTrack;
import gc.mda.signal_batch.global.exception.QueryTimeoutException;
import gc.mda.signal_batch.global.util.TrackMemoryEstimator;
import gc.mda.signal_batch.global.websocket.service.ActiveQueryManager;
import gc.mda.signal_batch.global.websocket.service.DailyTrackCacheManager;
import gc.mda.signal_batch.global.websocket.service.TrackMemoryBudgetManager;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.locationtech.jts.geom.*;
@ -29,9 +24,6 @@ public class VesselContactService {
private final AreaSearchService areaSearchService;
private final DailyTrackCacheManager cacheManager;
private final ActiveQueryManager activeQueryManager;
private final TrackMemoryBudgetManager memoryBudgetManager;
private final ChnPrmShipProperties chnPrmShipProperties;
private static final GeometryFactory GEOMETRY_FACTORY = new GeometryFactory();
private static final double EARTH_RADIUS_M = 6_371_000.0;
@ -57,133 +49,103 @@ public class VesselContactService {
return buildEmptyResponse(request, targetDates, startMs);
}
// 3. Concurrency and memory management
String queryId = "contact-search-" + Long.toHexString(System.nanoTime());
boolean slotAcquired = false, memoryReserved = false;
try {
if (!activeQueryManager.tryAcquireQuerySlotImmediate(queryId)) {
if (!activeQueryManager.tryAcquireQuerySlot(queryId)) {
throw new QueryTimeoutException("서버 과부하: contact-search 슬롯 대기 타임아웃");
}
}
slotAcquired = true;
long estimatedBytes = TrackMemoryEstimator.estimateQueryBytes(targetDates.size(), 2000);
memoryBudgetManager.reserveQueryMemory(queryId, estimatedBytes, 30_000L);
memoryReserved = true;
Map<String, CompactVesselTrack> mergedTracks = areaSearchService.mergeMultipleDays(targetDates);
if (mergedTracks.isEmpty()) {
return buildEmptyResponse(request, targetDates, startMs);
}
// 3-1. ChnPrmShip filtering
if (request.isChnPrmShipOnly()) {
int totalBefore = mergedTracks.size();
Set<String> chnPrmMmsiSet = chnPrmShipProperties.getMmsiSet();
mergedTracks.entrySet().removeIf(e -> !chnPrmMmsiSet.contains(e.getKey()));
log.debug("ChnPrmShip 필터 적용: {} → {} 선박", totalBefore, mergedTracks.size());
if (mergedTracks.isEmpty()) {
return buildEmptyResponse(request, targetDates, startMs);
}
}
// 4. JTS Polygon + PreparedGeometry
VesselContactRequest.SearchPolygon poly = request.getPolygon();
Polygon jtsPolygon = areaSearchService.toJtsPolygon(poly.getCoordinates());
PreparedGeometry prepared = PreparedGeometryFactory.prepare(jtsPolygon);
// 5. STRtree candidate filtering + collect points inside the polygon
STRtree spatialIndex = areaSearchService.buildSpatialIndex(mergedTracks);
Envelope mbr = jtsPolygon.getEnvelopeInternal();
@SuppressWarnings("unchecked")
List<String> candidates = spatialIndex.query(mbr);
long minDurationSec = request.getMinContactDurationMinutes() * 60L;
double maxDistanceMeters = request.getMaxContactDistanceMeters();
Map<String, List<InsidePosition>> insidePositions = new HashMap<>();
for (String vesselId : candidates) {
CompactVesselTrack track = mergedTracks.get(vesselId);
if (track == null || track.getGeometry() == null) continue;
List<InsidePosition> inside = collectInsidePositions(track, prepared);
if (!inside.isEmpty()) {
insidePositions.put(vesselId, inside);
}
}
int totalVesselsInPolygon = insidePositions.size();
log.info("Vessel contact: merged={}, insidePolygon={}, chnPrmOnly={}, dates={}",
mergedTracks.size(), totalVesselsInPolygon, request.isChnPrmShipOnly(), targetDates.size());
// 6. Time-range overlap pre-filter + pairwise contact detection
List<String> vesselIds = new ArrayList<>(insidePositions.keySet());
List<VesselContactPair> contactPairs = new ArrayList<>();
Set<String> involvedVessels = new HashSet<>();
for (int i = 0; i < vesselIds.size(); i++) {
String idA = vesselIds.get(i);
List<InsidePosition> posA = insidePositions.get(idA);
long minTsA = posA.get(0).timestamp;
long maxTsA = posA.get(posA.size() - 1).timestamp;
for (int j = i + 1; j < vesselIds.size(); j++) {
String idB = vesselIds.get(j);
List<InsidePosition> posB = insidePositions.get(idB);
long minTsB = posB.get(0).timestamp;
long maxTsB = posB.get(posB.size() - 1).timestamp;
// Time-overlap pre-filter (honors minContactDuration)
long overlap = Math.min(maxTsA, maxTsB) - Math.max(minTsA, minTsB);
if (overlap < minDurationSec) continue;
// Two-pointer contact detection
List<VesselContactPair> pairs = detectContacts(
idA, posA, idB, posB,
mergedTracks.get(idA), mergedTracks.get(idB),
minDurationSec, maxDistanceMeters);
if (!pairs.isEmpty()) {
contactPairs.addAll(pairs);
involvedVessels.add(idA);
involvedVessels.add(idB);
}
}
}
// 7. Collect tracks of involved vessels
List<CompactVesselTrack> resultTracks = involvedVessels.stream()
.map(mergedTracks::get)
.filter(Objects::nonNull)
.collect(Collectors.toList());
long elapsedMs = System.currentTimeMillis() - startMs;
log.info("Vessel contact completed: pairs={}, vessels={}, elapsed={}ms",
contactPairs.size(), involvedVessels.size(), elapsedMs);
return VesselContactResponse.builder()
.contacts(contactPairs)
.tracks(resultTracks)
.summary(VesselContactSummary.builder()
.totalContactPairs(contactPairs.size())
.totalVesselsInvolved(involvedVessels.size())
.totalVesselsInPolygon(totalVesselsInPolygon)
.processingTimeMs(elapsedMs)
.polygonId(poly.getId())
.cachedDates(targetDates.stream()
.map(LocalDate::toString)
.collect(Collectors.toList()))
.build())
.build();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
throw new QueryTimeoutException("contact-search 슬롯 대기 중 인터럽트");
} finally {
if (memoryReserved) memoryBudgetManager.releaseQueryMemory(queryId);
if (slotAcquired) activeQueryManager.releaseQuerySlot(queryId);
Map<String, CompactVesselTrack> mergedTracks = areaSearchService.mergeMultipleDays(targetDates);
if (mergedTracks.isEmpty()) {
return buildEmptyResponse(request, targetDates, startMs);
}
// 3. Use merged tracks directly (single collection source, no filtering needed)
Map<String, CompactVesselTrack> filtered = mergedTracks;
// 4. JTS Polygon + PreparedGeometry
VesselContactRequest.SearchPolygon poly = request.getPolygon();
Polygon jtsPolygon = areaSearchService.toJtsPolygon(poly.getCoordinates());
PreparedGeometry prepared = PreparedGeometryFactory.prepare(jtsPolygon);
// 5. STRtree candidate filtering + collect points inside the polygon
STRtree spatialIndex = areaSearchService.buildSpatialIndex(filtered);
Envelope mbr = jtsPolygon.getEnvelopeInternal();
@SuppressWarnings("unchecked")
List<String> candidates = spatialIndex.query(mbr);
long minDurationSec = request.getMinContactDurationMinutes() * 60L;
double maxDistanceMeters = request.getMaxContactDistanceMeters();
Map<String, List<InsidePosition>> insidePositions = new HashMap<>();
for (String vesselId : candidates) {
CompactVesselTrack track = filtered.get(vesselId);
if (track == null || track.getGeometry() == null) continue;
List<InsidePosition> inside = collectInsidePositions(track, prepared);
if (!inside.isEmpty()) {
insidePositions.put(vesselId, inside);
}
}
int totalVesselsInPolygon = insidePositions.size();
log.info("Vessel contact: filtered={}, insidePolygon={}, dates={}",
filtered.size(), totalVesselsInPolygon, targetDates.size());
// 6. Time-range overlap pre-filter + pairwise contact detection
List<String> vesselIds = new ArrayList<>(insidePositions.keySet());
List<VesselContactPair> contactPairs = new ArrayList<>();
Set<String> involvedVessels = new HashSet<>();
for (int i = 0; i < vesselIds.size(); i++) {
String idA = vesselIds.get(i);
List<InsidePosition> posA = insidePositions.get(idA);
long minTsA = posA.get(0).timestamp;
long maxTsA = posA.get(posA.size() - 1).timestamp;
for (int j = i + 1; j < vesselIds.size(); j++) {
String idB = vesselIds.get(j);
List<InsidePosition> posB = insidePositions.get(idB);
long minTsB = posB.get(0).timestamp;
long maxTsB = posB.get(posB.size() - 1).timestamp;
// Time-overlap pre-filter (honors minContactDuration)
long overlap = Math.min(maxTsA, maxTsB) - Math.max(minTsA, minTsB);
if (overlap < minDurationSec) continue;
// Two-pointer contact detection
List<VesselContactPair> pairs = detectContacts(
idA, posA, idB, posB,
filtered.get(idA), filtered.get(idB),
minDurationSec, maxDistanceMeters);
if (!pairs.isEmpty()) {
contactPairs.addAll(pairs);
involvedVessels.add(idA);
involvedVessels.add(idB);
}
}
}
// 7. Collect tracks of involved vessels
List<CompactVesselTrack> resultTracks = involvedVessels.stream()
.map(mergedTracks::get)
.filter(Objects::nonNull)
.collect(Collectors.toList());
long elapsedMs = System.currentTimeMillis() - startMs;
log.info("Vessel contact completed: pairs={}, vessels={}, elapsed={}ms",
contactPairs.size(), involvedVessels.size(), elapsedMs);
return VesselContactResponse.builder()
.contacts(contactPairs)
.tracks(resultTracks)
.summary(VesselContactSummary.builder()
.totalContactPairs(contactPairs.size())
.totalVesselsInvolved(involvedVessels.size())
.totalVesselsInPolygon(totalVesselsInPolygon)
.processingTimeMs(elapsedMs)
.polygonId(poly.getId())
.cachedDates(targetDates.stream()
.map(LocalDate::toString)
.collect(Collectors.toList()))
.build())
.build();
}
// Input validation
@ -211,13 +173,10 @@ public class VesselContactService {
List<double[]> geometry = track.getGeometry();
List<String> timestamps = track.getTimestamps();
List<InsidePosition> inside = new ArrayList<>();
Coordinate reusable = new Coordinate();
for (int i = 0; i < geometry.size(); i++) {
double[] coord = geometry.get(i);
reusable.x = coord[0];
reusable.y = coord[1];
Point point = GEOMETRY_FACTORY.createPoint(reusable);
Point point = GEOMETRY_FACTORY.createPoint(new Coordinate(coord[0], coord[1]));
if (prepared.contains(point)) {
long ts = parseTimestamp(timestamps, i);
inside.add(new InsidePosition(ts, coord[0], coord[1]));
@ -273,7 +232,7 @@ public class VesselContactService {
long diff = Math.abs(a.timestamp - b.timestamp);
if (diff <= SYNC_TOLERANCE_SEC) {
double dist = equirectangularMeters(a.lat, a.lon, b.lat, b.lon);
double dist = haversineMeters(a.lat, a.lon, b.lat, b.lon);
long ts = Math.min(a.timestamp, b.timestamp) + diff / 2; // midpoint time
matched.add(new MatchedPoint(ts, dist, a, b));
pA++;
@ -319,19 +278,13 @@ public class VesselContactService {
long contactEnd = segment.get(segment.size() - 1).timestamp;
long durationMin = (contactEnd - contactStart) / 60;
// Compute distance stats and center point in a single loop
double minDist = Double.MAX_VALUE, maxDist = 0, sumDist = 0;
double sumCenterLon = 0, sumCenterLat = 0;
for (MatchedPoint p : segment) {
if (p.distanceMeters < minDist) minDist = p.distanceMeters;
if (p.distanceMeters > maxDist) maxDist = p.distanceMeters;
sumDist += p.distanceMeters;
sumCenterLon += (p.posA.lon + p.posB.lon) / 2;
sumCenterLat += (p.posA.lat + p.posB.lat) / 2;
}
double avgDist = sumDist / segment.size();
double centerLon = sumCenterLon / segment.size();
double centerLat = sumCenterLat / segment.size();
DoubleSummaryStatistics distStats = segment.stream()
.mapToDouble(p -> p.distanceMeters)
.summaryStatistics();
// Compute the contact center point
double centerLon = segment.stream().mapToDouble(p -> (p.posA.lon + p.posB.lon) / 2).average().orElse(0);
double centerLat = segment.stream().mapToDouble(p -> (p.posA.lat + p.posB.lat) / 2).average().orElse(0);
// Estimate each vessel's speed from its inside points over the contact interval
double speedA = estimateAvgSpeed(insidePosA, contactStart, contactEnd);
@ -346,9 +299,9 @@ public class VesselContactService {
.contactStartTimestamp(contactStart)
.contactEndTimestamp(contactEnd)
.contactDurationMinutes(durationMin)
.minDistanceMeters(Math.round(minDist * 10.0) / 10.0)
.avgDistanceMeters(Math.round(avgDist * 10.0) / 10.0)
.maxDistanceMeters(Math.round(maxDist * 10.0) / 10.0)
.minDistanceMeters(Math.round(distStats.getMin() * 10.0) / 10.0)
.avgDistanceMeters(Math.round(distStats.getAverage() * 10.0) / 10.0)
.maxDistanceMeters(Math.round(distStats.getMax() * 10.0) / 10.0)
.contactCenterPoint(new double[]{
Math.round(centerLon * 1_000_000.0) / 1_000_000.0,
Math.round(centerLat * 1_000_000.0) / 1_000_000.0})
@ -407,15 +360,27 @@ public class VesselContactService {
* Determines whether the contact interval overlaps the 22:00-06:00 KST night window.
*/
private boolean isNightTimeContact(long contactStartSec, long contactEndSec) {
ZonedDateTime startKst = Instant.ofEpochSecond(contactStartSec).atZone(KST);
ZonedDateTime endKst = Instant.ofEpochSecond(contactEndSec).atZone(KST);
Instant startInstant = Instant.ofEpochSecond(contactStartSec);
Instant endInstant = Instant.ofEpochSecond(contactEndSec);
// Check overlap between the contact interval and each date's night window (22:00 to next-day 06:00)
ZonedDateTime startKst = startInstant.atZone(KST);
ZonedDateTime endKst = endInstant.atZone(KST);
// Check night-window overlap for every date in the contact interval
LocalDate day = startKst.toLocalDate();
while (!day.isAfter(endKst.toLocalDate())) {
ZonedDateTime nightStart = day.atTime(22, 0).atZone(KST);
ZonedDateTime nightEnd = day.plusDays(1).atTime(6, 0).atZone(KST);
if (startKst.isBefore(nightEnd) && endKst.isAfter(nightStart)) {
LocalDate lastDay = endKst.toLocalDate().plusDays(1);
while (!day.isAfter(lastDay)) {
// Night window for this date: previous day 22:00 to this day 06:00
ZonedDateTime nightStart = day.atTime(LocalTime.of(22, 0)).atZone(KST).minusDays(1);
ZonedDateTime nightEnd = day.atTime(LocalTime.of(6, 0)).atZone(KST);
// This day 22:00 to next day 06:00
ZonedDateTime nightStart2 = day.atTime(LocalTime.of(22, 0)).atZone(KST);
ZonedDateTime nightEnd2 = day.plusDays(1).atTime(LocalTime.of(6, 0)).atZone(KST);
if (isOverlapping(startKst, endKst, nightStart, nightEnd)
|| isOverlapping(startKst, endKst, nightStart2, nightEnd2)) {
return true;
}
day = day.plusDays(1);
@ -423,6 +388,11 @@ public class VesselContactService {
return false;
}
private boolean isOverlapping(ZonedDateTime s1, ZonedDateTime e1,
ZonedDateTime s2, ZonedDateTime e2) {
return s1.isBefore(e2) && s2.isBefore(e1);
}
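The interval-overlap predicate above is easy to sanity-check in isolation. The standalone sketch below (hypothetical class name `OverlapSketch`) reproduces the same half-open `isBefore` logic against a 22:00-06:00 KST night window; it is an illustration of the technique, not project code:

```java
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class OverlapSketch {
    // Two half-open intervals [s1, e1) and [s2, e2) overlap
    // iff each starts before the other ends.
    static boolean isOverlapping(ZonedDateTime s1, ZonedDateTime e1,
                                 ZonedDateTime s2, ZonedDateTime e2) {
        return s1.isBefore(e2) && s2.isBefore(e1);
    }

    public static void main(String[] args) {
        ZoneId kst = ZoneId.of("Asia/Seoul");
        ZonedDateTime contactStart = ZonedDateTime.of(2026, 3, 17, 21, 30, 0, 0, kst);
        ZonedDateTime contactEnd   = ZonedDateTime.of(2026, 3, 17, 22, 30, 0, 0, kst);
        ZonedDateTime nightStart   = ZonedDateTime.of(2026, 3, 17, 22, 0, 0, 0, kst);
        ZonedDateTime nightEnd     = ZonedDateTime.of(2026, 3, 18, 6, 0, 0, 0, kst);
        // Contact 21:30-22:30 overlaps the 22:00-06:00 night window.
        assert isOverlapping(contactStart, contactEnd, nightStart, nightEnd);
        // A contact ending exactly at 22:00 does not overlap (half-open semantics).
        assert !isOverlapping(contactStart, nightStart, nightStart, nightEnd);
        System.out.println("ok");
    }
}
```

Note that with this predicate a contact that merely touches the window boundary is not counted, which matches `isBefore` being strict.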
// Estimated speed calculation
/**
@ -454,16 +424,16 @@ public class VesselContactService {
return totalHours > 0 ? totalDistNm / totalHours : 0.0;
}
// Distance calculation
// Haversine distance calculation
/**
* Equirectangular approximation for contact-distance checks (error < 0.1% within 10 km)
* Roughly 2x faster than haversine (one Math.cos + one Math.sqrt call)
*/
private double equirectangularMeters(double lat1, double lon1, double lat2, double lon2) {
private double haversineMeters(double lat1, double lon1, double lat2, double lon2) {
double dLat = Math.toRadians(lat2 - lat1);
double dLon = Math.toRadians(lon2 - lon1) * Math.cos(Math.toRadians((lat1 + lat2) / 2));
return EARTH_RADIUS_M * Math.sqrt(dLat * dLat + dLon * dLon);
double dLon = Math.toRadians(lon2 - lon1);
double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
+ Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
* Math.sin(dLon / 2) * Math.sin(dLon / 2);
double c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
return EARTH_RADIUS_M * c;
}
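This hunk swaps the equirectangular approximation for the full haversine formula. The standalone sketch below (class name hypothetical) reproduces both and checks that they agree to well under 0.1% at typical contact ranges, which is why the approximation was acceptable for short distances:

```java
public class DistanceSketch {
    static final double EARTH_RADIUS_M = 6_371_000.0;

    // Full great-circle distance (haversine).
    static double haversineMeters(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return EARTH_RADIUS_M * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
    }

    // Flat-earth approximation: scale longitude by cos of the mean latitude.
    static double equirectangularMeters(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1) * Math.cos(Math.toRadians((lat1 + lat2) / 2));
        return EARTH_RADIUS_M * Math.sqrt(dLat * dLat + dLon * dLon);
    }

    public static void main(String[] args) {
        // Two points roughly 1.4 km apart near 35N 129E.
        double h = haversineMeters(35.0, 129.0, 35.01, 129.01);
        double e = equirectangularMeters(35.0, 129.0, 35.01, 129.01);
        assert h > 1_000 && h < 2_000;
        assert Math.abs(h - e) / h < 0.001; // agreement within 0.1% at this range
        System.out.printf("haversine=%.1f m, equirect=%.1f m%n", h, e);
    }
}
```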
private double haversineNm(double lat1, double lon1, double lat2, double lon2) {

View file

@ -72,10 +72,10 @@ public class SequentialPassageController {
.collect(Collectors.toList());
results = trackingService.findSequentialGridPassages(
haeguNumbers, request.getStartTime(), request.getEndTime(), request.isChnPrmShipOnly());
haeguNumbers, request.getStartTime(), request.getEndTime());
} else {
results = trackingService.findSequentialAreaPassages(
request.getZoneIds(), request.getStartTime(), request.getEndTime(), request.isChnPrmShipOnly());
request.getZoneIds(), request.getStartTime(), request.getEndTime());
}
// Build the response

View file

@ -57,10 +57,6 @@ public class SequentialPassageRequest {
@Schema(description = "순차 통과 여부 (true: 순서대로 통과, false: 모든 구역 통과)", example = "true", defaultValue = "true")
@Builder.Default
private Boolean sequentialOnly = true;
@Schema(description = "true 시 중국허가선박만 대상으로 순차 통과 조회", example = "false")
@Builder.Default
private boolean chnPrmShipOnly = false;
public enum PassageType {
GRID, AREA

View file

@ -1,6 +1,5 @@
package gc.mda.signal_batch.domain.passage.service;
import gc.mda.signal_batch.batch.reader.ChnPrmShipProperties;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.jdbc.core.JdbcTemplate;
@ -9,10 +8,8 @@ import org.springframework.stereotype.Service;
import javax.sql.DataSource;
import java.sql.Timestamp;
import java.time.LocalDateTime;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;
/**
* Optimized service for finding vessels that pass through zones sequentially
@ -25,140 +22,120 @@ import java.util.Set;
public class SequentialAreaTrackingService {
private final DataSource queryDataSource;
private final ChnPrmShipProperties chnPrmShipProperties;
public SequentialAreaTrackingService(@Qualifier("queryDataSource") DataSource queryDataSource,
ChnPrmShipProperties chnPrmShipProperties) {
public SequentialAreaTrackingService(@Qualifier("queryDataSource") DataSource queryDataSource) {
this.queryDataSource = queryDataSource;
this.chnPrmShipProperties = chnPrmShipProperties;
}
/**
* Find vessels that passed through the given zones in order (Grid)
* Dynamically builds N-zone SQL JOINs (2-10 zones)
*/
public List<Map<String, Object>> findSequentialGridPassages(
List<Integer> haeguNumbers,
LocalDateTime startTime,
LocalDateTime endTime,
boolean chnPrmShipOnly) {
int n = haeguNumbers.size();
if (n < 2 || n > 10) {
throw new IllegalArgumentException("구역은 2~10개까지 지정 가능합니다: " + n);
}
LocalDateTime endTime) {
JdbcTemplate jdbcTemplate = new JdbcTemplate(queryDataSource);
StringBuilder sql = new StringBuilder();
sql.append("WITH vessel_passages AS (\n");
sql.append(" SELECT DISTINCT mmsi, haegu_no,\n");
sql.append(" FIRST_VALUE(time_bucket) OVER (PARTITION BY mmsi, haegu_no ORDER BY time_bucket) as entry_time,\n");
sql.append(" LAST_VALUE(time_bucket) OVER (PARTITION BY mmsi, haegu_no ORDER BY time_bucket ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as exit_time\n");
sql.append(" FROM signal.t_grid_vessel_tracks\n");
sql.append(" WHERE time_bucket BETWEEN ? AND ?\n");
sql.append(" AND haegu_no = ANY(ARRAY[?]::integer[])\n");
if (chnPrmShipOnly) {
sql.append(" AND mmsi = ANY(ARRAY[?]::varchar[])\n");
}
sql.append(")\n");
// Dynamically build SELECT columns
sql.append("SELECT v1.mmsi");
for (int i = 1; i <= n; i++) {
sql.append(String.format(", v%d.entry_time as haegu%d_entry, v%d.exit_time as haegu%d_exit", i, i, i, i));
}
sql.append("\nFROM vessel_passages v1\n");
// Dynamically build JOINs (v2-vN)
for (int i = 2; i <= n; i++) {
sql.append(String.format("JOIN vessel_passages v%d ON v%d.mmsi = v1.mmsi AND v%d.haegu_no = ? AND v%d.entry_time > v%d.exit_time\n",
i, i, i, i, i - 1));
}
sql.append("WHERE v1.haegu_no = ?\n");
sql.append("ORDER BY v1.entry_time");
// Build parameters
List<Object> params = new ArrayList<>();
params.add(Timestamp.valueOf(startTime));
params.add(Timestamp.valueOf(endTime));
params.add(haeguNumbers.toArray(Integer[]::new));
if (chnPrmShipOnly) {
Set<String> mmsiSet = chnPrmShipProperties.getMmsiSet();
params.add(mmsiSet.toArray(String[]::new));
}
// haegu_no parameters for v2-vN
for (int i = 1; i < n; i++) {
params.add(haeguNumbers.get(i));
}
// WHERE condition on v1's haegu_no
params.add(haeguNumbers.get(0));
return jdbcTemplate.queryForList(sql.toString(), params.toArray());
// The multiply-referenced CTE is materialized by PostgreSQL, pinning the intermediate result
String sql = """
WITH vessel_passages AS (
SELECT DISTINCT
mmsi,
haegu_no,
FIRST_VALUE(time_bucket) OVER (
PARTITION BY mmsi, haegu_no
ORDER BY time_bucket
) as entry_time,
LAST_VALUE(time_bucket) OVER (
PARTITION BY mmsi, haegu_no
ORDER BY time_bucket
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
) as exit_time
FROM signal.t_grid_vessel_tracks
WHERE time_bucket BETWEEN ? AND ?
AND haegu_no = ANY(ARRAY[?]::integer[])
)
SELECT
v1.mmsi,
v1.entry_time as haegu1_entry,
v1.exit_time as haegu1_exit,
v2.entry_time as haegu2_entry,
v2.exit_time as haegu2_exit,
v3.entry_time as haegu3_entry,
v3.exit_time as haegu3_exit
FROM vessel_passages v1
JOIN vessel_passages v2 ON v1.mmsi = v2.mmsi
AND v2.haegu_no = ? AND v2.entry_time > v1.exit_time
JOIN vessel_passages v3 ON v2.mmsi = v3.mmsi
AND v3.haegu_no = ? AND v3.entry_time > v2.exit_time
WHERE v1.haegu_no = ?
ORDER BY v1.entry_time
""";
return jdbcTemplate.queryForList(sql,
Timestamp.valueOf(startTime),
Timestamp.valueOf(endTime),
haeguNumbers.toArray(Integer[]::new),
haeguNumbers.get(1),
haeguNumbers.get(2),
haeguNumbers.get(0)
);
}
/**
* Find vessels that passed through the given zones in order (Area)
* Dynamically builds N-zone SQL JOINs (2-10 zones)
*/
public List<Map<String, Object>> findSequentialAreaPassages(
List<String> areaIds,
LocalDateTime startTime,
LocalDateTime endTime,
boolean chnPrmShipOnly) {
int n = areaIds.size();
if (n < 2 || n > 10) {
throw new IllegalArgumentException("구역은 2~10개까지 지정 가능합니다: " + n);
}
LocalDateTime endTime) {
JdbcTemplate jdbcTemplate = new JdbcTemplate(queryDataSource);
StringBuilder sql = new StringBuilder();
sql.append("WITH area_passages AS (\n");
sql.append(" SELECT DISTINCT mmsi, area_id,\n");
sql.append(" FIRST_VALUE(time_bucket) OVER (PARTITION BY mmsi, area_id ORDER BY time_bucket) as entry_time,\n");
sql.append(" LAST_VALUE(time_bucket) OVER (PARTITION BY mmsi, area_id ORDER BY time_bucket ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as exit_time\n");
sql.append(" FROM signal.t_area_vessel_tracks\n");
sql.append(" WHERE time_bucket BETWEEN ? AND ?\n");
sql.append(" AND area_id = ANY(ARRAY[?]::varchar[])\n");
if (chnPrmShipOnly) {
sql.append(" AND mmsi = ANY(ARRAY[?]::varchar[])\n");
}
sql.append(")\n");
// Dynamically build SELECT columns
sql.append("SELECT a1.mmsi");
for (int i = 1; i <= n; i++) {
sql.append(String.format(", a%d.entry_time as area%d_entry, a%d.exit_time as area%d_exit", i, i, i, i));
}
sql.append("\nFROM area_passages a1\n");
// Dynamically build JOINs (a2-aN)
for (int i = 2; i <= n; i++) {
sql.append(String.format("JOIN area_passages a%d ON a%d.mmsi = a1.mmsi AND a%d.area_id = ? AND a%d.entry_time > a%d.exit_time\n",
i, i, i, i, i - 1));
}
sql.append("WHERE a1.area_id = ?\n");
sql.append("ORDER BY a1.entry_time");
// Build parameters
List<Object> params = new ArrayList<>();
params.add(Timestamp.valueOf(startTime));
params.add(Timestamp.valueOf(endTime));
params.add(areaIds.toArray(String[]::new));
if (chnPrmShipOnly) {
Set<String> mmsiSet = chnPrmShipProperties.getMmsiSet();
params.add(mmsiSet.toArray(String[]::new));
}
// area_id parameters for a2-aN
for (int i = 1; i < n; i++) {
params.add(areaIds.get(i));
}
// WHERE condition on a1's area_id
params.add(areaIds.get(0));
return jdbcTemplate.queryForList(sql.toString(), params.toArray());
String sql = """
WITH area_passages AS (
SELECT DISTINCT
mmsi,
area_id,
FIRST_VALUE(time_bucket) OVER (
PARTITION BY mmsi, area_id
ORDER BY time_bucket
) as entry_time,
LAST_VALUE(time_bucket) OVER (
PARTITION BY mmsi, area_id
ORDER BY time_bucket
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
) as exit_time
FROM signal.t_area_vessel_tracks
WHERE time_bucket BETWEEN ? AND ?
AND area_id = ANY(ARRAY[?]::varchar[])
)
SELECT
a1.mmsi,
a1.entry_time as area1_entry,
a1.exit_time as area1_exit,
a2.entry_time as area2_entry,
a2.exit_time as area2_exit,
a3.entry_time as area3_entry,
a3.exit_time as area3_exit
FROM area_passages a1
JOIN area_passages a2 ON a1.mmsi = a2.mmsi
AND a2.area_id = ? AND a2.entry_time > a1.exit_time
JOIN area_passages a3 ON a2.mmsi = a3.mmsi
AND a3.area_id = ? AND a3.entry_time > a2.exit_time
WHERE a1.area_id = ?
ORDER BY a1.entry_time
""";
return jdbcTemplate.queryForList(sql,
Timestamp.valueOf(startTime),
Timestamp.valueOf(endTime),
areaIds.toArray(String[]::new),
areaIds.get(1),
areaIds.get(2),
areaIds.get(0)
);
}
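One detail worth noting in the fixed three-zone query above: the positional zone-ID parameters bind out of list order, because the JOIN conditions for zones 2 and 3 appear in the SQL text before zone 1's WHERE filter. A minimal sketch (hypothetical class name) of that binding order:

```java
import java.util.List;

public class ParamOrderSketch {
    public static void main(String[] args) {
        List<String> areaIds = List.of("A", "B", "C"); // requested passage order
        // Placeholder order in the SQL text: a2 JOIN, a3 JOIN, then a1 WHERE.
        Object[] zoneParams = {
                areaIds.get(1), // a2.area_id = ?  (second zone)
                areaIds.get(2), // a3.area_id = ?  (third zone)
                areaIds.get(0)  // a1.area_id = ?  (first zone, WHERE clause)
        };
        assert zoneParams[0].equals("B");
        assert zoneParams[1].equals("C");
        assert zoneParams[2].equals("A");
        System.out.println("ok");
    }
}
```

Getting this order wrong would silently return passages for the wrong zone sequence, so it is the first thing to check when the query misbehaves.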
/**

View file

@ -1,61 +0,0 @@
package gc.mda.signal_batch.domain.vessel.dto;
import io.swagger.v3.oas.annotations.media.Schema;
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Getter;
import lombok.NoArgsConstructor;
import java.util.List;
/**
* Request for recent vessel position detail lookup
*
* Spatial filter usage:
* - Polygon/rectangle: pass a closed coordinate ring in coordinates
* - Circle: pass center + radiusNm (the server converts it to a 64-point polygon)
* - Full lookup: leave both coordinates and center null
*/
@Getter
@Builder
@NoArgsConstructor
@AllArgsConstructor
@Schema(description = "최근 선박 위치 상세 조회 요청 (공간 필터 지원)")
public class RecentPositionDetailRequest {
@Schema(description = "조회 시간 범위 (분 단위, 1~1440)", example = "5")
@Builder.Default
private int minutes = 5;
@Schema(description = "폴리곤/사각형 좌표 배열 [[lon,lat],...] — 첫점과 끝점 동일",
example = "[[125,33],[130,33],[130,37],[125,37],[125,33]]")
private List<double[]> coordinates;
@Schema(description = "원 중심 좌표 [lon, lat]", example = "[129, 35]")
private double[] center;
@Schema(description = "원 반경 (해리, NM)", example = "50")
private Double radiusNm;
/**
* Whether a spatial filter was specified
*/
public boolean hasSpatialFilter() {
return (coordinates != null && !coordinates.isEmpty())
|| (center != null && radiusNm != null);
}
/**
* Whether this is a circle filter
*/
public boolean isCircleFilter() {
return center != null && center.length == 2 && radiusNm != null;
}
/**
* Whether this is a polygon/rectangle filter
*/
public boolean isPolygonFilter() {
return coordinates != null && coordinates.size() >= 4;
}
}

View file

@ -1,87 +0,0 @@
package gc.mda.signal_batch.domain.vessel.dto;
import com.fasterxml.jackson.annotation.JsonFormat;
import com.fasterxml.jackson.annotation.JsonInclude;
import io.swagger.v3.oas.annotations.media.Schema;
import java.math.BigDecimal;
import java.time.LocalDateTime;
/**
* Response for recent vessel position detail lookup
*
* All fields of the existing RecentVesselPositionDto plus extended AIS detail fields
*/
@JsonInclude(JsonInclude.Include.NON_NULL)
@Schema(description = "최근 선박 위치 상세 정보 (AIS 확장 필드 포함)")
public record RecentPositionDetailResponse(
// 기존 필드 (RecentVesselPositionDto 호환)
@Schema(description = "MMSI", example = "440113620")
String mmsi,
@Schema(description = "IMO 번호", example = "9141833")
Long imo,
@Schema(description = "경도 (WGS84)", example = "127.0638")
Double lon,
@Schema(description = "위도 (WGS84)", example = "34.227527")
Double lat,
@Schema(description = "대지속도 (knots)", example = "10.4")
BigDecimal sog,
@Schema(description = "대지침로 (도)", example = "215.3")
BigDecimal cog,
@Schema(description = "선박명", example = "SAM SUNG 2HO")
String shipNm,
@Schema(description = "선박 유형 (AIS ship type)", example = "74")
String shipTy,
@Schema(description = "선박 종류 코드", example = "000023")
String shipKindCode,
@Schema(description = "국가 코드 (MID 기반)", example = "KR")
String nationalCode,
@Schema(description = "최종 업데이트 시간", example = "2026-03-17 12:05:00")
@JsonFormat(pattern = "yyyy-MM-dd HH:mm:ss")
LocalDateTime lastUpdate,
@Schema(description = "선박 사진 썸네일 경로")
String shipImagePath,
@Schema(description = "선박 사진 수")
Integer shipImageCount,
// 확장 필드 (AIS 상세)
@Schema(description = "침로 (0~360도)", example = "215.0")
Double heading,
@Schema(description = "호출 부호", example = "HLBQ")
String callSign,
@Schema(description = "항해 상태", example = "Under way using engine")
String status,
@Schema(description = "목적지", example = "BUSAN")
String destination,
@Schema(description = "도착 예정시간", example = "2026-03-18 08:00:00")
@JsonFormat(pattern = "yyyy-MM-dd HH:mm:ss")
LocalDateTime eta,
@Schema(description = "흘수 (m)", example = "6.5")
Double draught,
@Schema(description = "선박 길이 (m)", example = "180")
Integer length,
@Schema(description = "선박 폭 (m)", example = "28")
Integer width
) {}

View file

@ -1,189 +0,0 @@
package gc.mda.signal_batch.domain.vessel.service;
import gc.mda.signal_batch.batch.reader.AisTargetCacheManager;
import gc.mda.signal_batch.domain.ship.service.ShipImageService;
import gc.mda.signal_batch.domain.ship.service.ShipImageService.ShipImageSummary;
import gc.mda.signal_batch.domain.vessel.dto.RecentPositionDetailRequest;
import gc.mda.signal_batch.domain.vessel.dto.RecentPositionDetailResponse;
import gc.mda.signal_batch.domain.vessel.model.AisTargetEntity;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.locationtech.jts.geom.*;
import org.locationtech.jts.geom.prep.PreparedGeometry;
import org.locationtech.jts.geom.prep.PreparedGeometryFactory;
import org.springframework.stereotype.Service;
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.time.LocalDateTime;
import java.time.OffsetDateTime;
import java.time.ZoneId;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
/**
* Service for recent vessel position detail lookup
*
* Reads directly from AisTargetCacheManager (~33K entries, refreshed every minute)
* and returns detail records with time and spatial (polygon/circle) filters applied
*/
@Slf4j
@Service
@RequiredArgsConstructor
public class VesselPositionDetailService {
private final AisTargetCacheManager aisTargetCacheManager;
private final ShipImageService shipImageService;
private static final GeometryFactory GEOMETRY_FACTORY = new GeometryFactory();
private static final int CIRCLE_POINTS = 64;
private static final double EARTH_RADIUS_NM = 3440.065;
private static final ZoneId KST = ZoneId.of("Asia/Seoul");
/**
* Recent vessel position detail lookup
*/
public List<RecentPositionDetailResponse> getRecentPositionsDetail(RecentPositionDetailRequest request) {
long startMs = System.currentTimeMillis();
Collection<AisTargetEntity> allEntities = aisTargetCacheManager.getAllValues();
OffsetDateTime threshold = OffsetDateTime.now().minusMinutes(request.getMinutes());
// Prepare the spatial filter (null = no filter)
PreparedGeometry spatialFilter = buildSpatialFilter(request);
// Single loop: time filter + spatial filter + conversion
List<RecentPositionDetailResponse> results = new ArrayList<>(1000);
Coordinate reusable = new Coordinate();
for (AisTargetEntity entity : allEntities) {
// Time filter
if (entity.getMessageTimestamp() == null || entity.getMessageTimestamp().isBefore(threshold)) {
continue;
}
// Position is required
if (entity.getLat() == null || entity.getLon() == null) {
continue;
}
// Spatial filter
if (spatialFilter != null) {
reusable.x = entity.getLon();
reusable.y = entity.getLat();
Point point = GEOMETRY_FACTORY.createPoint(reusable);
if (!spatialFilter.contains(point)) {
continue;
}
}
results.add(toResponse(entity));
}
log.debug("recent-positions-detail: {}건 / {}ms (전체: {}, minutes: {})",
results.size(), System.currentTimeMillis() - startMs,
allEntities.size(), request.getMinutes());
return results;
}
/**
* Build the spatial filter (PreparedGeometry) from the request
*/
private PreparedGeometry buildSpatialFilter(RecentPositionDetailRequest request) {
if (!request.hasSpatialFilter()) {
return null;
}
Polygon polygon;
if (request.isCircleFilter()) {
polygon = createCirclePolygon(
request.getCenter()[0], request.getCenter()[1],
request.getRadiusNm());
} else if (request.isPolygonFilter()) {
polygon = createPolygonFromCoordinates(request.getCoordinates());
} else {
return null;
}
return PreparedGeometryFactory.prepare(polygon);
}
/**
* Coordinate array to JTS Polygon
*/
private Polygon createPolygonFromCoordinates(List<double[]> coordinates) {
Coordinate[] coords = new Coordinate[coordinates.size()];
for (int i = 0; i < coordinates.size(); i++) {
double[] c = coordinates.get(i);
coords[i] = new Coordinate(c[0], c[1]);
}
return GEOMETRY_FACTORY.createPolygon(coords);
}
/**
* Circle to 64-point polygon conversion (equirectangular approximation)
*/
private Polygon createCirclePolygon(double centerLon, double centerLat, double radiusNm) {
double radiusRad = radiusNm / EARTH_RADIUS_NM;
double cosLat = Math.cos(Math.toRadians(centerLat));
Coordinate[] coords = new Coordinate[CIRCLE_POINTS + 1];
for (int i = 0; i < CIRCLE_POINTS; i++) {
double angle = 2.0 * Math.PI * i / CIRCLE_POINTS;
double dLat = Math.toDegrees(radiusRad * Math.cos(angle));
double dLon = Math.toDegrees(radiusRad * Math.sin(angle) / cosLat);
coords[i] = new Coordinate(centerLon + dLon, centerLat + dLat);
}
coords[CIRCLE_POINTS] = coords[0]; // close the ring
return GEOMETRY_FACTORY.createPolygon(coords);
}
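The circle-to-polygon conversion above can be reproduced without JTS. This sketch (hypothetical class name `CircleSketch`) applies the same equirectangular offsets, stretching longitude steps by 1/cos(lat) so the ring stays round on the ground:

```java
public class CircleSketch {
    static final int CIRCLE_POINTS = 64;
    static final double EARTH_RADIUS_NM = 3440.065;

    // Returns a closed ring of [lon, lat] pairs approximating the circle.
    static double[][] circleRing(double centerLon, double centerLat, double radiusNm) {
        double radiusRad = radiusNm / EARTH_RADIUS_NM; // angular radius
        double cosLat = Math.cos(Math.toRadians(centerLat));
        double[][] ring = new double[CIRCLE_POINTS + 1][2];
        for (int i = 0; i < CIRCLE_POINTS; i++) {
            double angle = 2.0 * Math.PI * i / CIRCLE_POINTS;
            ring[i][0] = centerLon + Math.toDegrees(radiusRad * Math.sin(angle) / cosLat);
            ring[i][1] = centerLat + Math.toDegrees(radiusRad * Math.cos(angle));
        }
        ring[CIRCLE_POINTS] = ring[0]; // close the ring, as JTS polygons require
        return ring;
    }

    public static void main(String[] args) {
        double[][] ring = circleRing(129.0, 35.0, 50.0); // 50 NM around 129E 35N
        assert ring.length == 65;
        assert ring[0][0] == ring[64][0] && ring[0][1] == ring[64][1]; // closed
        // The first point (angle 0) lies due north of the center.
        assert ring[0][0] == 129.0 && ring[0][1] > 35.0;
        System.out.println("ok");
    }
}
```

The approximation degrades near the poles (cosLat approaches 0), which is irrelevant for the Korean AIS coverage area this service targets.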
/**
* Convert AisTargetEntity to RecentPositionDetailResponse
*/
private RecentPositionDetailResponse toResponse(AisTargetEntity e) {
String mmsi = e.getMmsi();
String nationalCode = mmsi != null && mmsi.length() >= 3 ? mmsi.substring(0, 3) : "000";
String shipKindCode = e.getSignalKindCode() != null ? e.getSignalKindCode() : "000027";
Long imo = e.getImo() != null && e.getImo() > 0 ? e.getImo() : null;
// ShipImage enrichment
ShipImageSummary img = shipImageService.getImageSummary(imo);
return new RecentPositionDetailResponse(
mmsi,
imo,
round6(e.getLon()),
round6(e.getLat()),
scaleDecimal(e.getSog(), 1),
scaleDecimal(e.getCog(), 1),
e.getName(),
e.getVesselType(),
shipKindCode,
nationalCode,
toLocalDateTime(e.getMessageTimestamp()),
img != null ? img.thumbnailPath() : null,
img != null ? img.imageCount() : null,
// Extended fields
e.getHeading(),
e.getCallsign(),
e.getStatus(),
e.getDestination(),
toLocalDateTime(e.getEta()),
e.getDraught(),
e.getLength(),
e.getWidth()
);
}
private static Double round6(Double value) {
return value != null ? Math.round(value * 1_000_000) / 1_000_000.0 : null;
}
private static BigDecimal scaleDecimal(Double value, int scale) {
return value != null ? BigDecimal.valueOf(value).setScale(scale, RoundingMode.HALF_UP) : null;
}
private static LocalDateTime toLocalDateTime(OffsetDateTime odt) {
return odt != null ? odt.atZoneSameInstant(KST).toLocalDateTime() : null;
}
}

View file

@ -3,6 +3,7 @@ package gc.mda.signal_batch.domain.vessel.service;
import gc.mda.signal_batch.domain.ship.service.ShipImageService;
import gc.mda.signal_batch.domain.ship.service.ShipImageService.ShipImageSummary;
import gc.mda.signal_batch.domain.vessel.dto.RecentVesselPositionDto;
import gc.mda.signal_batch.global.util.SignalKindCode;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
@ -123,7 +124,6 @@ public class VesselPositionService {
cog,
name as ship_nm,
vessel_type as ship_ty,
signal_kind_code,
last_update
FROM signal.t_ais_position
WHERE last_update >= NOW() - INTERVAL '%d minutes'
@ -145,9 +145,8 @@ public class VesselPositionService {
String mmsi = rs.getString("mmsi");
String shipTy = rs.getString("ship_ty");
// shipKindCode: use the mapped value stored in the DB
String signalKindCode = rs.getString("signal_kind_code");
String shipKindCode = signalKindCode != null ? signalKindCode : "000027";
// compute shipKindCode (vesselType-based, no extraInfo)
String shipKindCode = SignalKindCode.resolve(shipTy, null).getCode();
// compute nationalCode (first 3 digits of MMSI = MID)
String nationalCode = mmsi != null && mmsi.length() >= 3

View File

@ -12,7 +12,7 @@ import org.springframework.web.reactive.function.client.WebClient;
*
* API: POST /AisSvc.svc/AIS/GetTargetsEnhanced
* 인증: Basic Authentication
* Buffer: 100MB (AIS GetTargets response ~20MB+, handles peaks above 50MB)
* Buffer: 50MB (AIS GetTargets response ~20MB+)
*/
@Slf4j
@Configuration
@ -37,7 +37,7 @@ public class AisApiWebClientConfig {
.defaultHeaders(headers -> headers.setBasicAuth(aisApiUsername, aisApiPassword))
.codecs(configurer -> configurer
.defaultCodecs()
.maxInMemorySize(100 * 1024 * 1024))
.maxInMemorySize(50 * 1024 * 1024))
.build();
}
}

View File

@ -1,45 +0,0 @@
package gc.mda.signal_batch.global.config;
import lombok.Getter;
import lombok.Setter;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;
/**
* Memory budget settings for track data
*
* Partitioning for a 64GB JVM:
* cache 35GB (55%) L1/L2/L3
* query 20GB (31%) concurrent REST/WebSocket queries
* system 9GB (14%) GC, thread stacks, Spring context (untracked)
*/
@Getter
@Setter
@Component
@ConfigurationProperties(prefix = "track.memory-budget")
public class TrackMemoryBudgetProperties {
/** Total JVM heap budget (GB) */
private int totalBudgetGb = 64;
/** Cache-only budget (GB): all of L1+L2+L3 */
private int cacheBudgetGb = 35;
/** Query-response budget (GB) */
private int queryBudgetGb = 20;
/** Maximum memory for a single query (GB) */
private int maxSingleQueryGb = 5;
/** Memory-estimate correction factor (from measurements) */
private double estimationCorrectionFactor = 1.8;
/** Query memory wait-queue timeout (seconds) */
private int queueTimeoutSeconds = 60;
/** Budget warning threshold (0.0~1.0) */
private double warningThreshold = 0.8;
/** Budget critical threshold (0.0~1.0) */
private double criticalThreshold = 0.95;
}

View File

@ -22,10 +22,8 @@ import org.springframework.http.server.ServletServerHttpRequest;
import com.fasterxml.jackson.databind.ObjectMapper;
import jakarta.servlet.http.Cookie;
import jakarta.servlet.http.HttpServletRequest;
import java.security.Principal;
import java.util.Base64;
import java.util.List;
import java.util.Map;
import java.util.UUID;
@ -182,18 +180,11 @@ public class WebSocketStompConfig implements WebSocketMessageBrokerConfigurer {
String clientIp = extractClientIp(request);
attributes.put("CLIENT_IP", clientIp);
// extract User-Agent
if (request instanceof ServletServerHttpRequest) {
HttpServletRequest servletRequest = ((ServletServerHttpRequest) request).getServletRequest();
// extract User-Agent
String userAgent = servletRequest.getHeader("User-Agent");
attributes.put("USER_AGENT", userAgent);
// extract JWT email from the GC_SESSION cookie (guide service auth)
String clientId = extractEmailFromJwtCookie(servletRequest);
if (clientId != null) {
attributes.put("CLIENT_ID", clientId);
}
}
return true;
@ -234,45 +225,5 @@ public class WebSocketStompConfig implements WebSocketMessageBrokerConfigurer {
// default when the request is not a ServletServerHttpRequest
return "unknown";
}
private String extractEmailFromJwtCookie(HttpServletRequest request) {
return extractClientIdFromRequest(request);
}
}
/**
* Extracts the email claim from the JWT payload in the GC_SESSION cookie (shared by REST/WebSocket).
* JWT validation is already done by nginx auth_request; only payload decoding happens here.
*/
public static String extractClientIdFromRequest(HttpServletRequest request) {
Cookie[] cookies = request.getCookies();
if (cookies == null) return null;
String token = null;
for (Cookie cookie : cookies) {
if ("GC_SESSION".equals(cookie.getName())) {
token = cookie.getValue();
break;
}
}
if (token == null || token.isEmpty()) return null;
try {
String[] parts = token.split("\\.");
if (parts.length < 2) return null;
String payload = new String(Base64.getUrlDecoder().decode(parts[1]));
int emailIdx = payload.indexOf("\"email\"");
if (emailIdx < 0) return null;
int colonIdx = payload.indexOf(':', emailIdx);
int quoteStart = payload.indexOf('"', colonIdx + 1);
int quoteEnd = payload.indexOf('"', quoteStart + 1);
if (quoteStart < 0 || quoteEnd < 0) return null;
return payload.substring(quoteStart + 1, quoteEnd);
} catch (Exception e) {
return null;
}
}
}
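The cookie-to-email path above can be exercised end to end by building a token with the JDK's Base64 URL encoder. A minimal sketch: `JwtEmailSketch` and the sample claims are illustrative, and the parsing mirrors `extractClientIdFromRequest` without the servlet plumbing:

```java
import java.util.Base64;

public class JwtEmailSketch {
    // Same strategy as extractClientIdFromRequest(): split on '.',
    // Base64URL-decode the payload, then scan for the "email" claim by string search.
    static String extractEmail(String token) {
        if (token == null || token.isEmpty()) return null;
        String[] parts = token.split("\\.");
        if (parts.length < 2) return null;
        String payload = new String(Base64.getUrlDecoder().decode(parts[1]));
        int emailIdx = payload.indexOf("\"email\"");
        if (emailIdx < 0) return null;
        int colonIdx = payload.indexOf(':', emailIdx);
        int quoteStart = payload.indexOf('"', colonIdx + 1);
        int quoteEnd = payload.indexOf('"', quoteStart + 1);
        if (quoteStart < 0 || quoteEnd < 0) return null;
        return payload.substring(quoteStart + 1, quoteEnd);
    }

    public static void main(String[] args) {
        String payload = "{\"email\":\"user@example.com\",\"exp\":1700000000}";
        // JWTs are Base64URL without padding; the JDK decoder accepts unpadded input
        String token = "header." + Base64.getUrlEncoder().withoutPadding()
                .encodeToString(payload.getBytes()) + ".sig";
        System.out.println(extractEmail(token)); // user@example.com
    }
}
```

String scanning instead of a JSON parser is fine here only because nginx has already validated the token; the method is a best-effort claim read, not a security check.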

View File

@ -1,16 +0,0 @@
package gc.mda.signal_batch.global.exception;
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.ResponseStatus;
/**
* Exception thrown when the memory budget is exceeded (503 Service Unavailable)
*
* Raised when a single query exceeds its cap, a wait times out, or the overall query budget runs short.
*/
@ResponseStatus(HttpStatus.SERVICE_UNAVAILABLE)
public class MemoryBudgetExceededException extends RuntimeException {
public MemoryBudgetExceededException(String message) {
super(message);
}
}

View File

@ -6,11 +6,10 @@ import lombok.RequiredArgsConstructor;
/**
* MDA ship-type legend code
*
* Based on vesselType + extraInfo + shipName from the S&P Global AIS API,
* Based on vesselType + extraInfo from the S&P Global AIS API,
* maps to the MDA legend code (signalKindCode).
*
* The mapping runs only once, at cache write time (AisTargetCacheWriter);
* API responses use the signal_kind_code from the cache or DB directly.
* Replaces ShipKindCodeConverter; ports the mapping logic from SNP-Batch-1.
*/
@Getter
@RequiredArgsConstructor
@ -29,32 +28,18 @@ public enum SignalKindCode {
private final String koreanName;
/**
* Maps vesselType + extraInfo to the MDA legend code (for backward compatibility)
* Cannot detect BUOY from shipName here; prefer the 3-parameter version at cache write time.
*/
public static SignalKindCode resolve(String vesselType, String extraInfo) {
return resolve(vesselType, extraInfo, null);
}
/**
* Maps vesselType + extraInfo + shipName to the MDA legend code
* Maps vesselType + extraInfo to the MDA legend code
*
* Mapping priority:
* 1. BUOY detection from shipName (two or more '.' or '_' characters: buoy/AtoN)
* 2. vesselType alone (Cargo, Tanker, Passenger, ...)
* 3. vesselType + extraInfo combination (Vessel + Fishing, ...)
* 4. fallback DEFAULT (000027)
* 1. vesselType alone (Cargo, Tanker, Passenger, AtoN, ...)
* 2. vesselType + extraInfo combination (Vessel + Fishing, ...)
* 3. fallback DEFAULT (000027)
*/
public static SignalKindCode resolve(String vesselType, String extraInfo, String shipName) {
// 1. BUOY detection from shipName: two or more '.' or '_' characters
if (hasBuoyNamePattern(shipName)) {
return BUOY;
}
public static SignalKindCode resolve(String vesselType, String extraInfo) {
String vt = normalizeOrEmpty(vesselType);
String ei = normalizeOrEmpty(extraInfo);
// 2. match vesselType alone
// 1. match vesselType alone
switch (vt) {
case "cargo":
return CARGO;
@ -63,7 +48,7 @@ public enum SignalKindCode {
case "passenger":
return FERRY;
case "aton":
return DEFAULT;
return BUOY;
case "law enforcement":
return GOV;
case "search and rescue":
@ -75,19 +60,19 @@ public enum SignalKindCode {
}
// vesselType group matching
if (matchesAny(vt, "pilot boat", "anti pollution", "medical transport")) {
if (matchesAny(vt, "tug", "pilot boat", "tender", "anti pollution", "medical transport")) {
return GOV;
}
if (matchesAny(vt, "high speed craft", "wing in ground-effect")) {
return FERRY;
}
// 3. "Vessel" + extraInfo combination
// 2. "Vessel" + extraInfo combination
if ("vessel".equals(vt)) {
return resolveVesselExtraInfo(ei);
}
// 4. "N/A" + extraInfo combination
// 3. "N/A" + extraInfo combination
if ("n/a".equals(vt)) {
if (ei.startsWith("hazardous cat")) {
return CARGO;
@ -95,7 +80,7 @@ public enum SignalKindCode {
return DEFAULT;
}
// 5. fallback
// 4. fallback
return DEFAULT;
}
@ -106,32 +91,18 @@ public enum SignalKindCode {
if ("military operations".equals(extraInfo)) {
return GOV;
}
if (matchesAny(extraInfo, "towing", "towing (large)", "dredging/underwater ops", "diving operations")) {
return GOV;
}
if (matchesAny(extraInfo, "pleasure craft", "sailing", "n/a")) {
return FISHING;
}
if (extraInfo.startsWith("hazardous cat")) {
return CARGO;
}
return DEFAULT;
}
/**
* Treat the ship as a buoy/AtoN when shipName contains two or more '.' or '_' characters
*/
static boolean hasBuoyNamePattern(String shipName) {
if (shipName == null || shipName.isBlank()) {
return false;
}
int count = 0;
for (int i = 0; i < shipName.length(); i++) {
char c = shipName.charAt(i);
if (c == '.' || c == '_') {
count++;
if (count >= 2) {
return true;
}
}
}
return false;
}
private static boolean matchesAny(String value, String... candidates) {
for (String candidate : candidates) {
if (candidate.equals(value)) {

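The resolution order described in the Javadoc can be condensed into a standalone sketch. The enum is reduced to a handful of constants; every code except DEFAULT's `000027` (which appears in the diff) is a placeholder, not a real legend code:

```java
public enum SignalKindCodeSketch {
    CARGO("000001"), TANKER("000002"), FERRY("000003"),
    FISHING("000004"), BUOY("000005"), GOV("000006"), DEFAULT("000027");

    private final String code;
    SignalKindCodeSketch(String code) { this.code = code; }
    public String getCode() { return code; }

    // Resolution order after this change: vesselType alone, then
    // "Vessel" + extraInfo, then DEFAULT fallback (no shipName pattern check).
    public static SignalKindCodeSketch resolve(String vesselType, String extraInfo) {
        String vt = vesselType == null ? "" : vesselType.trim().toLowerCase();
        String ei = extraInfo == null ? "" : extraInfo.trim().toLowerCase();
        switch (vt) {
            case "cargo": return CARGO;
            case "tanker": return TANKER;
            case "passenger": return FERRY;
            case "aton": return BUOY; // changed side of the diff: AtoN maps to BUOY
        }
        if ("vessel".equals(vt) && "fishing".equals(ei)) return FISHING;
        return DEFAULT;
    }

    public static void main(String[] args) {
        System.out.println(resolve("AtoN", null));         // BUOY
        System.out.println(resolve(null, null).getCode()); // 000027
    }
}
```

`resolve(null, null)` falls through to DEFAULT, which is why callers in this diff can invoke it unconditionally when no vessel info is available.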
View File

@ -1,45 +0,0 @@
package gc.mda.signal_batch.global.util;
import gc.mda.signal_batch.domain.vessel.dto.CompactVesselTrack;
import lombok.experimental.UtilityClass;
import java.util.List;
/**
* Estimates the heap footprint of a CompactVesselTrack in bytes
*
* Per-point memory rationale:
* double[2]: 32B (header 16B + data 16B) + ArrayList entry 8B = 40B
* String timestamp: ~48B (object 16B + char[] ~24B + ref 8B)
* Double speed: 24B (object 16B + double 8B)
* total: ~112B per point
*/
@UtilityClass
public class TrackMemoryEstimator {
private static final long BYTES_PER_POINT = 112L;
private static final long OBJECT_OVERHEAD = 300L;
public static long estimateTrackBytes(CompactVesselTrack track) {
if (track == null) return 0;
int points = track.getPointCount();
return OBJECT_OVERHEAD + (long) points * BYTES_PER_POINT;
}
public static long estimateListBytes(List<CompactVesselTrack> tracks) {
if (tracks == null || tracks.isEmpty()) return 0;
long total = 0;
for (CompactVesselTrack track : tracks) {
total += estimateTrackBytes(track);
}
return total;
}
/**
* Pre-estimate: assumes an average of 500 points
* days × vessels × 500 × 112B
*/
public static long estimateQueryBytes(int days, int estimatedVessels) {
return (long) days * estimatedVessels * 500 * BYTES_PER_POINT;
}
}
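The pre-estimate in `estimateQueryBytes` is plain arithmetic, so it is easy to check by hand: 7 days × 100 vessels × 500 points × 112B = 39,200,000 bytes, about 37.4 MB. A standalone sketch (class name illustrative):

```java
public class TrackMemoryEstimateDemo {
    static final long BYTES_PER_POINT = 112L; // per-point estimate from the class above

    // days × vessels × 500 avg points × 112B, as in estimateQueryBytes()
    static long estimateQueryBytes(int days, int estimatedVessels) {
        return (long) days * estimatedVessels * 500 * BYTES_PER_POINT;
    }

    public static void main(String[] args) {
        long bytes = estimateQueryBytes(7, 100); // 7 days, 100 vessels
        System.out.println(bytes);               // 39200000 (~37.4 MB)
    }
}
```

The `(long)` cast before multiplying matters: with four `int` factors the product would overflow well before realistic fleet sizes.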

View File

@ -122,18 +122,16 @@ public class VesselTrackToCompactConverter {
int pointCount = geometry.size();
double avgSpeed = pointCount > 0 ? totalDistance / Math.max(1, pointCount) * 60 : 0;
// set vessel info (use the signalKindCode already mapped in the cache)
// set vessel info
String shipName = null;
String shipType = null;
String shipKindCode = null;
if (vesselInfo != null) {
shipName = vesselInfo.getName();
shipType = vesselInfo.getVesselType();
shipKindCode = vesselInfo.getSignalKindCode() != null
? vesselInfo.getSignalKindCode()
: SignalKindCode.DEFAULT.getCode();
shipKindCode = SignalKindCode.resolve(vesselInfo.getVesselType(), vesselInfo.getExtraInfo()).getCode();
} else {
shipKindCode = SignalKindCode.DEFAULT.getCode();
shipKindCode = SignalKindCode.resolve(null, null).getCode();
}
String nationalCode = mmsi.length() >= 3 ? mmsi.substring(0, 3) : mmsi;

View File

@ -71,19 +71,6 @@ public class StompTrackController {
}
};
// extract CLIENT_IP and CLIENT_ID from session attributes
String clientIp = null;
String clientId = null;
Map<String, Object> sessionAttrs = headerAccessor.getSessionAttributes();
if (sessionAttrs != null) {
if (sessionAttrs.containsKey("CLIENT_IP")) {
clientIp = (String) sessionAttrs.get("CLIENT_IP");
}
if (sessionAttrs.containsKey("CLIENT_ID")) {
clientId = (String) sessionAttrs.get("CLIENT_ID");
}
}
// start async streaming - check for chunked mode
if (request.isChunkedMode()) {
chunkedTrackStreamingService.streamChunkedTracks(
@ -91,9 +78,7 @@ public class StompTrackController {
queryId,
sessionId,
chunk -> sendChunkedDataToUser(userId, chunk),
statusCallback,
clientIp,
clientId
statusCallback
);
} else {
trackStreamingService.streamTracks(
@ -128,9 +113,10 @@ public class StompTrackController {
trackStreamingService.cancelQuery(queryId);
chunkedTrackStreamingService.cancelQuery(queryId);
activeSessions.remove(sessionId);
return QueryResponse.cancelled(queryId);
}
// return cancel success even without a session (idempotent: query already completed/cancelled)
return QueryResponse.cancelled(queryId);
return QueryResponse.error(queryId, "Query not found");
}
/**

View File

@ -316,11 +316,4 @@ public class ActiveQueryManager {
public int getMaxConcurrentGlobal() {
return maxConcurrentGlobal;
}
/**
* Queue timeout (seconds)
*/
public int getQueueTimeoutSeconds() {
return queueTimeoutSeconds;
}
}

View File

@ -117,9 +117,6 @@ public class CacheTrackSimplifier {
track.setPointCount(afterZoom);
// recalculate speeds after simplification (based on point distance/time)
recalculateSpeeds(track);
// detailed log for the first 5 vessels (debug level)
if (simplifiedCount < 5) {
log.debug("[CacheSimplify] vessel={} original={} -> DP={} -> distTime={} -> zoom={} (avg={} kn)",
@ -142,43 +139,6 @@ public class CacheTrackSimplifier {
return tracks;
}
// for L3 cache storage: DP-only pre-simplification
/**
* Pre-simplification that applies only DP (Douglas-Peucker), for L3 cache storage.
* Preserves direction changes to keep fishing operation patterns (circles, zigzags).
* Distance/time filters are not applied, so only straight segments are removed.
*/
public void simplifyDpOnly(List<CompactVesselTrack> tracks, double dpTolerance) {
if (tracks == null || tracks.isEmpty()) return;
long startTime = System.currentTimeMillis();
int totalOriginal = 0;
int totalAfter = 0;
int simplifiedCount = 0;
for (CompactVesselTrack track : tracks) {
if (track.getGeometry() == null || track.getGeometry().size() <= 2) continue;
int before = track.getGeometry().size();
totalOriginal += before;
applyDouglasPeucker(track, dpTolerance);
recalculateSpeeds(track);
track.setPointCount(track.getGeometry().size());
totalAfter += track.getGeometry().size();
simplifiedCount++;
}
long elapsed = System.currentTimeMillis() - startTime;
if (simplifiedCount > 0) {
double reduction = (1 - (double) totalAfter / totalOriginal) * 100;
log.info("[DpPreSimplify] {} tracks, {} -> {} pts ({}% 감소), {}ms",
simplifiedCount, totalOriginal, totalAfter, Math.round(reduction), elapsed);
}
}
// step 1: Douglas-Peucker (replaces ST_Simplify)
private void applyDouglasPeucker(CompactVesselTrack track, double tolerance) {
@ -452,55 +412,6 @@ public class CacheTrackSimplifier {
if (sampledSpd != null) track.setSpeeds(sampledSpd);
}
// recalculate speeds after simplification
/**
* Recalculates speeds for the simplified points.
* For each remaining point, computed from the Haversine distance / time delta to the adjacent coordinate.
*/
private void recalculateSpeeds(CompactVesselTrack track) {
List<double[]> geometry = track.getGeometry();
List<String> timestamps = track.getTimestamps();
if (geometry == null || geometry.size() < 2 ||
timestamps == null || timestamps.size() != geometry.size()) {
return;
}
int size = geometry.size();
List<Double> speeds = new ArrayList<>(size);
speeds.add(0.0); // the first point has no previous point, so 0
for (int i = 1; i < size; i++) {
double[] prev = geometry.get(i - 1);
double[] curr = geometry.get(i);
try {
long prevTs = parseEpochSeconds(timestamps.get(i - 1));
long currTs = parseEpochSeconds(timestamps.get(i));
double timeDiffHours = (currTs - prevTs) / 3600.0;
if (timeDiffHours > 0) {
double distNm = calculateDistance(prev[1], prev[0], curr[1], curr[0]);
speeds.add(distNm / timeDiffHours); // knots
} else {
speeds.add(0.0);
}
} catch (Exception e) {
speeds.add(0.0);
}
}
track.setSpeeds(speeds);
}
private long parseEpochSeconds(String tsStr) {
if (tsStr == null) throw new IllegalArgumentException("null timestamp");
if (tsStr.matches("\\d{10,}")) {
return Long.parseLong(tsStr);
}
return LocalDateTime.parse(tsStr, TIMESTAMP_FORMATTER)
.atZone(java.time.ZoneId.systemDefault())
.toEpochSecond();
}
// distance calculation (Haversine, in nautical miles)
private double calculateDistance(double lat1, double lon1, double lat2, double lon2) {

The diff is not shown because the file is too large. Load Diff

View File

@ -40,13 +40,8 @@ public class DailyTrackCacheManager {
NOT_STARTED, LOADING, PARTIAL, READY, DISABLED
}
/** DP tolerance for L3 pre-simplification (~100m): keeps track shape, removes only straight segments */
private static final double L3_DP_TOLERANCE = 0.001;
private final DataSource queryDataSource;
private final DailyTrackCacheProperties cacheProperties;
private final TrackMemoryBudgetManager memoryBudgetManager;
private final CacheTrackSimplifier cacheTrackSimplifier;
// per-date cache (D-1 ~ D-N)
private final ConcurrentHashMap<LocalDate, DailyTrackData> cache = new ConcurrentHashMap<>();
@ -59,13 +54,9 @@ public class DailyTrackCacheManager {
public DailyTrackCacheManager(
@Qualifier("queryDataSource") DataSource queryDataSource,
DailyTrackCacheProperties cacheProperties,
TrackMemoryBudgetManager memoryBudgetManager,
CacheTrackSimplifier cacheTrackSimplifier) {
DailyTrackCacheProperties cacheProperties) {
this.queryDataSource = queryDataSource;
this.cacheProperties = cacheProperties;
this.memoryBudgetManager = memoryBudgetManager;
this.cacheTrackSimplifier = cacheTrackSimplifier;
}
/**
@ -174,19 +165,13 @@ public class DailyTrackCacheManager {
DailyTrackData data = loadDay(targetDate);
if (data != null && data.getVesselCount() > 0) {
// memory limit check (DailyTrackCacheProperties' own limit)
// memory limit check
if (totalMemory + data.getMemorySizeBytes() > maxMemoryBytes) {
log.warn("Cache memory limit reached: {}GB / {}GB. Stopping at D-{}",
totalMemory / (1024 * 1024 * 1024), cacheProperties.getMaxMemoryGb(), daysBack);
break;
}
// register with the memory budget manager
if (!memoryBudgetManager.registerCacheMemory(targetDate, data.getMemorySizeBytes())) {
log.warn("[MemoryBudget] 캐시 예산 초과로 D-{} ({}) 로드 중단", daysBack, targetDate);
break;
}
cache.put(targetDate, data);
totalMemory += data.getMemorySizeBytes();
loadedCount++;
@ -316,9 +301,8 @@ public class DailyTrackCacheManager {
double avgSpeed = acc.pointCount > 0 ? acc.totalDistance / Math.max(1, acc.pointCount) * 60 : 0;
// shipKindCode: use the value mapped at cache write time (with DB fallback)
String shipKindCode = acc.signalKindCode != null
? acc.signalKindCode : SignalKindCode.DEFAULT.getCode();
// compute shipKindCode
String shipKindCode = SignalKindCode.resolve(acc.shipType, null).getCode();
// compute nationalCode (first 3 digits of MMSI = MID)
String nationalCode = acc.mmsi.length() >= 3 ? acc.mmsi.substring(0, 3) : acc.mmsi;
@ -343,23 +327,6 @@ public class DailyTrackCacheManager {
estimatedMemory += tracks.size() * 200L; // object overhead
// DP pre-simplification: remove only straight segments, preserve direction changes (fishing patterns)
long memoryBeforeDp = estimatedMemory;
List<CompactVesselTrack> trackList = new ArrayList<>(tracks.values());
cacheTrackSimplifier.simplifyDpOnly(trackList, L3_DP_TOLERANCE);
// re-estimate memory after simplification
estimatedMemory = trackList.stream()
.mapToLong(t -> t.getPointCount() * 40L)
.sum();
estimatedMemory += tracks.size() * 200L; // object overhead
if (memoryBeforeDp > 0) {
long reduction = memoryBeforeDp > 0 ? Math.round((1 - (double) estimatedMemory / memoryBeforeDp) * 100) : 0;
log.info("[DailyLoadDay] {} DP pre-simplification: {}MB -> {}MB ({}% reduction, tolerance={})",
date, memoryBeforeDp / (1024 * 1024), estimatedMemory / (1024 * 1024), reduction, L3_DP_TOLERANCE);
}
// build STRtree spatial index
STRtree spatialIndex = buildSpatialIndex(tracks);
estimatedMemory += tracks.size() * 100L; // index overhead
@ -454,76 +421,6 @@ public class DailyTrackCacheManager {
.collect(Collectors.toList());
}
/**
* Direct O(1) lookup per requested MMSI key via dayTracks.get(mmsi).
* Large speedup over the full scan of the old getCachedTracksMultipleDays().
* e.g. 7 days × 100 MMSI = 700 get() calls vs 7 days × 50K vessels = 350K entries scanned
*/
public List<CompactVesselTrack> getCachedTracksForVessels(
List<LocalDate> dates, Set<String> mmsiKeys) {
if (mmsiKeys == null || mmsiKeys.isEmpty()) {
return Collections.emptyList();
}
Map<String, CompactVesselTrack.CompactVesselTrackBuilder> merged = new HashMap<>();
int lookupCount = 0;
int hitCount = 0;
for (LocalDate date : dates) {
DailyTrackData data = cache.get(date);
if (data == null) continue;
Map<String, CompactVesselTrack> dayTracks = data.getTracks();
for (String mmsi : mmsiKeys) {
CompactVesselTrack track = dayTracks.get(mmsi);
lookupCount++;
if (track == null) continue;
hitCount++;
CompactVesselTrack.CompactVesselTrackBuilder builder = merged.get(mmsi);
if (builder == null) {
builder = CompactVesselTrack.builder()
.vesselId(mmsi)
.nationalCode(track.getNationalCode())
.shipName(track.getShipName())
.shipType(track.getShipType())
.shipKindCode(track.getShipKindCode())
.geometry(new ArrayList<>(track.getGeometry()))
.timestamps(new ArrayList<>(track.getTimestamps()))
.speeds(new ArrayList<>(track.getSpeeds()))
.totalDistance(track.getTotalDistance())
.avgSpeed(track.getAvgSpeed())
.maxSpeed(track.getMaxSpeed())
.pointCount(track.getPointCount());
merged.put(mmsi, builder);
} else {
CompactVesselTrack existing = builder.build();
List<double[]> geo = new ArrayList<>(existing.getGeometry());
geo.addAll(track.getGeometry());
List<String> ts = new ArrayList<>(existing.getTimestamps());
ts.addAll(track.getTimestamps());
List<Double> sp = new ArrayList<>(existing.getSpeeds());
sp.addAll(track.getSpeeds());
builder.geometry(geo)
.timestamps(ts)
.speeds(sp)
.totalDistance(existing.getTotalDistance() + track.getTotalDistance())
.maxSpeed(Math.max(existing.getMaxSpeed(), track.getMaxSpeed()))
.pointCount(existing.getPointCount() + track.getPointCount());
}
}
}
log.info("[CACHE-MONITOR] L3.getCachedTracksForVessels: dates={}, requestedMmsi={}, lookups={}, hits={}, resultVessels={}",
dates.size(), mmsiKeys.size(), lookupCount, hitCount, merged.size());
return merged.values().stream()
.map(CompactVesselTrack.CompactVesselTrackBuilder::build)
.collect(Collectors.toList());
}
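The direct-lookup pattern the Javadoc describes, one `get()` per requested key instead of scanning every cache entry, reduces to a few lines. A simplified sketch with string lists standing in for `CompactVesselTrack`:

```java
import java.time.LocalDate;
import java.util.*;

public class DirectLookupSketch {
    // date -> (mmsi -> points for that day); stand-in for the per-day track maps
    static List<String> lookup(Map<LocalDate, Map<String, List<String>>> cache,
                               List<LocalDate> dates, Set<String> mmsiKeys) {
        List<String> merged = new ArrayList<>();
        for (LocalDate date : dates) {
            Map<String, List<String>> dayTracks = cache.get(date);
            if (dayTracks == null) continue;          // day not cached
            for (String mmsi : mmsiKeys) {            // O(dates × keys) get() calls,
                List<String> pts = dayTracks.get(mmsi); // not O(dates × all vessels)
                if (pts != null) merged.addAll(pts);
            }
        }
        return merged;
    }

    public static void main(String[] args) {
        Map<LocalDate, Map<String, List<String>>> cache = new HashMap<>();
        cache.put(LocalDate.of(2024, 1, 1), Map.of("440123456", List.of("p1", "p2")));
        cache.put(LocalDate.of(2024, 1, 2), Map.of("440123456", List.of("p3")));
        System.out.println(lookup(cache,
                List.of(LocalDate.of(2024, 1, 1), LocalDate.of(2024, 1, 2)),
                Set.of("440123456"))); // [p1, p2, p3]
    }
}
```

The production method additionally merges geometry, timestamps, and speeds per MMSI via the builder; the cost structure is the same as this sketch.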
/**
* Splits the requested range into cache / DB segments
*/
@ -636,7 +533,6 @@ public class DailyTrackCacheManager {
try {
DailyTrackData data = loadDay(yesterday);
if (data != null && data.getVesselCount() > 0) {
memoryBudgetManager.registerCacheMemory(yesterday, data.getMemorySizeBytes());
cache.put(yesterday, data);
log.info("Cache refreshed for {}: {} vessels, {} MB",
yesterday, data.getVesselCount(), data.getMemorySizeBytes() / (1024 * 1024));
@ -654,7 +550,6 @@ public class DailyTrackCacheManager {
for (LocalDate d : toRemove) {
DailyTrackData removed = cache.remove(d);
if (removed != null) {
memoryBudgetManager.releaseCacheMemory(d);
log.info("Evicted cache for {}: {} vessels, {} MB",
d, removed.getVesselCount(), removed.getMemorySizeBytes() / (1024 * 1024));
}
@ -747,7 +642,7 @@ public class DailyTrackCacheManager {
try (Connection conn = queryDataSource.getConnection()) {
String placeholders = batch.stream().map(id -> "?").collect(Collectors.joining(","));
String sql = "SELECT mmsi, name as ship_nm, vessel_type as ship_ty, signal_kind_code " +
String sql = "SELECT mmsi, name as ship_nm, vessel_type as ship_ty " +
"FROM signal.t_ais_position " +
"WHERE mmsi IN (" + placeholders + ")";
@ -763,7 +658,6 @@ public class DailyTrackCacheManager {
if (acc != null) {
acc.shipName = rs.getString("ship_nm");
acc.shipType = rs.getString("ship_ty");
acc.signalKindCode = rs.getString("signal_kind_code");
enriched++;
}
}
@ -807,7 +701,6 @@ public class DailyTrackCacheManager {
String mmsi;
String shipName;
String shipType;
String signalKindCode;
List<double[]> geometry = new ArrayList<>(500);
List<String> timestamps = new ArrayList<>(500);
List<Double> speeds = new ArrayList<>(500);

View File

@ -1,300 +0,0 @@
package gc.mda.signal_batch.global.websocket.service;
import gc.mda.signal_batch.global.config.TrackMemoryBudgetProperties;
import gc.mda.signal_batch.global.exception.MemoryBudgetExceededException;
import jakarta.annotation.PostConstruct;
import lombok.Getter;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;
import java.time.LocalDate;
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;
/**
* Memory budget manager for track data
*
* Logically partitions memory between the cache area and the query area
* so that large queries cannot squeeze batch jobs / caches.
*
* Query budget: FIFO waiting based on ReentrantLock(fair=true) + Condition.
* Cache budget: immediate register/release based on AtomicLong.
*/
@Slf4j
@Service
public class TrackMemoryBudgetManager {
@Getter
private final TrackMemoryBudgetProperties properties;
// cache memory tracking
private final AtomicLong cacheUsedBytes = new AtomicLong(0);
private final ConcurrentHashMap<String, Long> cacheAllocations = new ConcurrentHashMap<>();
// query memory tracking
private final AtomicLong queryUsedBytes = new AtomicLong(0);
private final ConcurrentHashMap<String, Long> queryAllocations = new ConcurrentHashMap<>();
private final AtomicInteger waitingQueryCount = new AtomicInteger(0);
// FIFO wait mechanism
private final ReentrantLock queryLock = new ReentrantLock(true); // fair=true
private final Condition queryBudgetAvailable = queryLock.newCondition();
// prevent duplicate logs
private volatile long lastPressureLogTime = 0;
public TrackMemoryBudgetManager(TrackMemoryBudgetProperties properties) {
this.properties = properties;
}
@PostConstruct
public void init() {
log.info("TrackMemoryBudgetManager 초기화 — total: {}GB, cache: {}GB, query: {}GB, maxSingleQuery: {}GB, correctionFactor: {}",
properties.getTotalBudgetGb(), properties.getCacheBudgetGb(),
properties.getQueryBudgetGb(), properties.getMaxSingleQueryGb(),
properties.getEstimationCorrectionFactor());
}
// cache memory management
/**
* Registers cache memory (date-keyed, for the L3 DailyTrackCache)
* @return true: registered within budget, false: budget exceeded
*/
public boolean registerCacheMemory(LocalDate date, long bytes) {
return registerCacheMemory("daily::" + date, bytes);
}
/**
* Registers cache memory (key-based, for the L1/L2 Caffeine buckets)
*/
public boolean registerCacheMemory(String key, long bytes) {
long budgetBytes = (long) properties.getCacheBudgetGb() * 1024 * 1024 * 1024;
long currentUsed = cacheUsedBytes.get();
if (currentUsed + bytes > budgetBytes) {
log.warn("[MemoryBudget] 캐시 예산 초과: key={}, requested={}MB, used={}MB, budget={}MB",
key, bytes / (1024 * 1024), currentUsed / (1024 * 1024), budgetBytes / (1024 * 1024));
return false;
}
Long previous = cacheAllocations.put(key, bytes);
if (previous != null) {
cacheUsedBytes.addAndGet(bytes - previous);
} else {
cacheUsedBytes.addAndGet(bytes);
}
return true;
}
/**
* Releases cache memory (date-keyed)
*/
public void releaseCacheMemory(LocalDate date) {
releaseCacheMemory("daily::" + date);
}
/**
* Releases cache memory (key-based)
*/
public void releaseCacheMemory(String key) {
Long released = cacheAllocations.remove(key);
if (released != null) {
cacheUsedBytes.addAndGet(-released);
}
}
public long getAvailableCacheBudget() {
long budgetBytes = (long) properties.getCacheBudgetGb() * 1024 * 1024 * 1024;
return Math.max(0, budgetBytes - cacheUsedBytes.get());
}
// query memory management (with FIFO waiting)
/**
* Reserves query memory; waits FIFO when the budget is short
*
* @param queryId query identifier
* @param estimatedBytes estimated memory (raw value, before correction)
* @param maxWaitMs maximum wait time (milliseconds)
* @throws MemoryBudgetExceededException single-query cap exceeded or wait timed out
*/
public void reserveQueryMemory(String queryId, long estimatedBytes, long maxWaitMs) {
long correctedBytes = applyCorrection(estimatedBytes);
long maxSingleBytes = (long) properties.getMaxSingleQueryGb() * 1024 * 1024 * 1024;
// single-query cap check
if (correctedBytes > maxSingleBytes) {
throw new MemoryBudgetExceededException(
String.format("단일 쿼리 메모리 상한 초과: estimated=%dMB, max=%dMB",
correctedBytes / (1024 * 1024), maxSingleBytes / (1024 * 1024)));
}
queryLock.lock();
try {
// check whether the reservation fits immediately
if (canReserveQuery(correctedBytes)) {
doReserve(queryId, correctedBytes);
return;
}
// enter the wait queue
waitingQueryCount.incrementAndGet();
long deadline = System.nanoTime() + maxWaitMs * 1_000_000L;
try {
while (!canReserveQuery(correctedBytes)) {
long remainingNanos = deadline - System.nanoTime();
if (remainingNanos <= 0) {
throw new MemoryBudgetExceededException(
String.format("쿼리 메모리 대기 타임아웃: %dms, queryUsed=%dMB, budget=%dMB",
maxWaitMs, queryUsedBytes.get() / (1024 * 1024),
(long) properties.getQueryBudgetGb() * 1024));
}
queryBudgetAvailable.awaitNanos(remainingNanos);
}
doReserve(queryId, correctedBytes);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
throw new MemoryBudgetExceededException("쿼리 메모리 대기 중 인터럽트 발생");
} finally {
waitingQueryCount.decrementAndGet();
}
} finally {
queryLock.unlock();
}
}
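The reserve path above combines a fair lock, a deadline, and a condition re-check loop. A minimal sketch of that protocol with the correction factor and per-query bookkeeping stripped out; class and method names are illustrative:

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class BudgetGateSketch {
    private final long budgetBytes;
    private long usedBytes = 0;
    private final ReentrantLock lock = new ReentrantLock(true); // fair: roughly FIFO wakeups
    private final Condition available = lock.newCondition();

    BudgetGateSketch(long budgetBytes) { this.budgetBytes = budgetBytes; }

    /** Block up to maxWaitMs until `bytes` fits in the budget; false on timeout/interrupt. */
    boolean reserve(long bytes, long maxWaitMs) {
        lock.lock();
        try {
            long deadline = System.nanoTime() + maxWaitMs * 1_000_000L;
            while (usedBytes + bytes > budgetBytes) {
                long remaining = deadline - System.nanoTime();
                if (remaining <= 0) return false; // timeout: caller rejects the query
                try {
                    available.awaitNanos(remaining);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return false; // interrupted: treat as rejection
                }
            }
            usedBytes += bytes;
            return true;
        } finally {
            lock.unlock();
        }
    }

    /** Return reserved bytes and wake all waiters to re-check the budget. */
    void release(long bytes) {
        lock.lock();
        try {
            usedBytes -= bytes;
            available.signalAll();
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        BudgetGateSketch gate = new BudgetGateSketch(100);
        System.out.println(gate.reserve(60, 10)); // true
        System.out.println(gate.reserve(60, 10)); // false: 120 > 100, times out
        gate.release(60);
        System.out.println(gate.reserve(60, 10)); // true again
    }
}
```

`signalAll()` plus a `while` re-check is what makes partial releases safe: every waiter re-evaluates whether its own request now fits, and the fair lock keeps re-contention roughly in arrival order.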
/**
* Releases query memory and signals waiting queries
*/
public void releaseQueryMemory(String queryId) {
Long released = queryAllocations.remove(queryId);
if (released != null) {
queryUsedBytes.addAndGet(-released);
queryLock.lock();
try {
queryBudgetAvailable.signalAll();
} finally {
queryLock.unlock();
}
log.debug("[MemoryBudget] 쿼리 메모리 해제: queryId={}, released={}MB, remaining={}MB",
queryId, released / (1024 * 1024), queryUsedBytes.get() / (1024 * 1024));
}
}
/**
* Mid-flight query memory update (when actual usage differs from the estimate)
*/
public void updateQueryMemory(String queryId, long actualBytes) {
long corrected = applyCorrection(actualBytes);
Long previous = queryAllocations.put(queryId, corrected);
if (previous != null) {
queryUsedBytes.addAndGet(corrected - previous);
}
}
// monitoring
/**
* Budget status (for the monitoring API)
*/
public Map<String, Object> getBudgetStatus() {
Map<String, Object> status = new LinkedHashMap<>();
long cacheUsed = cacheUsedBytes.get();
long queryUsed = queryUsedBytes.get();
long totalUsed = cacheUsed + queryUsed;
long cacheBudget = (long) properties.getCacheBudgetGb() * 1024 * 1024 * 1024;
long queryBudget = (long) properties.getQueryBudgetGb() * 1024 * 1024 * 1024;
// total
Map<String, Object> total = new LinkedHashMap<>();
total.put("totalGb", properties.getTotalBudgetGb());
total.put("usedMb", totalUsed / (1024 * 1024));
total.put("usagePercent", String.format("%.1f", totalUsed * 100.0 / ((long) properties.getTotalBudgetGb() * 1024 * 1024 * 1024)));
total.put("status", getUsageStatus(totalUsed, (long) properties.getTotalBudgetGb() * 1024 * 1024 * 1024));
status.put("totalBudget", total);
// cache
Map<String, Object> cacheInfo = new LinkedHashMap<>();
cacheInfo.put("budgetGb", properties.getCacheBudgetGb());
cacheInfo.put("usedMb", cacheUsed / (1024 * 1024));
cacheInfo.put("usagePercent", cacheBudget > 0 ? String.format("%.1f", cacheUsed * 100.0 / cacheBudget) : "0.0");
cacheInfo.put("allocations", cacheAllocations.size());
status.put("cacheBudget", cacheInfo);
// query
Map<String, Object> queryInfo = new LinkedHashMap<>();
queryInfo.put("budgetGb", properties.getQueryBudgetGb());
queryInfo.put("usedMb", queryUsed / (1024 * 1024));
queryInfo.put("usagePercent", queryBudget > 0 ? String.format("%.1f", queryUsed * 100.0 / queryBudget) : "0.0");
queryInfo.put("activeReservations", queryAllocations.size());
queryInfo.put("waitingCount", waitingQueryCount.get());
status.put("queryBudget", queryInfo);
// JVM
Runtime runtime = Runtime.getRuntime();
long usedHeap = runtime.totalMemory() - runtime.freeMemory();
long maxHeap = runtime.maxMemory();
Map<String, Object> heap = new LinkedHashMap<>();
heap.put("usedMb", usedHeap / (1024 * 1024));
heap.put("maxMb", maxHeap / (1024 * 1024));
heap.put("usagePercent", String.format("%.1f", usedHeap * 100.0 / maxHeap));
status.put("heapInfo", heap);
return status;
}
public boolean isBudgetPressureHigh() {
long totalUsed = cacheUsedBytes.get() + queryUsedBytes.get();
long totalBudget = (long) properties.getTotalBudgetGb() * 1024 * 1024 * 1024;
double ratio = (double) totalUsed / totalBudget;
if (ratio >= properties.getWarningThreshold()) {
logBudgetPressure(ratio);
return true;
}
return false;
}
// internal methods
private boolean canReserveQuery(long bytes) {
long budgetBytes = (long) properties.getQueryBudgetGb() * 1024 * 1024 * 1024;
return queryUsedBytes.get() + bytes <= budgetBytes;
}
private void doReserve(String queryId, long correctedBytes) {
queryAllocations.put(queryId, correctedBytes);
queryUsedBytes.addAndGet(correctedBytes);
log.debug("[MemoryBudget] 쿼리 메모리 예약: queryId={}, reserved={}MB, queryTotal={}MB",
queryId, correctedBytes / (1024 * 1024), queryUsedBytes.get() / (1024 * 1024));
}
private long applyCorrection(long rawEstimate) {
return (long) (rawEstimate * properties.getEstimationCorrectionFactor());
}
private String getUsageStatus(long used, long total) {
if (total == 0) return "UNKNOWN";
double ratio = (double) used / total;
if (ratio >= properties.getCriticalThreshold()) return "CRITICAL";
if (ratio >= properties.getWarningThreshold()) return "WARNING";
return "NORMAL";
}
private void logBudgetPressure(double ratio) {
long now = System.currentTimeMillis();
if (now - lastPressureLogTime > 5000) {
lastPressureLogTime = now;
log.warn("[MemoryBudget] 예산 압박: usage={}, cache={}MB, query={}MB, waiting={}",
String.format("%.1f%%", ratio * 100),
cacheUsedBytes.get() / (1024 * 1024),
queryUsedBytes.get() / (1024 * 1024),
waitingQueryCount.get());
}
}
}

View File

@ -5,7 +5,6 @@ import gc.mda.signal_batch.batch.reader.FiveMinTrackCache;
import gc.mda.signal_batch.batch.reader.HourlyTrackCache;
import gc.mda.signal_batch.domain.vessel.service.VesselLatestPositionCache;
import gc.mda.signal_batch.global.websocket.service.DailyTrackCacheManager;
import gc.mda.signal_batch.global.websocket.service.TrackMemoryBudgetManager;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.tags.Tag;
import lombok.extern.slf4j.Slf4j;
@ -46,9 +45,6 @@ public class CacheMonitoringController {
@Autowired(required = false)
private VesselLatestPositionCache latestPositionCache;
@Autowired
private TrackMemoryBudgetManager memoryBudgetManager;
/**
* Cache statistics (aggregate of all caches, for dashboard display)
*/
@@ -193,13 +189,4 @@ public class CacheMonitoringController {
health.put("latestPosition", latestPositionCache != null ? "UP" : "DISABLED");
return ResponseEntity.ok(health);
}
/**
 * Memory budget status (cache + query partitioning + JVM)
 */
@GetMapping("/budget")
@Operation(summary = "메모리 예산 현황", description = "캐시/쿼리 메모리 예산 사용량, 대기 큐, JVM 힙 정보를 조회합니다")
public ResponseEntity<Map<String, Object>> getMemoryBudgetStatus() {
return ResponseEntity.ok(memoryBudgetManager.getBudgetStatus());
}
}


@@ -1,210 +0,0 @@
package gc.mda.signal_batch.monitoring.controller;
import gc.mda.signal_batch.monitoring.service.QueryMetricsService;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.Parameter;
import io.swagger.v3.oas.annotations.tags.Tag;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.http.ResponseEntity;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.web.bind.annotation.*;
import java.util.*;
/**
 * Query metrics API
 *
 * Provides WebSocket/REST query execution history and performance statistics.
 * Data source for the ApiMetrics frontend page.
 */
@RestController
@RequestMapping("/api/monitoring/query-metrics")
@Tag(name = "Query Metrics", description = "쿼리 실행 메트릭 조회 API")
public class QueryMetricsController {
private final QueryMetricsService queryMetricsService;
private final JdbcTemplate queryJdbcTemplate;
private static final Set<String> ALLOWED_SORT_COLUMNS = Set.of(
"created_at", "elapsed_ms", "unique_vessels", "total_points"
);
public QueryMetricsController(
QueryMetricsService queryMetricsService,
@Qualifier("queryJdbcTemplate") JdbcTemplate queryJdbcTemplate) {
this.queryMetricsService = queryMetricsService;
this.queryJdbcTemplate = queryJdbcTemplate;
}
@GetMapping
@Operation(summary = "최근 쿼리 메트릭 조회", description = "최근 N건의 쿼리 실행 메트릭을 조회합니다")
public ResponseEntity<List<Map<String, Object>>> getRecentMetrics(
@RequestParam(defaultValue = "50") int limit) {
return ResponseEntity.ok(queryMetricsService.getRecentMetrics(Math.min(limit, 200)));
}
@GetMapping("/stats")
@Operation(summary = "쿼리 메트릭 통계", description = "기간별 쿼리 성능 통계 (평균 응답시간, 캐시 비율, 느린 쿼리 등)")
public ResponseEntity<Map<String, Object>> getStats(
@RequestParam(defaultValue = "7") int days) {
return ResponseEntity.ok(queryMetricsService.getStats(Math.min(days, 90)));
}
@GetMapping("/history")
@Operation(summary = "쿼리 이력 조회 (페이지네이션)", description = "필터 + 서버사이드 페이지네이션")
public Map<String, Object> getQueryHistory(
@Parameter(description = "쿼리 유형 (WEBSOCKET, REST_V2)") @RequestParam(required = false) String queryType,
@Parameter(description = "데이터 경로 (CACHE, DB, HYBRID)") @RequestParam(required = false) String dataPath,
@Parameter(description = "상태 (COMPLETED, CANCELLED, ERROR, TIMEOUT)") @RequestParam(required = false) String status,
@Parameter(description = "응답시간 최소 (ms)") @RequestParam(required = false) Integer elapsedMsMin,
@Parameter(description = "응답시간 최대 (ms)") @RequestParam(required = false) Integer elapsedMsMax,
@Parameter(description = "페이지 번호 (0부터)") @RequestParam(defaultValue = "0") int page,
@Parameter(description = "페이지 크기") @RequestParam(defaultValue = "20") int size,
@Parameter(description = "정렬 컬럼") @RequestParam(defaultValue = "created_at") String sortBy,
@Parameter(description = "정렬 방향 (asc, desc)") @RequestParam(defaultValue = "desc") String sortDir) {
if (!ALLOWED_SORT_COLUMNS.contains(sortBy)) {
sortBy = "created_at";
}
String direction = "asc".equalsIgnoreCase(sortDir) ? "ASC" : "DESC";
size = Math.min(size, 100);
StringBuilder where = new StringBuilder("WHERE 1=1");
List<Object> params = new ArrayList<>();
if (queryType != null && !queryType.isEmpty()) {
where.append(" AND query_type = ?");
params.add(queryType);
}
if (dataPath != null && !dataPath.isEmpty()) {
where.append(" AND data_path = ?");
params.add(dataPath);
}
if (status != null && !status.isEmpty()) {
where.append(" AND status = ?");
params.add(status);
}
if (elapsedMsMin != null) {
where.append(" AND elapsed_ms >= ?");
params.add(elapsedMsMin);
}
if (elapsedMsMax != null) {
where.append(" AND elapsed_ms <= ?");
params.add(elapsedMsMax);
}
String whereClause = where.toString();
// COUNT query
String countSql = "SELECT COUNT(*) FROM signal.t_query_metrics " + whereClause;
Integer totalElements = queryJdbcTemplate.queryForObject(countSql, Integer.class, params.toArray());
if (totalElements == null) totalElements = 0;
// data query
String dataSql = """
SELECT id, query_id, query_type, created_at, data_path, status,
zoom_level, requested_mmsi, unique_vessels, total_tracks,
total_points, points_after_simplify, total_chunks,
response_bytes, elapsed_ms, db_query_ms, simplify_ms,
cache_hit_days, db_query_days, client_ip, client_id
FROM signal.t_query_metrics
""" + whereClause +
" ORDER BY " + sortBy + " " + direction +
" LIMIT ? OFFSET ?";
List<Object> dataParams = new ArrayList<>(params);
dataParams.add(size);
dataParams.add(page * size);
List<Map<String, Object>> content = queryJdbcTemplate.queryForList(dataSql, dataParams.toArray());
Map<String, Object> result = new LinkedHashMap<>();
result.put("content", content);
result.put("totalElements", totalElements);
result.put("totalPages", (int) Math.ceil((double) totalElements / size));
result.put("currentPage", page);
result.put("pageSize", size);
return result;
}
@GetMapping("/summary")
@Operation(summary = "쿼리 메트릭 요약", description = "최근 N시간 요약 통계 (P95 포함)")
public Map<String, Object> getSummary(
@Parameter(description = "조회 기간 (시간)") @RequestParam(defaultValue = "24") int hours) {
String sql = """
SELECT
COUNT(*) as total_queries,
COALESCE(AVG(elapsed_ms), 0) as avg_elapsed_ms,
COALESCE(PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY elapsed_ms), 0) as p95_elapsed_ms,
COALESCE(MAX(elapsed_ms), 0) as max_elapsed_ms,
COUNT(CASE WHEN query_type = 'WEBSOCKET' THEN 1 END) as ws_count,
COUNT(CASE WHEN query_type LIKE 'REST%%' THEN 1 END) as rest_count,
COUNT(CASE WHEN data_path = 'CACHE' THEN 1 END) as cache_only_count,
COUNT(CASE WHEN data_path = 'DB' THEN 1 END) as db_only_count,
COUNT(CASE WHEN data_path = 'HYBRID' THEN 1 END) as hybrid_count,
COUNT(CASE WHEN status = 'COMPLETED' THEN 1 END) as completed_count,
COUNT(CASE WHEN status != 'COMPLETED' THEN 1 END) as failed_count,
COALESCE(AVG(unique_vessels), 0) as avg_vessels,
COALESCE(AVG(total_points), 0) as avg_points_before,
COALESCE(AVG(points_after_simplify), 0) as avg_points_after,
COALESCE(AVG(response_bytes), 0) as avg_response_size_bytes
FROM signal.t_query_metrics
WHERE created_at >= NOW() - INTERVAL '%d hours'
""".formatted(Math.min(hours, 720));
return queryJdbcTemplate.queryForMap(sql);
}
@GetMapping("/timeseries")
@Operation(summary = "쿼리 메트릭 시계열", description = "시간별/일별 버킷 집계 + Top 10 클라이언트")
public Map<String, Object> getTimeSeries(
@Parameter(description = "조회 기간 (일)") @RequestParam(defaultValue = "7") int days,
@Parameter(description = "Top 클라이언트 그룹 기준 (ip | id)") @RequestParam(defaultValue = "ip") String groupBy) {
days = Math.min(days, 90);
String granularity = days <= 7 ? "HOURLY" : "DAILY";
String bucketExpr = days <= 7 ? "DATE_TRUNC('hour', created_at)" : "DATE(created_at)";
String bucketSql = """
SELECT %s AS bucket,
COUNT(*) AS query_count,
COALESCE(AVG(elapsed_ms), 0) AS avg_elapsed_ms,
COALESCE(MAX(elapsed_ms), 0) AS max_elapsed_ms,
COALESCE(AVG(response_bytes), 0) AS avg_response_bytes,
COUNT(CASE WHEN query_type = 'WEBSOCKET' THEN 1 END) AS ws_count,
COUNT(CASE WHEN query_type LIKE 'REST%%' THEN 1 END) AS rest_count,
COUNT(CASE WHEN data_path = 'CACHE' THEN 1 END) AS cache_count,
COUNT(CASE WHEN data_path = 'DB' THEN 1 END) AS db_count,
COUNT(CASE WHEN data_path = 'HYBRID' THEN 1 END) AS hybrid_count
FROM signal.t_query_metrics
WHERE created_at >= NOW() - INTERVAL '%d days'
GROUP BY bucket ORDER BY bucket
""".formatted(bucketExpr, days);
List<Map<String, Object>> buckets = queryJdbcTemplate.queryForList(bucketSql);
boolean groupById = "id".equalsIgnoreCase(groupBy);
String clientColumn = groupById ? "client_id" : "client_ip";
String topClientsSql = """
SELECT %s AS client, COUNT(*) AS query_count,
COALESCE(AVG(elapsed_ms), 0) AS avg_elapsed_ms
FROM signal.t_query_metrics
WHERE created_at >= NOW() - INTERVAL '%d days'
AND %s IS NOT NULL
GROUP BY %s
ORDER BY query_count DESC LIMIT 10
""".formatted(clientColumn, days, clientColumn, clientColumn);
List<Map<String, Object>> topClients = queryJdbcTemplate.queryForList(topClientsSql);
Map<String, Object> result = new LinkedHashMap<>();
result.put("buckets", buckets);
result.put("topClients", topClients);
result.put("granularity", granularity);
result.put("groupBy", groupById ? "id" : "ip");
return result;
}
}
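Two guards in `getQueryHistory` above are easy to miss: sort columns are whitelisted before being concatenated into the ORDER BY clause, and total pages come from a ceiling division. An illustrative sketch (class and method names here are hypothetical, not part of the codebase):

```java
import java.util.Set;

// Sketch of the ORDER BY whitelist and totalPages math from getQueryHistory.
public class PagingSketch {
    static final Set<String> ALLOWED_SORT_COLUMNS =
            Set.of("created_at", "elapsed_ms", "unique_vessels", "total_points");

    // Unlisted columns fall back to created_at, so raw user input
    // never reaches the concatenated ORDER BY clause.
    static String safeSortColumn(String requested) {
        return ALLOWED_SORT_COLUMNS.contains(requested) ? requested : "created_at";
    }

    static int totalPages(int totalElements, int pageSize) {
        return (int) Math.ceil((double) totalElements / pageSize);
    }

    public static void main(String[] args) {
        System.out.println(safeSortColumn("elapsed_ms"));       // elapsed_ms
        System.out.println(safeSortColumn("id; DROP TABLE x")); // created_at
        System.out.println(totalPages(101, 20));                // 6
    }
}
```

The whitelist is what makes the string concatenation in the data query safe; the filter values themselves all go through `?` placeholders.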


@@ -1,153 +0,0 @@
package gc.mda.signal_batch.monitoring.service;
import jakarta.annotation.PostConstruct;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Service;
import java.sql.Timestamp;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
/**
 * Bulk-INSERT buffer service for query metrics
 *
 * Collects records lock-free via ConcurrentLinkedQueue, then flushes with batchUpdate every 10 seconds.
 * Guarantees 1 request = 1 record: WebSocket enqueues once per completed query, REST once per call.
 */
@Slf4j
@Service
public class QueryMetricsBufferService {
private static final int MAX_FLUSH_SIZE = 500;
private static final String INSERT_SQL = """
INSERT INTO signal.t_query_metrics (
query_id, session_id, query_type, created_at,
start_time, end_time, zoom_level, viewport_bounds, requested_mmsi,
data_path, cache_hit_days, db_query_days, db_conn_total,
unique_vessels, total_tracks, total_points, points_after_simplify,
total_chunks, response_bytes,
elapsed_ms, db_query_ms, simplify_ms, backpressure_events,
status, client_ip, client_id
) VALUES (
?, ?, ?, now(),
?, ?, ?, ?, ?,
?, ?, ?, ?,
?, ?, ?, ?,
?, ?,
?, ?, ?, ?,
?, ?, ?
)
""";
private final JdbcTemplate queryJdbcTemplate;
private final ConcurrentLinkedQueue<QueryMetricsService.QueryMetric> buffer = new ConcurrentLinkedQueue<>();
public QueryMetricsBufferService(
@Qualifier("queryJdbcTemplate") JdbcTemplate queryJdbcTemplate) {
this.queryJdbcTemplate = queryJdbcTemplate;
}
@PostConstruct
void ensureClientIpColumn() {
try {
queryJdbcTemplate.execute("""
DO $$
BEGIN
IF NOT EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_schema = 'signal' AND table_name = 't_query_metrics' AND column_name = 'client_ip'
) THEN
ALTER TABLE signal.t_query_metrics ADD COLUMN client_ip VARCHAR(45);
CREATE INDEX IF NOT EXISTS idx_query_metrics_client_ip ON signal.t_query_metrics(client_ip, created_at);
END IF;
END $$
""");
log.info("t_query_metrics client_ip column ensured");
} catch (Exception e) {
log.warn("Failed to ensure client_ip column: {}", e.getMessage());
}
}
@PostConstruct
void ensureClientIdColumn() {
try {
queryJdbcTemplate.execute("""
DO $$
BEGIN
IF NOT EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_schema = 'signal' AND table_name = 't_query_metrics' AND column_name = 'client_id'
) THEN
ALTER TABLE signal.t_query_metrics ADD COLUMN client_id VARCHAR(100);
CREATE INDEX IF NOT EXISTS idx_query_metrics_client_id ON signal.t_query_metrics(client_id, created_at);
END IF;
END $$
""");
log.info("t_query_metrics client_id column ensured");
} catch (Exception e) {
log.warn("Failed to ensure client_id column: {}", e.getMessage());
}
}
/**
 * Adds a metric record to the buffer (lock-free)
 */
public void enqueue(QueryMetricsService.QueryMetric metric) {
if (metric == null) return;
buffer.offer(metric);
}
/**
 * Flushes the buffer with batchUpdate every 10 seconds
 */
@Scheduled(fixedDelay = 10_000)
public void flush() {
if (buffer.isEmpty()) return;
List<QueryMetricsService.QueryMetric> batch = new ArrayList<>(MAX_FLUSH_SIZE);
QueryMetricsService.QueryMetric metric;
while (batch.size() < MAX_FLUSH_SIZE && (metric = buffer.poll()) != null) {
batch.add(metric);
}
if (batch.isEmpty()) return;
try {
List<Object[]> args = batch.stream()
.map(this::toArgs)
.toList();
queryJdbcTemplate.batchUpdate(INSERT_SQL, args);
log.debug("Flushed {} query metrics to DB (remaining: {})", batch.size(), buffer.size());
} catch (Exception e) {
log.warn("Failed to flush query metrics ({} records): {}", batch.size(), e.getMessage());
}
}
private Object[] toArgs(QueryMetricsService.QueryMetric m) {
return new Object[]{
m.getQueryId(), m.getSessionId(), m.getQueryType(),
m.getStartTime() != null ? Timestamp.valueOf(m.getStartTime()) : null,
m.getEndTime() != null ? Timestamp.valueOf(m.getEndTime()) : null,
m.getZoomLevel(), m.getViewportBounds(), m.getRequestedMmsi(),
m.getDataPath(), m.getCacheHitDays(), m.getDbQueryDays(), m.getDbConnTotal(),
m.getUniqueVessels(), m.getTotalTracks(), m.getTotalPoints(), m.getPointsAfterSimplify(),
m.getTotalChunks(), m.getResponseBytes(),
m.getElapsedMs(), m.getDbQueryMs(), m.getSimplifyMs(), m.getBackpressureEvents(),
m.getStatus(), m.getClientIp(), m.getClientId()
};
}
/**
 * Current buffer size (for monitoring)
 */
public int getBufferSize() {
return buffer.size();
}
}
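The drain loop inside `flush()` above can be isolated as a minimal sketch (names are illustrative): poll at most `MAX_FLUSH_SIZE` items per cycle so a single flush never builds an unbounded batch, while leftovers stay queued for the next scheduled run.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;

// Bounded drain of a lock-free queue, mirroring the loop in flush().
public class DrainSketch {
    static final int MAX_FLUSH_SIZE = 3; // small for illustration; the service uses 500

    static <T> List<T> drain(ConcurrentLinkedQueue<T> buffer) {
        List<T> batch = new ArrayList<>(MAX_FLUSH_SIZE);
        T item;
        // poll() returns null when the queue is empty, ending the loop early
        while (batch.size() < MAX_FLUSH_SIZE && (item = buffer.poll()) != null) {
            batch.add(item);
        }
        return batch;
    }

    public static void main(String[] args) {
        ConcurrentLinkedQueue<Integer> q = new ConcurrentLinkedQueue<>();
        for (int i = 0; i < 5; i++) q.offer(i);
        System.out.println(drain(q)); // [0, 1, 2]
        System.out.println(q.size()); // 2
    }
}
```

Because producers keep `offer()`-ing concurrently, the cap also bounds the size of each JDBC batchUpdate rather than racing the producers to empty the queue.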


@@ -1,136 +0,0 @@
package gc.mda.signal_batch.monitoring.service;
import lombok.Builder;
import lombok.Getter;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import java.time.LocalDateTime;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
/**
 * Query execution metrics read service
 *
 * Writes are handled by QueryMetricsBufferService (ConcurrentLinkedQueue + 10s batch flush).
 * This service is read-only and defines the QueryMetric DTO.
 */
@Slf4j
@Service
public class QueryMetricsService {
private final JdbcTemplate queryJdbcTemplate;
public QueryMetricsService(@Qualifier("queryJdbcTemplate") JdbcTemplate queryJdbcTemplate) {
this.queryJdbcTemplate = queryJdbcTemplate;
}
/**
 * Fetches recent query metrics
 */
public List<Map<String, Object>> getRecentMetrics(int limit) {
return queryJdbcTemplate.queryForList("""
SELECT query_id, session_id, query_type, created_at,
start_time, end_time, zoom_level, viewport_bounds,
data_path, cache_hit_days, db_query_days, db_conn_total,
unique_vessels, total_tracks, total_points, points_after_simplify,
total_chunks, response_bytes,
elapsed_ms, db_query_ms, simplify_ms, backpressure_events, status
FROM signal.t_query_metrics
ORDER BY created_at DESC
LIMIT ?
""", limit);
}
/**
 * Query metric statistics by period
 */
public Map<String, Object> getStats(int days) {
Map<String, Object> stats = new LinkedHashMap<>();
// overall statistics
Map<String, Object> summary = queryJdbcTemplate.queryForMap("""
SELECT
COUNT(*) AS total_queries,
ROUND(AVG(elapsed_ms)) AS avg_elapsed_ms,
MAX(elapsed_ms) AS max_elapsed_ms,
ROUND(AVG(unique_vessels)) AS avg_vessels,
ROUND(AVG(total_points)) AS avg_points,
SUM(CASE WHEN data_path = 'CACHE' THEN 1 ELSE 0 END) AS cache_only,
SUM(CASE WHEN data_path = 'HYBRID' THEN 1 ELSE 0 END) AS hybrid,
SUM(CASE WHEN data_path = 'DB' THEN 1 ELSE 0 END) AS db_only,
SUM(CASE WHEN status = 'COMPLETED' THEN 1 ELSE 0 END) AS completed,
SUM(CASE WHEN status = 'CANCELLED' THEN 1 ELSE 0 END) AS cancelled,
SUM(CASE WHEN status = 'ERROR' THEN 1 ELSE 0 END) AS errors,
SUM(CASE WHEN status = 'TIMEOUT' THEN 1 ELSE 0 END) AS timeouts
FROM signal.t_query_metrics
WHERE created_at >= now() - INTERVAL '%d days'
""".formatted(days));
stats.put("summary", summary);
// daily trend
List<Map<String, Object>> daily = queryJdbcTemplate.queryForList("""
SELECT
DATE(created_at) AS date,
COUNT(*) AS query_count,
ROUND(AVG(elapsed_ms)) AS avg_elapsed_ms,
ROUND(AVG(unique_vessels)) AS avg_vessels,
SUM(CASE WHEN status = 'COMPLETED' THEN 1 ELSE 0 END) AS completed,
SUM(CASE WHEN status != 'COMPLETED' THEN 1 ELSE 0 END) AS failed
FROM signal.t_query_metrics
WHERE created_at >= now() - INTERVAL '%d days'
GROUP BY DATE(created_at)
ORDER BY date DESC
""".formatted(days));
stats.put("dailyTrend", daily);
// top 10 slowest queries
List<Map<String, Object>> slowQueries = queryJdbcTemplate.queryForList("""
SELECT query_id, created_at, elapsed_ms, unique_vessels, total_points,
data_path, db_conn_total, zoom_level, status
FROM signal.t_query_metrics
WHERE created_at >= now() - INTERVAL '%d days'
ORDER BY elapsed_ms DESC
LIMIT 10
""".formatted(days));
stats.put("slowQueries", slowQueries);
return stats;
}
/**
 * Query metric data class
 */
@Getter
@Builder
public static class QueryMetric {
private final String queryId;
private final String sessionId;
private final String queryType;
private final LocalDateTime startTime;
private final LocalDateTime endTime;
private final Integer zoomLevel;
private final String viewportBounds;
private final int requestedMmsi;
private final String dataPath;
private final int cacheHitDays;
private final int dbQueryDays;
private final int dbConnTotal;
private final int uniqueVessels;
private final int totalTracks;
private final int totalPoints;
private final int pointsAfterSimplify;
private final int totalChunks;
private final long responseBytes;
private final long elapsedMs;
private final long dbQueryMs;
private final long simplifyMs;
private final int backpressureEvents;
private final String status;
private final String clientIp;
private final String clientId;
}
}


@@ -48,7 +48,7 @@ spring:
validation-timeout: 5000
leak-detection-threshold: 60000 # connection leak detection (60s)
# Explicitly add the public schema to search_path so PostGIS functions resolve
connection-init-sql: "SET TIME ZONE 'Asia/Seoul'; SET search_path TO signal, public, pg_catalog; SET work_mem = '256MB'; SET synchronous_commit = 'off';"
connection-init-sql: "SET TIME ZONE 'Asia/Seoul'; SET search_path TO signal, public, pg_catalog;"
statement-cache-size: 250
data-source-properties:
prepareThreshold: 3
@@ -68,7 +68,7 @@ spring:
idle-timeout: 600000
max-lifetime: 1800000
leak-detection-threshold: 60000 # connection leak detection (60s)
connection-init-sql: "SET TIME ZONE 'Asia/Seoul'; SET search_path TO signal, public; SET synchronous_commit = 'off';"
connection-init-sql: "SET TIME ZONE 'Asia/Seoul'; SET search_path TO signal, public;"
# request size settings
servlet:
@@ -87,12 +87,19 @@ spring:
logging:
level:
root: INFO
gc.mda.signal_batch: INFO
gc.mda.signal_batch.monitoring: INFO
org.springframework.batch: WARN
gc.mda.signal_batch: DEBUG
gc.mda.signal_batch.global.util: INFO
gc.mda.signal_batch.global.websocket.service: INFO
gc.mda.signal_batch.batch.writer: INFO
gc.mda.signal_batch.batch.reader: INFO
gc.mda.signal_batch.batch.processor: INFO
gc.mda.signal_batch.domain: INFO
gc.mda.signal_batch.monitoring: DEBUG
gc.mda.signal_batch.monitoring.controller: INFO
org.springframework.batch: INFO
org.springframework.jdbc: WARN
org.postgresql: WARN
com.zaxxer.hikari: WARN
com.zaxxer.hikari: INFO
# dev-environment batch settings (performance tuning)
vessel: # top level, not under spring
@@ -173,7 +180,6 @@ vessel: # top level, not under spring
# abnormal track detection settings
track:
include-abnormal-in-tracks: true # include abnormal tracks in the normal table+cache (for RL training data collection)
abnormal-detection:
large-gap-threshold-hours: 4 # gaps of this length or longer are not connected
extreme-speed-threshold: 1000 # speeds at or above this are always abnormal (knots)
@@ -205,9 +211,6 @@ vessel: # top level, not under spring
max-size: 60000 # up to 60,000 vessels
refresh-interval-minutes: 2 # query 2 minutes of data (allows for ingest lag)
# L2 HourlyTrackCache simplification (enabled in production)
hourly-simplification:
enabled: true # production: enabled
# abnormal track detection settings (improved)
abnormal-detection:
@@ -261,10 +264,8 @@ vessel: # top level, not under spring
retention-days: 60 # per-zone vessel tracks: 60 days
t_grid_vessel_tracks:
retention-days: 30 # per-grid vessel tracks: 30 days
t_vessel_tracks_daily:
retention-months: 0 # daily tracks: keep permanently
t_abnormal_tracks:
retention-months: 0 # abnormal tracks: keep permanently
retention-months: 0 # abnormal tracks: keep indefinitely
# S&P AIS API cache TTL (production: 120 min)
app:
@ -272,29 +273,17 @@ app:
ais-target:
ttl-minutes: 120
ais-api:
username: 86b30c84-5d17-41ac-8c4f-2aa20d791114
password: KHZQVc2tMBGtNxvG
username: 7cc0517d-5ed6-452e-a06f-5bbfd6ab6ade
password: 2LLzSJNqtxWVD8zC
# in-memory cache for daily track data
cache:
daily-track:
enabled: true
retention-days: 14 # D-1 to D-14 (2 weeks; DP simplification cuts memory)
max-memory-gb: 10 # max 10GB (~400MB/day after DP simplification × 14 days ≈ 6GB + headroom)
retention-days: 7 # D-1 to D-7 (excluding today)
max-memory-gb: 6 # max 6GB (~720MB/day avg × 7 days = ~5GB)
warmup-async: true # async warm-up (does not block server startup)
# track data memory budget (based on a 64GB JVM)
track:
memory-budget:
total-budget-gb: 64 # total JVM heap
cache-budget-gb: 35 # L1+L2+L3 caches (L3 5GB + L2 ~14GB + L1 ~3GB + 13GB headroom)
query-budget-gb: 20 # concurrent REST/WebSocket queries (60 concurrent × ~300MB)
max-single-query-gb: 5 # single-query cap
estimation-correction-factor: 1.8 # correction factor from measurements
queue-timeout-seconds: 60
warning-threshold: 0.8
critical-threshold: 0.95
# WebSocket load-control settings
websocket:
query:


@@ -159,8 +159,6 @@ vessel:
page-size: ${BATCH_PAGE_SIZE:10000}
partition-size: ${BATCH_PARTITION_SIZE:24}
skip-limit: 100
track:
include-abnormal-in-tracks: false # true: include abnormal tracks in the normal table+cache (for RL training data collection)
retry-limit: 3
# reader settings
use-cursor-reader: true # whether to use the cursor reader
@@ -274,13 +272,6 @@ vessel:
ttl-minutes: 120 # cache TTL: 120 min (satellite AIS arrives at 30–60 min intervals)
max-size: 100000 # max vessels: 100,000 (allows 2h accumulation)
# L2 HourlyTrackCache simplification settings
hourly-simplification:
enabled: false # default: disabled (enabled per profile)
cron: "0 30 6,12,18 * * *" # runs at 06:30, 12:30, 18:30
hours-ago: 6 # targets entries older than 6 hours
sample-rate: 2 # keep only every 2nd point (~50% reduction)
# ==================== S&P Global AIS API settings ====================
app:
ais-api:
@@ -293,7 +284,7 @@ app:
cache:
ais-target:
ttl-minutes: 120 # default TTL (overridden per profile)
max-size: 500000 # max cache size (500k entries)
max-size: 300000 # max cache size (300k entries)
five-min-track:
ttl-minutes: 75 # TTL 75 min (1 hour + 15 min margin)
@@ -310,18 +301,6 @@ app:
warmup-enabled: true
warmup-days: 7
# track data memory budget (logical partitioning)
track:
memory-budget:
total-budget-gb: 64 # total JVM heap budget
cache-budget-gb: 35 # L1/L2/L3 caches (55%)
query-budget-gb: 20 # concurrent REST/WebSocket queries (31%)
max-single-query-gb: 5 # single-query cap
estimation-correction-factor: 1.8 # memory-estimation correction factor
queue-timeout-seconds: 60 # query wait-queue timeout
warning-threshold: 0.8 # budget warning threshold (80%)
critical-threshold: 0.95 # budget critical threshold (95%)
# Swagger/OpenAPI settings
springdoc:
api-docs:


@@ -1,54 +0,0 @@
-- Query execution metrics table
-- Records WebSocket/REST query performance metrics, consumed by the ApiMetrics page
CREATE TABLE IF NOT EXISTS signal.t_query_metrics (
id BIGSERIAL PRIMARY KEY,
query_id VARCHAR(64) NOT NULL,
session_id VARCHAR(64),
query_type VARCHAR(20) NOT NULL, -- 'WEBSOCKET' | 'REST_V1' | 'REST_V2'
created_at TIMESTAMP NOT NULL DEFAULT now(),
-- request parameters
start_time TIMESTAMP,
end_time TIMESTAMP,
zoom_level INTEGER,
viewport_bounds VARCHAR(200), -- "minLon,minLat,maxLon,maxLat"
requested_mmsi INTEGER DEFAULT 0,
-- processing path
data_path VARCHAR(10), -- 'CACHE' | 'DB' | 'HYBRID'
cache_hit_days INTEGER DEFAULT 0,
db_query_days INTEGER DEFAULT 0,
db_conn_total INTEGER DEFAULT 0,
-- result statistics
unique_vessels INTEGER DEFAULT 0,
total_tracks INTEGER DEFAULT 0,
total_points INTEGER DEFAULT 0,
points_after_simplify INTEGER DEFAULT 0,
total_chunks INTEGER DEFAULT 0,
response_bytes BIGINT DEFAULT 0,
-- performance
elapsed_ms BIGINT DEFAULT 0,
db_query_ms BIGINT DEFAULT 0,
simplify_ms BIGINT DEFAULT 0,
backpressure_events INTEGER DEFAULT 0,
-- result status
status VARCHAR(20) DEFAULT 'COMPLETED' -- 'COMPLETED' | 'CANCELLED' | 'ERROR' | 'TIMEOUT'
);
-- add client_ip column (idempotent)
DO $$
BEGIN
IF NOT EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_schema = 'signal' AND table_name = 't_query_metrics' AND column_name = 'client_ip'
) THEN
ALTER TABLE signal.t_query_metrics ADD COLUMN client_ip VARCHAR(45);
END IF;
END $$;
CREATE INDEX IF NOT EXISTS idx_query_metrics_created ON signal.t_query_metrics(created_at);
CREATE INDEX IF NOT EXISTS idx_query_metrics_type ON signal.t_query_metrics(query_type, created_at);
CREATE INDEX IF NOT EXISTS idx_query_metrics_client_ip ON signal.t_query_metrics(client_ip, created_at);


@@ -12,25 +12,6 @@ import static org.junit.jupiter.api.Assertions.*;
class SignalKindCodeTest {
@Nested
@DisplayName("shipName 기반 BUOY 검출")
class ShipNameBuoy {
@Test
@DisplayName("'.' 또는 '_' 2개 이상 → BUOY (vesselType 무시)")
void resolve_buoyByName() {
assertEquals("000028", SignalKindCode.resolve("Cargo", null, "BUOY_01_23").getCode());
assertEquals("000028", SignalKindCode.resolve("Tanker", null, "AIS.BUOY.01").getCode());
}
@Test
@DisplayName("'.' 또는 '_' 1개 이하 → vesselType 기준")
void resolve_notBuoyByName() {
assertEquals("000023", SignalKindCode.resolve("Cargo", null, "M.V CARGO").getCode());
assertEquals("000024", SignalKindCode.resolve("Tanker", null, "OIL_TANKER").getCode());
}
}
@Nested
@DisplayName("vesselType 단독 매칭")
class VesselTypeDirect {
@@ -40,7 +21,7 @@ class SignalKindCodeTest {
"Cargo, 000023",
"Tanker, 000024",
"Passenger, 000022",
"AtoN, 000027",
"AtoN, 000028",
"Law Enforcement, 000025",
"Search and Rescue, 000021",
"Local Vessel, 000020"
@@ -57,11 +38,11 @@
@ParameterizedTest
@CsvSource({
"Tug, 000025",
"Pilot Boat, 000025",
"Tender, 000025",
"Anti Pollution, 000025",
"Medical Transport, 000025",
"Tug, 000027",
"Tender, 000027",
"High Speed Craft, 000022",
"Wing in Ground-effect, 000022"
})
@@ -79,13 +60,13 @@
@CsvSource({
"Vessel, Fishing, 000020",
"Vessel, Military Operations, 000025",
"Vessel, Towing, 000027",
"Vessel, Towing (Large), 000027",
"Vessel, Dredging/Underwater Ops, 000027",
"Vessel, Diving Operations, 000027",
"Vessel, Pleasure Craft, 000027",
"Vessel, Sailing, 000027",
"Vessel, N/A, 000027",
"Vessel, Towing, 000025",
"Vessel, Towing (Large), 000025",
"Vessel, Dredging/Underwater Ops, 000025",
"Vessel, Diving Operations, 000025",
"Vessel, Pleasure Craft, 000020",
"Vessel, Sailing, 000020",
"Vessel, N/A, 000020",
"Vessel, Hazardous Cat A, 000023",
"Vessel, Hazardous Cat B, 000023",
"Vessel, Unknown, 000027"


@@ -14,34 +14,6 @@ import static org.assertj.core.api.Assertions.assertThat;
@DisplayName("SignalKindCode - MDA 선종 범례코드 치환")
class SignalKindCodeTest {
@Nested
@DisplayName("shipName 기반 BUOY 검출 (최우선)")
class ShipNameBuoy {
@ParameterizedTest(name = "shipName={0} → BUOY")
@ValueSource(strings = {"BUOY_01_23", "AIS.BUOY.01", "LIGHT__HOUSE", "A.B.C"})
@DisplayName("'.' 또는 '_' 2개 이상 → BUOY")
void resolve_buoyByName(String shipName) {
assertThat(SignalKindCode.resolve("Cargo", null, shipName))
.isEqualTo(SignalKindCode.BUOY);
}
@ParameterizedTest(name = "shipName={0} → vesselType 기준")
@ValueSource(strings = {"M.V CARGO", "SHIP_ONE", "NORMAL SHIP", "ABC"})
@DisplayName("'.' 또는 '_' 1개 이하 → shipName 무시, vesselType 기준")
void resolve_notBuoyByName(String shipName) {
assertThat(SignalKindCode.resolve("Cargo", null, shipName))
.isEqualTo(SignalKindCode.CARGO);
}
@Test
@DisplayName("shipName null → vesselType 기준")
void resolve_nullShipName() {
assertThat(SignalKindCode.resolve("Cargo", null, null))
.isEqualTo(SignalKindCode.CARGO);
}
}
@Nested
@DisplayName("vesselType 단독 매칭")
class VesselTypeDirect {
@@ -51,6 +23,7 @@ class SignalKindCodeTest {
"cargo, CARGO",
"tanker, TANKER",
"passenger, FERRY",
"aton, BUOY",
"law enforcement, GOV",
"search and rescue, KCGV",
"local vessel, FISHING"
@@ -60,12 +33,6 @@
SignalKindCode result = SignalKindCode.resolve(vesselType, null);
assertThat(result.name()).isEqualTo(expectedName);
}
@Test
@DisplayName("aton → DEFAULT (부이가 아닌 일반 장비)")
void resolve_aton() {
assertThat(SignalKindCode.resolve("aton", null)).isEqualTo(SignalKindCode.DEFAULT);
}
}
@Nested
@@ -73,19 +40,12 @@
class VesselTypeGroup {
@ParameterizedTest(name = "vesselType={0} → GOV")
@ValueSource(strings = {"pilot boat", "anti pollution", "medical transport"})
@ValueSource(strings = {"tug", "pilot boat", "tender", "anti pollution", "medical transport"})
@DisplayName("GOV 그룹 매칭")
void resolve_govGroup(String vesselType) {
assertThat(SignalKindCode.resolve(vesselType, null)).isEqualTo(SignalKindCode.GOV);
}
@ParameterizedTest(name = "vesselType={0} → DEFAULT")
@ValueSource(strings = {"tug", "tender"})
@DisplayName("tug, tender → DEFAULT")
void resolve_tugTenderDefault(String vesselType) {
assertThat(SignalKindCode.resolve(vesselType, null)).isEqualTo(SignalKindCode.DEFAULT);
}
@ParameterizedTest(name = "vesselType={0} → FERRY")
@ValueSource(strings = {"high speed craft", "wing in ground-effect"})
@DisplayName("FERRY 그룹 매칭")
@@ -110,18 +70,18 @@
assertThat(SignalKindCode.resolve("Vessel", "Military Operations")).isEqualTo(SignalKindCode.GOV);
}
@ParameterizedTest(name = "Vessel + {0} → DEFAULT")
@ParameterizedTest(name = "Vessel + {0} → GOV")
@ValueSource(strings = {"towing", "towing (large)", "dredging/underwater ops", "diving operations"})
@DisplayName("Vessel + 해양작업 → DEFAULT")
@DisplayName("Vessel + 해양작업 → GOV")
void resolve_vesselMarineOps(String extraInfo) {
assertThat(SignalKindCode.resolve("Vessel", extraInfo)).isEqualTo(SignalKindCode.DEFAULT);
assertThat(SignalKindCode.resolve("Vessel", extraInfo)).isEqualTo(SignalKindCode.GOV);
}
@ParameterizedTest(name = "Vessel + {0} → DEFAULT")
@ParameterizedTest(name = "Vessel + {0} → FISHING")
@ValueSource(strings = {"pleasure craft", "sailing", "n/a"})
@DisplayName("Vessel + 레저/기타 → DEFAULT")
@DisplayName("Vessel + 레저/기타 → FISHING")
void resolve_vesselLeisure(String extraInfo) {
assertThat(SignalKindCode.resolve("Vessel", extraInfo)).isEqualTo(SignalKindCode.DEFAULT);
assertThat(SignalKindCode.resolve("Vessel", extraInfo)).isEqualTo(SignalKindCode.FISHING);
}
@Test
@@ -204,32 +164,4 @@
assertThat(SignalKindCode.BUOY.getCode()).isEqualTo("000028");
}
}
@Nested
@DisplayName("shipName BUOY 판정 (resolve 3-param 통합 검증)")
class BuoyNamePattern {
@ParameterizedTest(name = "{0} → BUOY")
@ValueSource(strings = {"A.B.C", "BUOY_01_02", "._", "A.B_C"})
@DisplayName("2개 이상 특수문자 → BUOY")
void resolve_buoyPattern(String name) {
// vesselType과 무관하게 BUOY로 치환
assertThat(SignalKindCode.resolve(null, null, name)).isEqualTo(SignalKindCode.BUOY);
}
@ParameterizedTest(name = "{0} → not BUOY")
@ValueSource(strings = {"ABC", "A.B", "A_B", "NORMAL"})
@DisplayName("1개 이하 특수문자 → shipName 무시")
void resolve_notBuoyPattern(String name) {
assertThat(SignalKindCode.resolve(null, null, name)).isEqualTo(SignalKindCode.DEFAULT);
}
@Test
@DisplayName("null/blank shipName → vesselType 기준")
void resolve_nullBlankName() {
assertThat(SignalKindCode.resolve("Cargo", null, null)).isEqualTo(SignalKindCode.CARGO);
assertThat(SignalKindCode.resolve("Cargo", null, "")).isEqualTo(SignalKindCode.CARGO);
assertThat(SignalKindCode.resolve("Cargo", null, " ")).isEqualTo(SignalKindCode.CARGO);
}
}
}