diff --git a/.claude/commands/analyze-batch.md b/.claude/commands/analyze-batch.md
new file mode 100644
index 0000000..b14ae2e
--- /dev/null
+++ b/.claude/commands/analyze-batch.md
@@ -0,0 +1,70 @@
+# /analyze-batch - 배치 작업 분석
+
+Spring Batch 작업 관련 코드를 분석하고 진단합니다.
+
+## 분석 대상
+
+### 1. Job 구성 분석
+다음 파일들을 확인하세요:
+- `src/main/java/**/config/` - 배치 설정
+- `src/main/java/**/job/` - Job 정의
+- Job, Step, Reader, Processor, Writer 구성
+
+### 2. 스케줄링 설정
+- @Scheduled 어노테이션 사용 현황
+- Quartz 또는 다른 스케줄러 설정
+- Cron 표현식 분석
+
+### 3. 데이터 처리 패턴
+- ItemReader 구현 (DB, File, API 등)
+- ItemProcessor 로직
+- ItemWriter 구현 (bulk insert, 파일 출력 등)
+- Chunk 크기 설정
+
+### 4. 에러 처리
+- Skip 정책
+- Retry 정책
+- Listener 구현 (JobExecutionListener, StepExecutionListener)
+
+### 5. 성능 분석
+- Chunk 크기 적절성
+- 병렬 처리 설정 (Partitioning, Multi-threading)
+- 커넥션 풀 설정
+
+## 출력 형식
+
+```markdown
+## 배치 작업 분석 결과
+
+### Job 목록
+| Job 이름 | Step 수 | 스케줄 | 설명 |
+|----------|---------|--------|------|
+| xxxJob | 3 | 0 0 * * * | ... |
+
+### 데이터 흐름
+```
+[Reader] → [Processor] → [Writer]
+ ↓ ↓ ↓
+[데이터소스] [변환로직] [목적지]
+```
+
+### 에러 처리 설정
+- Skip 정책: [설정 내용]
+- Retry 정책: [설정 내용]
+
+### 성능 설정
+- Chunk 크기: [값]
+- 병렬 처리: [설정 여부]
+
+### 개선 제안
+1. [제안1]
+2. [제안2]
+```
+
+## 인자
+
+`$ARGUMENTS`: 특정 Job 이름이나 키워드
+
+예시:
+- `/analyze-batch` - 전체 분석
+- `/analyze-batch signal` - 신호 관련 배치만 분석
diff --git a/.claude/commands/build-check.md b/.claude/commands/build-check.md
new file mode 100644
index 0000000..0e02005
--- /dev/null
+++ b/.claude/commands/build-check.md
@@ -0,0 +1,64 @@
+# /build-check - 빌드 및 테스트 체크
+
+Maven 프로젝트의 빌드 상태와 테스트 결과를 점검합니다.
+
+## 실행 작업
+
+### 1. 컴파일 체크
+```bash
+mvn clean compile -DskipTests
+```
+- 컴파일 에러 확인
+- 의존성 문제 확인
+
+### 2. 테스트 실행 (선택적)
+```bash
+mvn test
+```
+- 단위 테스트 결과
+- 실패한 테스트 분석
+
+### 3. 패키지 빌드 (선택적)
+```bash
+mvn package -DskipTests
+```
+- JAR/WAR 생성 확인
+- 빌드 아티팩트 확인
+
+## 출력 형식
+
+```markdown
+## Build Check 결과
+
+### 컴파일
+- 상태: [성공/실패]
+- 에러 (있다면): [에러 내용]
+
+### 테스트
+- 상태: [성공/실패/스킵]
+- 통과: [N]개
+- 실패: [N]개
+- 실패한 테스트 (있다면):
+ - [테스트명]: [실패 원인]
+
+### 패키지
+- 상태: [성공/실패/스킵]
+- 아티팩트: [파일 경로]
+
+### 권장 조치
+1. [조치1]
+2. [조치2]
+```
+
+## 인자
+
+`$ARGUMENTS`: 옵션 지정
+- `compile` - 컴파일만
+- `test` - 컴파일 + 테스트
+- `package` - 전체 패키지 빌드
+- (없음) - 컴파일만 (기본값)
+
+예시:
+- `/build-check` - 컴파일 체크
+- `/build-check test` - 테스트 포함
+- `/build-check package` - 전체 빌드
diff --git a/.claude/commands/clarify.md b/.claude/commands/clarify.md
new file mode 100644
index 0000000..e7b0761
--- /dev/null
+++ b/.claude/commands/clarify.md
@@ -0,0 +1,66 @@
+# /clarify - 요구사항 명확화
+
+새로운 기능이나 버그 수정 요청 시 요구사항을 명확히 하기 위한 질문을 생성합니다.
+
+## 사용 시점
+
+- 사용자 요청이 모호할 때
+- 여러 구현 방법이 가능할 때
+- 비즈니스 요구사항 확인이 필요할 때
+
+## 질문 카테고리
+
+### 1. 기능 범위
+- 이 기능의 정확한 범위는 무엇인가요?
+- 어떤 서비스/컴포넌트가 이 기능을 사용하나요?
+- 기존 기능과의 관계는 어떻게 되나요?
+
+### 2. API 설계
+- REST API 엔드포인트 설계가 필요한가요?
+- 요청/응답 형식은 어떻게 되나요?
+- 기존 API 패턴을 따르나요?
+
+### 3. 데이터
+- 어떤 데이터가 필요한가요?
+- 데이터 소스는 무엇인가요? (DB, 외부 API, 파일)
+- 데이터 영속성이 필요한가요?
+
+### 4. 에러 처리
+- 예상되는 에러 케이스는 무엇인가요?
+- 에러 시 어떻게 처리해야 하나요? (재시도, 로깅, 알림)
+
+### 5. 성능
+- 예상 데이터 양은 얼마나 되나요?
+- 배치 처리가 필요한가요?
+- 성능 요구사항이 있나요?
+
+### 6. 배포/환경
+- 특정 환경(dev/qa/prod)에서만 동작해야 하나요?
+- 프로파일별 설정이 필요한가요?
+
+## 출력 형식
+
+```markdown
+## 요구사항 명확화 질문
+
+### 기능 범위
+1. [질문1]
+2. [질문2]
+
+### API 설계
+1. [질문1]
+
+### 데이터
+1. [질문1]
+
+...
+
+---
+답변을 바탕으로 구현 계획을 수립하겠습니다.
+```
+
+## 인자
+
+`$ARGUMENTS`: 사용자의 요청 내용을 요약해서 입력
+
+예: `/clarify 선박 위치 배치 저장 기능`
diff --git a/.claude/commands/perf-check.md b/.claude/commands/perf-check.md
new file mode 100644
index 0000000..a6a9eb0
--- /dev/null
+++ b/.claude/commands/perf-check.md
@@ -0,0 +1,72 @@
+# /perf-check - 성능 체크 명령어
+
+Spring Boot 배치 애플리케이션의 성능 관련 이슈를 점검합니다.
+
+## 분석 영역
+
+### 1. 데이터베이스 성능
+- JPA/MyBatis 쿼리 분석
+- N+1 문제 확인
+- 인덱스 활용 여부
+- Batch Insert/Update 적용 여부
+
+### 2. 메모리 관리
+- 대량 데이터 처리 시 메모리 사용
+- Stream 활용 여부
+- 페이징 처리 적용 여부
+
+### 3. 배치 처리
+- Chunk 크기 적절성
+- 병렬 처리 설정
+- Reader/Writer 최적화
+
+### 4. 커넥션 관리
+- 커넥션 풀 설정 (HikariCP)
+- 트랜잭션 범위 적절성
+- 커넥션 누수 가능성
+
+### 5. 외부 통신
+- HTTP Client 설정 (타임아웃, 커넥션 풀)
+- 재시도 정책
+- Circuit Breaker 패턴 적용
+
+## 출력 형식
+
+```markdown
+## 성능 체크 결과
+
+### 데이터베이스
+- [ ] N+1 문제: [발견 여부]
+- [ ] Batch 처리: [적용 현황]
+- [ ] 인덱스 활용: [상태]
+
+### 메모리
+- [ ] 대량 데이터 처리: [상태]
+- [ ] Stream 활용: [적용 여부]
+- [ ] 페이징: [적용 여부]
+
+### 배치 처리
+- [ ] Chunk 크기: [값 및 적절성]
+- [ ] 병렬 처리: [설정 상태]
+
+### 커넥션 관리
+- [ ] 풀 설정: [상태]
+- [ ] 트랜잭션 범위: [적절성]
+
+### 외부 통신
+- [ ] 타임아웃 설정: [상태]
+- [ ] 재시도 정책: [적용 여부]
+
+### 우선순위 개선 항목
+1. [항목1] - 예상 효과: [설명]
+2. [항목2] - 예상 효과: [설명]
+```
+
+## 인자
+
+`$ARGUMENTS`: 특정 영역만 체크 (db, memory, batch, connection, external)
+
+예시:
+- `/perf-check` - 전체 체크
+- `/perf-check db` - 데이터베이스만 체크
+- `/perf-check batch` - 배치 처리만 체크
diff --git a/.claude/commands/wrap.md b/.claude/commands/wrap.md
new file mode 100644
index 0000000..f516029
--- /dev/null
+++ b/.claude/commands/wrap.md
@@ -0,0 +1,65 @@
+# /wrap - Session Wrap-up Command
+
+세션 종료 시 다음 작업들을 병렬로 수행하는 명령어입니다.
+
+## 실행할 작업들 (병렬 에이전트)
+
+### 1. 문서 업데이트 체크
+다음 파일들의 업데이트 필요 여부를 확인하세요:
+- `CLAUDE.md`: 새로운 패턴이나 컨벤션이 발견되었는지
+- 이번 세션에서 중요한 기술 결정이 있었는지
+
+### 2. 반복 패턴 분석
+이번 세션에서 반복적으로 수행한 작업이 있는지 분석하세요:
+- 비슷한 코드 패턴을 여러 번 작성했는지
+- 동일한 명령어를 반복 실행했는지
+- 자동화할 수 있는 워크플로우가 있는지
+
+발견된 패턴은 `/commands`로 자동화를 제안하세요.
+
+### 3. 학습 내용 추출
+이번 세션에서 배운 내용을 정리하세요:
+- 새로 발견한 코드베이스의 특성
+- 해결한 문제와 그 해결 방법
+- 앞으로 주의해야 할 점
+
+### 4. 미완성 작업 정리
+완료하지 못한 작업이 있다면 정리하세요:
+- TODO 리스트에 남은 항목
+- 다음 세션에서 계속해야 할 작업
+- 블로커나 의존성 이슈
+
+### 5. 코드 품질 체크
+이번 세션에서 수정한 파일들에 대해:
+- 컴파일 에러가 없는지 (`mvn compile`)
+- 테스트가 통과하는지 (`mvn test`)
+
+## 출력 형식
+
+```markdown
+## Session Summary
+
+### 완료한 작업
+- [작업1]
+- [작업2]
+
+### 문서 업데이트 필요
+- [ ] CLAUDE.md: [업데이트 내용]
+
+### 발견된 패턴 (자동화 제안)
+- [패턴]: [자동화 방법]
+
+### 학습 내용
+- [내용1]
+- [내용2]
+
+### 미완성 작업
+- [ ] [작업1]
+- [ ] [작업2]
+
+### 코드 품질
+- Compile: [결과]
+- Test: [결과]
+```
+
+이 명령어를 실행할 때 Task 도구를 사용하여 여러 에이전트를 **병렬로** 실행하세요.
diff --git a/.claude/rules/code-style.md b/.claude/rules/code-style.md
index 0cb1563..f5f0203 100644
--- a/.claude/rules/code-style.md
+++ b/.claude/rules/code-style.md
@@ -44,7 +44,21 @@
- `@Builder` 허용
- `@Data` 사용 금지 (명시적으로 필요한 어노테이션만)
- `@AllArgsConstructor` 단독 사용 금지 (`@Builder`와 함께 사용)
-- `@Slf4j` 로거 사용
+
+## 로깅
+- `@Slf4j` (Lombok) 로거 사용
+- SLF4J `{}` 플레이스홀더에 printf 포맷 사용 금지 (`{:.1f}`, `{:d}`, `{%s}` 등)
+- 숫자 포맷이 필요하면 `String.format()`으로 변환 후 전달
+ ```java
+ // 잘못됨
+ log.info("처리율: {:.1f}%", rate);
+ // 올바름
+ log.info("처리율: {}%", String.format("%.1f", rate));
+ ```
+- 예외 로깅 시 예외 객체는 마지막 인자로 전달 (플레이스홀더 불필요)
+ ```java
+ log.error("처리 실패: {}", id, exception);
+ ```
## 예외 처리
- 비즈니스 예외는 커스텀 Exception 클래스 정의
diff --git a/.sdkmanrc b/.sdkmanrc
new file mode 100644
index 0000000..128dde5
--- /dev/null
+++ b/.sdkmanrc
@@ -0,0 +1,3 @@
+# Enable auto-env through SDKMAN config
+# Run 'sdk env' in this directory to switch versions
+java=17.0.18-amzn
diff --git a/docs/cache-benchmark-report.md b/docs/cache-benchmark-report.md
new file mode 100644
index 0000000..4986f92
--- /dev/null
+++ b/docs/cache-benchmark-report.md
@@ -0,0 +1,314 @@
+# 일일 캐시 성능 벤치마크 보고서
+
+## 선박 항적 리플레이 서비스 — 캐시 vs DB 정량 비교
+
+| 항목 | 내용 |
+|------|------|
+| 측정일 | 2026-02-07 |
+| 대상 시스템 | Signal Batch — ChunkedTrackStreamingService (WebSocket 스트리밍) |
+| 운영 환경 | prod 프로파일, Query DB 커넥션 풀 180 |
+| 캐시 구성 | DailyTrackCacheManager — D-1 ~ D-7 인메모리 캐시, STRtree 공간 인덱스 |
+| 측정 방식 | QueryBenchmark 내부 클래스 → `cache-benchmark.log` JSON 기록 |
+| 샘플 수 | 12건 (CACHE 3, DB 2, HYBRID 5, CACHE+Today 2) |
+
+---
+
+## 1. 측정 경로 분류
+
+쿼리 시간 범위에 따라 4가지 경로로 처리된다.
+
+| 경로 | 설명 | 데이터 소스 |
+|------|------|------------|
+| **CACHE** | 요청 일자 전체가 인메모리 캐시에 존재 | 메모리 |
+| **DB** | 캐시 미스 — Daily 테이블 직접 조회 | DB |
+| **HYBRID** | 캐시 히트 일자 + 캐시 범위 밖 일자 DB 조회 | 메모리 + DB |
+| **CACHE+Today** | 캐시 히트 + 오늘 데이터(Hourly/5min 테이블) | 메모리 + DB |
+
+### 오늘 데이터 구간 구조
+
+오늘(D-0) 데이터는 캐시 대상이 아니며, 시간 경과에 따라 두 테이블로 분할 조회된다.
+
+
+```
+ 오늘 00:00 ~ 12:00 12:00 ~ 12:35 현재(12:40)
+├──── Hourly 테이블 조회 ──────┤── 5min 조회 ──┤
+ (12개 범위, 1시간 단위) (7개 범위, 5분 단위)
+```
+
+- **Hourly**: 자정부터 약 1시간 전까지 → 시간 단위 범위 (약 12개)
+- **5min**: 최근 약 1시간 이내 → 5분 단위 범위 (약 7개)
+- 각 범위마다 DB 커넥션 1회 + Viewport Pass1 1회 발생 → 오늘 구간 커넥션 = 범위 수 × 2
+
+---
+
+## 2. 전체 측정 데이터
+
+### 2.1 요약 테이블
+
+| # | 경로 | Zoom | 일수 | 캐시/DB | 선박 수 | 트랙 수 | 응답시간(ms) | DB커넥션 | DB쿼리시간(ms) |
+|---|------|------|------|---------|---------|---------|-------------|----------|---------------|
+| 1 | CACHE | 10 | 3 | 3/0 | 443 | 986 | **575** | 3 | 0 |
+| 2 | DB | 10 | 2 | 0/2 | 352 | 587 | **7,221** | 8 | 3,475 |
+| 3 | DB | 10 | 2 | 0/2 | 12,253 | 18,502 | **8,195** | 19 | 1,443 |
+| 4 | CACHE | 10 | 2 | 2/0 | 10,690 | 16,942 | **1,439** | 2 | 0 |
+| 5 | CACHE | 10 | 2 | 2/0 | 10,690 | 16,942 | **1,374** | 2 | 0 |
+| 6 | HYBRID | 8 | 5 | 3/2 | 9,958 | 29,362 | **8,900** | 16 | 3,301 |
+| 7 | HYBRID | 9 | 5 | 3/2 | 547 | 1,927 | **1,373** | 11 | 550 |
+| 8 | HYBRID | 8 | 5 | 3/2 | 4,589 | 12,422 | **2,910** | 12 | 715 |
+| 9 | HYBRID | 8 | 5 | 3/2 | 5,760 | 23,283 | **3,651** | 15 | 1,048 |
+| 10 | CACHE+Today | 10 | 3+오늘 | 3/0 | 105 | 301 | **6,091** | 56 | 0 |
+| 11 | HYBRID | 8 | 5 | 3/2 | 52,151 | 162,849 | **105,212** | 45 | 93,319 |
+| 12 | CACHE+Today | 12 | 3+오늘 | 3/0 | 6,990 | 17,024 | **9,744** | 56 | 0 |
+
+### 2.2 DB 커넥션 세분화
+
+| # | 경로 | 합계 | Viewport Pass1 | Daily Pages | Hourly/5min | TableCheck |
+|---|------|------|----------------|-------------|-------------|------------|
+| 1 | CACHE | 3 | 0 | 0 | 0 | **3** |
+| 2 | DB | 8 | 2 | 2 | 0 | 2 |
+| 3 | DB | 19 | 2 | 2 | 0 | 2 |
+| 4 | CACHE | 2 | 0 | 0 | 0 | **2** |
+| 5 | CACHE | 2 | 0 | 0 | 0 | **2** |
+| 6 | HYBRID | 16 | 2 | 2 | 0 | 5 |
+| 7 | HYBRID | 11 | 2 | 2 | 0 | 5 |
+| 8 | HYBRID | 12 | 2 | 2 | 0 | 5 |
+| 9 | HYBRID | 15 | 2 | 2 | 0 | 5 |
+| 10 | CACHE+Today | 56 | **21** | 0 | **21** | **14** |
+| 11 | HYBRID | 45 | 2 | **6** | 0 | 5 |
+| 12 | CACHE+Today | 56 | **21** | 0 | **21** | **14** |
+
+> 합산 검증: 전 12건 모두 세분화 카운터 합 = 합계 일치 확인 (VesselInfo 카운터 포함, 표에서는 생략).
+
+**CACHE+Today (#10, #12) 커넥션 56건 내역**:
+- Hourly/5min 21건: 오늘 00:00~현재 구간 (Hourly 약 12건 + 5min 약 7건 + 폴백)
+- Viewport Pass1 21건: 동일 범위에 대한 뷰포트 교차 선박 수집 (범위당 1회)
+- TableCheck 14건: Daily 3건 + Hourly/5min 존재 확인 약 11건
+
+### 2.3 캐시 경로 간소화 지표
+
+캐시 경로에서는 원본 데이터를 메모리에 보유하므로 간소화 전/후를 측정할 수 있다.
+
+| # | 경로 | Zoom | 원본 포인트 | 간소화 후 | 압축률 | 간소화 시간(ms) | 배치 감소 |
+|---|------|------|------------|----------|--------|----------------|-----------|
+| 1 | CACHE | 10 | 1,083,566 | 11,212 | 99% | 133 | 50→3 (94%) |
+| 4 | CACHE | 10 | 13,502,970 | 172,066 | 99% | 1,075 | 602→10 (98%) |
+| 5 | CACHE | 10 | 13,502,970 | 172,066 | 99% | 981 | 602→10 (98%) |
+| 6 | HYBRID | 8 | 7,582,515 | 152,734 | 98% | 500 | 335→12 (96%) |
+| 7 | HYBRID | 9 | 1,049,434 | 11,634 | 99% | 74 | 50→5 (90%) |
+| 8 | HYBRID | 8 | 1,618,310 | 61,434 | 96% | 125 | 72→5 (93%) |
+| 9 | HYBRID | 8 | 3,202,500 | 155,633 | 95% | 277 | 137→12 (91%) |
+| 10 | CACHE+Today | 10 | 355,256 | 4,159 | 99% | 24 | 17→6 (65%) |
+| 11 | HYBRID | 8 | 41,634,918 | 732,470 | 98% | 2,411 | 1,813→42 (98%) |
+| 12 | CACHE+Today | 12 | 14,404,225 | 259,541 | 98% | 1,258 | 639→23 (96%) |
+
+> DB 경로(#2, #3)는 SQL 레벨에서 `ST_Simplify` 적용 후 수신하므로 앱 레벨 압축률 산출 불가 (before = after).
+
+---
+
+## 3. 경로별 정량 비교
+
+### 3.1 CACHE vs DB — 동일 규모 직접 비교
+
+#### 대규모: #4 CACHE vs #3 DB
+
+| 지표 | DB (#3) | CACHE (#4) | 개선 |
+|------|---------|------------|------|
+| 선박 수 | 12,253 | 10,690 | (유사 규모) |
+| **응답시간** | 8,195 ms | 1,439 ms | **5.7배 빨라짐** |
+| **DB 커넥션** | 19 | 2 | **89% 감소** |
+| DB 쿼리 시간 | 1,443 ms | 0 ms | **100% 절감** |
+| 배치 전송 수 | 11 | 10 | 유사 |
+
+#### 소규모: #2 DB vs #1 CACHE
+
+| 지표 | DB (#2) | CACHE (#1) | 개선 |
+|------|---------|------------|------|
+| 선박 수 | 352 | 443 | (유사 규모) |
+| **응답시간** | 7,221 ms | 575 ms | **12.6배 빨라짐** |
+| **DB 커넥션** | 8 | 3 | **63% 감소** |
+| DB 쿼리 시간 | 3,475 ms | 0 ms | **100% 절감** |
+| 배치 전송 수 | 2 | 3 | 유사 |
+
+### 3.2 HYBRID 경로 — 규모별 성능 변화
+
+5일 범위 쿼리 (캐시 3일 + DB 2일):
+
+| # | 선박 수 | 응답시간 | DB커넥션 | DB쿼리시간 |
+|---|---------|---------|----------|-----------|
+| 7 | 547 | 1,373 ms | 11 | 550 ms |
+| 8 | 4,589 | 2,910 ms | 12 | 715 ms |
+| 9 | 5,760 | 3,651 ms | 15 | 1,048 ms |
+| 6 | 9,958 | 8,900 ms | 16 | 3,301 ms |
+| 11 | 52,151 | 105,212 ms | 45 | 93,319 ms |
+
+- 소규모(~500척): 캐시 일자가 대부분의 처리를 흡수하여 **1.4초** 수준으로 응답.
+- 중규모(5K~10K척): DB 쿼리 부담 증가하나 캐시 일자가 완충하여 **3~9초** 수준.
+- 대규모(52K척): 캐시 미스 일자의 데이터량이 크면 DB 의존도가 높아져 **100초+** 수준.
+- 캐시 적용 일수가 많을수록(현재 3/5일 = 60%) HYBRID 경로의 DB 부담이 경감된다.
+
+### 3.3 CACHE+Today 경로 — 오늘 데이터 포함 쿼리
+
+| # | Zoom | 선박 수 | 응답시간 | DB커넥션 | 오늘 구간 커넥션 |
+|---|------|---------|---------|----------|----------------|
+| 10 | 10 | 105 | 6,091 ms | 56 | 42 (H5m 21 + VP 21) |
+| 12 | 12 | 6,990 | 9,744 ms | 56 | 42 (H5m 21 + VP 21) |
+
+**핵심 발견**:
+- 두 쿼리 모두 동일한 시간 범위(3일+오늘)이므로 커넥션 구조가 동일하며, 뷰포트 크기만 다름.
+- 오늘 구간(00:00~현재)만으로 **42건의 DB 커넥션**이 발생하여, 순수 CACHE 경로(2~3건)와 큰 차이를 보인다.
+- 선박 수가 적은 #10(105척)도 6초가 소요되며, 이는 오늘 구간의 범위별 개별 커넥션 오버헤드가 원인이다.
+
+### 3.4 줌 레벨별 간소화 효과
+
+| Zoom | 대표 # | 원본 포인트 | 간소화 후 | 압축률 | 선박당 평균 포인트 |
+|------|--------|------------|----------|--------|------------------|
+| 8 | #6 | 7,582,515 | 152,734 | 98% | 15.3 |
+| 9 | #7 | 1,049,434 | 11,634 | 99% | 21.3 |
+| 10 | #4 | 13,502,970 | 172,066 | 99% | 16.1 |
+| 12 | #12 | 14,404,225 | 259,541 | 98% | 37.1 |
+
+- 줌 8~10: 선박당 15~21 포인트로 압축 — 해역 수준 조회에 최적.
+- 줌 12: 선박당 37 포인트 — 항만 수준 상세 조회에서 더 많은 포인트를 유지.
+- 전 줌 레벨에서 95~99% 압축률 달성.
+
+---
+
+## 4. DB 커넥션 구성 분석
+
+### 4.1 경로별 커넥션 구성 패턴
+
+```
+CACHE (순수) [==TC==] 2~3건
+ TableCheck만 발생
+
+DB (순수) [VP][DA][..기타..][TC] 8~19건
+ 각 항목 균등 분포
+
+HYBRID [VP][DA][..기타..........][TC---] 11~45건
+ 규모에 비례 증가
+
+CACHE+Today [VP----------][H5m---------][TC------] 56건
+ 오늘 구간의 Hourly/5min + Viewport가 대부분
+```
+
+### 4.2 커넥션 풀 영향 분석
+
+Query DataSource 커넥션 풀 180 기준:
+
+| 경로 | 쿼리당 사용 | 동시 10쿼리 시 누적 | 풀 압박 수준 |
+|------|------------|-------------------|------------|
+| CACHE | 2~3 | 30 | 매우 낮음 (17%) |
+| HYBRID (소규모) | 11~15 | 150 | 보통 (83%) |
+| DB | 8~19 | 190 | 보통~높음 |
+| CACHE+Today | 56 | 560 | 높음 |
+
+> 커넥션은 순간 점유가 아닌 순차 사용이므로 실제 동시 점유 수는 위 수치보다 작다. 캐시 적용으로 전체 쿼리 중 CACHE 경로 비율이 높아지면 풀 전체 부담이 크게 감소한다.
+
+---
+
+## 5. 종합 성능 비교
+
+### 5.1 핵심 개선 지표
+
+| 지표 | DB 경로 | CACHE 경로 | 개선율 |
+|------|---------|------------|--------|
+| 응답시간 (대규모, 만 척 이상) | 8,195 ms | 1,439 ms | **5.7배** |
+| 응답시간 (소규모, 수백 척) | 7,221 ms | 575 ms | **12.6배** |
+| DB 커넥션 수 (대규모) | 19건 | 2건 | **89% 감소** |
+| DB 커넥션 수 (소규모) | 8건 | 3건 | **63% 감소** |
+| DB 쿼리 시간 | 1,443~3,475 ms | 0 ms | **100% 절감** |
+| 포인트 간소화 | SQL ST_Simplify | 앱 레벨 95~99% | 캐시만 측정 가능 |
+
+### 5.2 경로별 응답시간 분포
+
+```
+ 응답시간 (ms, 로그 스케일 아님)
+경로 0 2,000 4,000 6,000 8,000 10,000
+CACHE (순수) |█| 575~1,439
+HYBRID (소규모) |██| 1,373
+HYBRID (중규모) |█████| 2,910~3,651
+CACHE+Today |████████████| 6,091~9,744
+DB (순수) |████████████████| 7,221~8,195
+HYBRID (대규모) |██████████████████| 8,900
+```
+
+> HYBRID 대규모(#11, 52K척, 105초)는 스케일 초과로 표시 생략.
+
+### 5.3 캐시 적용에 따른 운영 시나리오별 예측
+
+D-1 ~ D-7 캐시가 적용된 상태에서:
+
+| 사용 패턴 | 예상 경로 | 예상 응답시간 | DB 커넥션 |
+|----------|----------|-------------|----------|
+| 과거 1~7일만 조회 | CACHE | **0.5~1.5초** | 2~3건 |
+| 과거 수일 + 오늘 | CACHE+Today | 6~10초 | ~56건 |
+| 7일 이전 과거 포함 | HYBRID / DB | 1~9초 (규모 의존) | 8~45건 |
+
+---
+
+## 6. 캐시 범위 확장 시 권장 구성
+
+현재 D-1 ~ D-7 캐시 구성에서 조회 기간 범위를 확장하고자 할 경우, 아래 구성을 권장한다.
+
+### 6.1 현재 구성
+
+```yaml
+cache:
+ daily-track:
+ enabled: true
+ retention-days: 7 # D-1 ~ D-7 캐시
+ max-memory-gb: 6 # 최대 메모리 사용량
+ warmup-async: true # 비동기 워밍업
+```
+
+- 7일 이내 과거 조회: CACHE 경로 (0.5~1.5초)
+- 7일 초과 과거 포함: HYBRID/DB 경로로 폴백
+
+### 6.2 확장 권장안
+
+| 시나리오 | retention-days | max-memory-gb | 예상 효과 |
+|----------|---------------|---------------|----------|
+| **현재** | 7 | 6 | 1주일 이내 CACHE, 이후 DB |
+| **2주 확장** | 14 | 12 | 2주 리플레이까지 CACHE 커버 |
+| **1개월 확장** | 30 | 25 | 월간 분석 조회까지 CACHE 커버 |
+
+**확장 시 고려사항**:
+
+1. **메모리 산정**: 현재 7일 캐시 ≈ 4GB 기준, 선형 증가 추정.
+ - 14일: ~12GB, 30일: ~25GB
+ - 서버 가용 메모리와 JVM 힙 설정(`-Xmx`) 여유 확인 필요.
+
+2. **워밍업 시간**: retention-days 증가에 비례하여 초기 로드 시간 증가.
+ - 7일: 약 1~2분, 14일: 약 2~4분, 30일: 약 5~10분 (비동기이므로 서비스 가용성 영향 없음)
+
+3. **HYBRID 비율 감소**: retention-days 확장 시 DB 폴백 빈도가 줄어, HYBRID 경로가 줄고 순수 CACHE 경로 비율이 증가한다. 이는 DB 커넥션 풀 부담 경감에 직접 기여한다.
+
+4. **CACHE+Today 경로는 retention-days와 무관**: 오늘(D-0) 데이터는 항상 Hourly/5min 테이블에서 DB 조회한다. 이 구간의 커넥션 최적화는 별도 과제이다.
+
+### 6.3 단계적 확장 전략
+
+```
+Phase 1 (현재) : retention-days=7, max-memory-gb=6 → 1주 커버
+Phase 2 (권장) : retention-days=14, max-memory-gb=12 → 2주 커버, 주간 비교 분석 지원
+Phase 3 (선택) : retention-days=30, max-memory-gb=25 → 월간 커버, 장기 항적 분석 지원
+```
+
+각 단계 전환 시 서버 메모리 여유와 워밍업 시간을 모니터링하며, JVM 힙 설정을 함께 조정한다.
+
+---
+
+## 7. 결론
+
+### 7.1 캐시 효과 확인
+
+1. **응답시간**: 순수 CACHE 경로에서 DB 대비 **5.7~12.6배** 빨라짐 확인.
+2. **DB 커넥션**: 순수 CACHE 경로에서 DB 대비 **63~89%** 감소 확인.
+3. **간소화**: 캐시 경로에서 줌 레벨에 따라 **95~99%** 포인트 압축, 배치 전송 수 **90~98%** 감소.
+4. **DB 쿼리 시간**: CACHE 경로에서 **0ms** — DB 부하 완전 제거.
+
+### 7.2 운영 권장사항
+
+| 항목 | 현황 | 권장 방향 |
+|------|------|----------|
+| 캐시 보존 기간 | 7일 | 사용 패턴에 따라 14~30일로 확장 검토 |
+| CACHE+Today 커넥션 | 오늘 구간 범위별 개별 DB 커넥션 (56건) | 오늘 데이터 범위 병합 또는 별도 캐시 검토 |
\ No newline at end of file
diff --git a/docs/cache-benchmark-summary.md b/docs/cache-benchmark-summary.md
new file mode 100644
index 0000000..393de8c
--- /dev/null
+++ b/docs/cache-benchmark-summary.md
@@ -0,0 +1,102 @@
+# 일일 캐시 성능 개선 요약보고서
+
+| 항목 | 내용 |
+|------|------|
+| 측정일 | 2026-02-07 |
+| 대상 | 선박 항적 리플레이 서비스 (WebSocket 스트리밍) |
+| 개선 내용 | 일일(Daily) 집계 데이터 7일분 인메모리 캐시 적용 |
+| 측정 건수 | 12건 (CACHE 3, DB 2, HYBRID 5, CACHE+Today 2) |
+
+---
+
+## 1. 핵심 성능 개선 지표
+
+| 지표 | DB 경로 (개선 전) | CACHE 경로 (개선 후) | 개선율 |
+|------|-------------------|---------------------|--------|
+| **응답시간** (만 척 이상) | 8.2초 | 1.4초 | **5.7배 단축** |
+| **응답시간** (수백 척) | 7.2초 | 0.6초 | **12.6배 단축** |
+| **DB 커넥션** (만 척 이상) | 19건 | 2건 | **89% 감소** |
+| **DB 커넥션** (수백 척) | 8건 | 3건 | **63% 감소** |
+| **DB 쿼리 시간** | 1.4 ~ 3.5초 | 0초 | **100% 절감** |
+| **포인트 압축률** | SQL 처리 | 앱 레벨 95 ~ 99% | 동등 품질 유지 |
+
+---
+
+## 2. 경로별 응답시간 비교
+
+```
+경로 응답시간
+CACHE (순수) ██ 0.6 ~ 1.4초
+HYBRID (소규모) ██ 1.4초
+HYBRID (중규모) █████ 2.9 ~ 3.7초
+CACHE+Today ████████████ 6.1 ~ 9.7초
+DB (순수) ████████████████ 7.2 ~ 8.2초
+```
+
+- **CACHE**: 캐시 범위 내 과거 데이터만 조회 시, 가장 빠른 응답
+- **HYBRID**: 캐시 + DB 병합 — 캐시 비율이 높을수록 DB 부담 경감
+- **CACHE+Today**: 오늘 데이터 포함 시, Hourly/5min 테이블 개별 조회로 커넥션 다수 발생
+
+---
+
+## 3. DB 커넥션 풀 부담 변화
+
+Query DataSource 커넥션 풀 180 기준:
+
+| 경로 | 쿼리당 커넥션 | 동시 10쿼리 | 풀 사용률 |
+|------|-------------|------------|----------|
+| CACHE | 2 ~ 3 | ~30 | **17%** (여유) |
+| HYBRID (소규모) | 11 ~ 15 | ~150 | 83% |
+| DB | 8 ~ 19 | ~190 | 100%+ |
+
+> 캐시 적용으로 전체 쿼리 중 CACHE 경로 비율이 높아지면, DB 커넥션 풀 전체 부담이 크게 감소한다.
+
+---
+
+## 4. 간소화 파이프라인 효과
+
+캐시 경로에서 원본 데이터 → 3단계 간소화(Douglas-Peucker + 거리/시간 샘플링 + 줌 레벨 샘플링) 적용:
+
+| 줌 레벨 | 원본 포인트 | 간소화 후 | 압축률 | 선박당 평균 |
+|---------|------------|----------|--------|-----------|
+| 8 | 7.6M | 153K | 98% | 15 포인트 |
+| 9 | 1.0M | 12K | 99% | 21 포인트 |
+| 10 | 13.5M | 172K | 99% | 16 포인트 |
+| 12 | 14.4M | 260K | 98% | 37 포인트 |
+
+- 간소화 CPU 시간: 24ms ~ 1,258ms (DB 대기 없이 순수 CPU 연산)
+- 전 줌 레벨에서 95 ~ 99% 데이터 압축 달성
+
+---
+
+## 5. 운영 시나리오별 예상 성능
+
+| 사용 패턴 | 예상 경로 | 예상 응답시간 | DB 커넥션 |
+|----------|----------|-------------|----------|
+| 과거 1~7일만 조회 | CACHE | **0.6 ~ 1.4초** | 2~3건 |
+| 과거 수일 + 오늘 | CACHE+Today | 6 ~ 10초 | ~56건 |
+| 7일 이전 과거 포함 | HYBRID / DB | 1 ~ 9초 (규모 의존) | 8~45건 |
+
+---
+
+## 6. 향후 확장 권장안
+
+| 시나리오 | 캐시 보존 기간 | 메모리 | 효과 |
+|----------|---------------|--------|------|
+| 현재 | 7일 | 6GB | 1주 이내 CACHE 경로 |
+| 2주 확장 | 14일 | 12GB | 주간 비교 분석 지원 |
+| 1개월 확장 | 30일 | 25GB | 월간 항적 분석 지원 |
+
+> 캐시 보존 기간 확장 시 HYBRID 경로 비율이 줄고 순수 CACHE 비율 증가 → DB 부담 추가 경감
+
+---
+
+## 7. 결론
+
+| 항목 | 효과 |
+|------|------|
+| 응답 속도 | DB 대비 **5.7 ~ 12.6배** 단축 |
+| DB 부하 | 커넥션 **63 ~ 89%** 감소, 쿼리 시간 **100%** 절감 |
+| 데이터 품질 | 줌 레벨별 95 ~ 99% 압축, DB 경로와 동등 품질 |
+| 동시 사용자 수용 | DB 커넥션 경합 해소로 동시 처리 가능 수 증가 |
+| 확장성 | 캐시 보존 기간 확장으로 추가 개선 가능 |
diff --git a/docs/일일 캐시 성능 벤치마크 보고서.docx b/docs/일일 캐시 성능 벤치마크 보고서.docx
new file mode 100644
index 0000000..bcfb823
Binary files /dev/null and b/docs/일일 캐시 성능 벤치마크 보고서.docx differ
diff --git a/docs/일일 캐시 성능 벤치마크 요약 보고서.docx b/docs/일일 캐시 성능 벤치마크 요약 보고서.docx
new file mode 100644
index 0000000..27ec876
Binary files /dev/null and b/docs/일일 캐시 성능 벤치마크 요약 보고서.docx differ
diff --git a/docs/항적조회,리플레이 성능 부하 개선 결과보고서.docx b/docs/항적조회,리플레이 성능 부하 개선 결과보고서.docx
new file mode 100644
index 0000000..03c288c
Binary files /dev/null and b/docs/항적조회,리플레이 성능 부하 개선 결과보고서.docx differ
diff --git a/pom.xml b/pom.xml
index 38b07b6..269c946 100644
--- a/pom.xml
+++ b/pom.xml
@@ -77,6 +77,12 @@
spring-boot-starter-aop
+
+
+ org.springframework.boot
+ spring-boot-starter-webflux
+
+
org.springframework.boot
spring-boot-starter-cache
diff --git a/scripts/deploy-only.bat b/scripts/deploy-only.bat
new file mode 100644
index 0000000..f15a228
--- /dev/null
+++ b/scripts/deploy-only.bat
@@ -0,0 +1,219 @@
+@echo off
+chcp 65001 >nul
+REM ===============================================
+REM Signal Batch Deploy Only Script
+REM (Build with IntelliJ UI first)
+REM ===============================================
+
+setlocal enabledelayedexpansion
+
+REM Configuration
+set "SERVER_IP=10.26.252.51"
+set "SERVER_USER=root"
+set "SERVER_PATH=/devdata/apps/bridge-db-monitoring"
+set "JAR_NAME=vessel-batch-aggregation.jar"
+set "BACKUP_DIR=!SERVER_PATH!/backups"
+
+echo ===============================================
+echo Signal Batch Deploy System (Deploy Only)
+echo ===============================================
+echo [INFO] Deploy Start: !date! !time!
+echo [INFO] Target Server: !SERVER_IP!
+echo.
+
+REM 1. Set correct working directory and check JAR file
+echo =============== Working Directory Setup ===============
+echo [INFO] Current directory: !CD!
+echo [INFO] Script directory: %~dp0
+
+REM Change to project root directory (parent of scripts)
+cd /d "%~dp0.."
+echo [INFO] Project root directory: !CD!
+
+echo.
+echo =============== JAR File Check ===============
+set "JAR_PATH=target\!JAR_NAME!"
+
+if not exist "!JAR_PATH!" (
+ echo [ERROR] JAR file not found: !JAR_PATH!
+ echo [INFO] Current directory: !CD!
+ echo.
+ echo Please build the project first using IntelliJ IDEA:
+ echo 1. Open Maven tool window: View ^> Tool Windows ^> Maven
+ echo 2. Double-click: Lifecycle ^> clean
+ echo 3. Double-click: Lifecycle ^> package
+ echo 4. Verify target/!JAR_NAME! exists
+ echo.
+ echo Checking for any JAR files in target directory:
+ if exist "target\" (
+ dir target\*.jar 2>nul
+ if !ERRORLEVEL! neq 0 (
+ echo [INFO] Target directory exists but no JAR files found
+ )
+ ) else (
+ echo [INFO] Target directory does not exist - project not built yet
+ )
+ pause
+ exit /b 1
+)
+
+for %%I in ("!JAR_PATH!") do (
+ echo [INFO] JAR File: %%~nxI
+ echo [INFO] File Size: %%~zI bytes
+ echo [INFO] Modified: %%~tI
+)
+
+echo [SUCCESS] JAR file ready for deployment
+
+REM 2. SSH Connection Test
+echo.
+echo =============== SSH Connection Test ===============
+ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "echo 'SSH connection OK'" 2>nul
+set CONNECTION_RESULT=!ERRORLEVEL!
+if !CONNECTION_RESULT! neq 0 (
+ echo [ERROR] SSH connection failed
+ echo [INFO] Please check:
+ echo - SSH key authentication setup
+ echo - Network connectivity to !SERVER_IP!
+ echo - Server is accessible
+ echo.
+ echo Run setup-ssh-key.bat to configure SSH keys
+ pause
+ exit /b 1
+)
+echo [SUCCESS] SSH connection successful
+
+REM 3. Check current server status
+echo.
+echo =============== Current Server Status ===============
+ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh status" 2>nul
+set SERVER_RUNNING=!ERRORLEVEL!
+
+REM 4. Create backup
+echo.
+echo =============== Create Backup ===============
+ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "mkdir -p !BACKUP_DIR!"
+
+REM Generate backup timestamp
+for /f "tokens=2 delims==" %%I in ('wmic os get localdatetime /value') do if not "%%I"=="" set DATETIME=%%I
+set BACKUP_TIMESTAMP=!DATETIME:~0,8!_!DATETIME:~8,6!
+
+ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "if [ -f !SERVER_PATH!/!JAR_NAME! ]; then echo '[INFO] Creating backup...'; cp !SERVER_PATH!/!JAR_NAME! !BACKUP_DIR!/!JAR_NAME!.backup.!BACKUP_TIMESTAMP!; echo '[INFO] Backup created: !BACKUP_DIR!/!JAR_NAME!.backup.!BACKUP_TIMESTAMP!'; ls -la !BACKUP_DIR!/!JAR_NAME!.backup.!BACKUP_TIMESTAMP!; else echo '[INFO] No existing JAR file to backup (first deployment)'; fi"
+
+REM 5. Stop application
+if !SERVER_RUNNING! equ 0 (
+ echo.
+ echo =============== Stop Application ===============
+ echo [INFO] Stopping running application...
+ ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh stop"
+ if !ERRORLEVEL! neq 0 (
+ echo [ERROR] Failed to stop application
+ exit /b 1
+ )
+ echo [SUCCESS] Application stopped
+) else (
+ echo.
+ echo [INFO] Application not running, proceeding with deployment
+)
+
+REM 6. Deploy new JAR
+echo.
+echo =============== Deploy New JAR ===============
+echo [INFO] Transferring JAR file...
+scp "!JAR_PATH!" !SERVER_USER!@!SERVER_IP!:!SERVER_PATH!/
+if !ERRORLEVEL! neq 0 (
+ echo [ERROR] File transfer failed
+ goto :rollback_option
+)
+
+echo [INFO] Setting permissions...
+ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "chmod 644 !SERVER_PATH!/!JAR_NAME!"
+
+echo [SUCCESS] JAR file deployed
+
+REM 7. Transfer version info (if exists)
+echo.
+echo =============== Version Information ===============
+if exist "target\version.txt" (
+ echo [INFO] Transferring version information...
+ scp "target\version.txt" !SERVER_USER!@!SERVER_IP!:!SERVER_PATH!/
+) else (
+ echo [INFO] No version file found, creating basic version info...
+ ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "echo 'DEPLOY_TIME=!date! !time!' > !SERVER_PATH!/version.txt"
+)
+
+REM 8. Start application
+echo.
+echo =============== Start Application ===============
+echo [INFO] Starting application...
+ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh start"
+if !ERRORLEVEL! neq 0 (
+ echo [ERROR] Failed to start application
+ goto :rollback_option
+)
+
+REM 9. Wait and verify
+echo.
+echo =============== Deployment Verification ===============
+echo [INFO] Waiting for application startup (30 seconds)...
+timeout /t 30 /nobreak > nul
+
+echo [INFO] Checking application status...
+ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh status"
+if !ERRORLEVEL! neq 0 (
+ echo [ERROR] Application not running properly
+ goto :rollback_option
+)
+
+echo [INFO] Performing health check...
+ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "curl -f http://localhost:8090/actuator/health --max-time 10" 2>nul
+if !ERRORLEVEL! neq 0 (
+ echo [WARN] Health check failed, but application appears to be running
+ echo [INFO] Give it a few more minutes to fully start up
+)
+
+REM 10. Cleanup old backups
+echo.
+echo =============== Cleanup ===============
+echo [INFO] Cleaning up old backups (keeping recent 7)...
+ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "cd !BACKUP_DIR!; ls -t !JAR_NAME!.backup.* 2>/dev/null | tail -n +8 | xargs rm -f 2>/dev/null || true; echo '[INFO] Backup cleanup completed'"
+
+REM 11. Success
+echo.
+echo =============== Deployment Successful ===============
+echo [SUCCESS] Deployment completed successfully!
+echo [INFO] Deployment time: !date! !time!
+echo [INFO] Backup created: !JAR_NAME!.backup.!BACKUP_TIMESTAMP!
+echo [INFO] Server dashboard: http://!SERVER_IP!:8090/static/admin/batch-admin.html
+echo [INFO] Server logs: ssh !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh logs"
+echo.
+echo Quick commands:
+echo server-status.bat - Check server status
+echo server-logs.bat tail - Monitor logs
+echo rollback.bat !BACKUP_TIMESTAMP! - Rollback if needed
+
+goto :end
+
+:rollback_option
+echo.
+echo =============== Deployment Failed ===============
+echo [ERROR] Deployment failed!
+echo.
+set /p ROLLBACK="Attempt rollback to previous version? (y/N): "
+if /i "!ROLLBACK!"=="y" (
+ echo [INFO] Attempting rollback...
+ if defined BACKUP_TIMESTAMP (
+ call rollback.bat !BACKUP_TIMESTAMP!
+ ) else (
+ echo [ERROR] No backup timestamp available for rollback
+ echo [INFO] Manual recovery may be required
+ )
+) else (
+ echo [INFO] Manual recovery required
+ echo [INFO] SSH to server: ssh !SERVER_USER!@!SERVER_IP!
+ echo [INFO] Check status: cd !SERVER_PATH! && ./vessel-batch-control.sh status
+)
+exit /b 1
+
+:end
+endlocal
\ No newline at end of file
diff --git a/scripts/deploy-query-server.bat b/scripts/deploy-query-server.bat
new file mode 100644
index 0000000..320e63e
--- /dev/null
+++ b/scripts/deploy-query-server.bat
@@ -0,0 +1,47 @@
+@echo off
+REM ====================================
+REM 조회 전용 서버 배포 스크립트 (10.29.17.90)
+REM ====================================
+
+echo ======================================
+echo Query-Only Server Deployment Script
+echo Target: 10.29.17.90
+echo Profile: query
+echo ======================================
+
+REM 프로젝트 루트 디렉토리로 이동
+cd /d %~dp0\..
+
+REM 빌드
+echo.
+echo [1/3] Building project...
+call mvn clean package -DskipTests
+
+if %ERRORLEVEL% NEQ 0 (
+ echo Build failed!
+ pause
+ exit /b 1
+)
+
+echo.
+echo [2/3] Stopping existing application...
+REM SSH를 통해 원격 서버의 기존 프로세스 종료
+ssh mpc@10.29.17.90 "pkill -f 'signal_batch.*query' || true"
+
+echo.
+echo [3/3] Deploying and starting application...
+REM JAR 파일 복사
+scp target\signal_batch-0.0.1-SNAPSHOT.jar mpc@10.29.17.90:/home/mpc/app/
+
+REM 원격 서버에서 애플리케이션 시작 (query 프로파일)
+ssh mpc@10.29.17.90 "cd /home/mpc/app && nohup java -jar signal_batch-0.0.1-SNAPSHOT.jar --spring.profiles.active=query > query-server.log 2>&1 &"
+
+echo.
+echo ======================================
+echo Deployment completed!
+echo Server: 10.29.17.90
+echo Profile: query
+echo Log: /home/mpc/app/query-server.log
+echo ======================================
+
+pause
diff --git a/scripts/deploy-safe.bat b/scripts/deploy-safe.bat
new file mode 100644
index 0000000..e25cb08
--- /dev/null
+++ b/scripts/deploy-safe.bat
@@ -0,0 +1,237 @@
+@echo off
+chcp 65001 >nul
+REM ===============================================
+REM Signal Batch Safe Deploy Script
+REM (with running application check)
+REM ===============================================
+
+setlocal enabledelayedexpansion
+
+REM Configuration
+set "SERVER_IP=10.26.252.48"
+set "SERVER_USER=root"
+set "SERVER_PATH=/devdata/apps/bridge-db-monitoring"
+set "JAR_NAME=vessel-batch-aggregation.jar"
+set "BACKUP_DIR=!SERVER_PATH!/backups"
+
+echo ===============================================
+echo Signal Batch Safe Deploy System
+echo ===============================================
+echo [INFO] Deploy Start: !date! !time!
+echo [INFO] Target Server: !SERVER_IP!
+echo.
+
+REM Set working directory
+cd /d "%~dp0.."
+echo [INFO] Project directory: !CD!
+
+REM 1. Check JAR file
+echo.
+echo =============== JAR File Check ===============
+set "JAR_PATH=target\!JAR_NAME!"
+
+if not exist "!JAR_PATH!" (
+ echo [ERROR] JAR file not found: !JAR_PATH!
+ echo [INFO] Please build the project first using IntelliJ Maven
+ pause
+ exit /b 1
+)
+
+for %%I in ("!JAR_PATH!") do (
+ echo [INFO] JAR File: %%~nxI
+ echo [INFO] File Size: %%~zI bytes
+ echo [INFO] Modified: %%~tI
+)
+
+REM 2. SSH Connection Test
+echo.
+echo =============== SSH Connection Test ===============
+ssh !SERVER_USER!@!SERVER_IP! "echo 'SSH connection OK'" 2>nul
+if !ERRORLEVEL! neq 0 (
+ echo [ERROR] SSH connection failed
+ pause
+ exit /b 1
+)
+echo [SUCCESS] SSH connection successful
+
+REM 3. Check current application status
+echo.
+echo =============== Current Application Status ===============
+echo [INFO] Checking if application is currently running...
+
+ssh !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh status" 2>nul
+set APP_STATUS=!ERRORLEVEL!
+
+if !APP_STATUS! equ 0 (
+ echo.
+ echo [WARNING] Application is currently RUNNING on the server!
+ echo.
+ echo =============== Deployment Options ===============
+ echo 1. Continue with deployment (stop → deploy → start)
+ echo 2. Cancel deployment (keep current version running)
+ echo 3. Check application details first
+ echo.
+ set /p DEPLOY_CHOICE="Choose option (1-3): "
+
+ if "!DEPLOY_CHOICE!"=="2" (
+ echo [INFO] Deployment cancelled by user
+ echo [INFO] Current application continues running
+ pause
+ exit /b 0
+ )
+
+ if "!DEPLOY_CHOICE!"=="3" (
+ echo.
+ echo =============== Application Details ===============
+ ssh !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh status"
+ echo.
+ ssh !SERVER_USER!@!SERVER_IP! "curl -s http://localhost:8090/actuator/health --max-time 5 2>/dev/null | python -m json.tool 2>/dev/null || echo 'Health endpoint not available'"
+ echo.
+ set /p FINAL_CHOICE="Proceed with deployment? (y/N): "
+ if /i not "!FINAL_CHOICE!"=="y" (
+ echo [INFO] Deployment cancelled
+ pause
+ exit /b 0
+ )
+ )
+
+ if not "!DEPLOY_CHOICE!"=="1" if not "!DEPLOY_CHOICE!"=="3" (
+ echo [ERROR] Invalid choice. Deployment cancelled.
+ pause
+ exit /b 1
+ )
+
+ echo.
+ echo [INFO] Proceeding with deployment...
+ echo [INFO] Current application will be stopped during deployment
+
+) else (
+ echo [INFO] Application is not currently running
+ echo [INFO] Proceeding with fresh deployment
+)
+
+REM 4. Create backup timestamp
+for /f "tokens=2 delims==" %%I in ('wmic os get localdatetime /value') do if not "%%I"=="" set DATETIME=%%I
+set BACKUP_TIMESTAMP=!DATETIME:~0,8!_!DATETIME:~8,6!
+
+REM 5. Create backup (if existing JAR exists)
+echo.
+echo =============== Create Backup ===============
+ssh !SERVER_USER!@!SERVER_IP! "mkdir -p !BACKUP_DIR!"
+
+ssh !SERVER_USER!@!SERVER_IP! "
+if [ -f !SERVER_PATH!/!JAR_NAME! ]; then
+ echo '[INFO] Creating backup of current version...'
+ cp !SERVER_PATH!/!JAR_NAME! !BACKUP_DIR!/!JAR_NAME!.backup.!BACKUP_TIMESTAMP!
+ echo '[SUCCESS] Backup created: !BACKUP_DIR!/!JAR_NAME!.backup.!BACKUP_TIMESTAMP!'
+ ls -la !BACKUP_DIR!/!JAR_NAME!.backup.!BACKUP_TIMESTAMP!
+else
+ echo '[INFO] No existing JAR file to backup (first deployment)'
+fi
+"
+
+REM 6. Stop application (if running)
+if !APP_STATUS! equ 0 (
+ echo.
+ echo =============== Stop Current Application ===============
+ echo [INFO] Gracefully stopping current application...
+ ssh !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh stop"
+ if !ERRORLEVEL! neq 0 (
+ echo [ERROR] Failed to stop application gracefully
+ set /p FORCE_STOP="Force stop and continue? (y/N): "
+ if /i not "!FORCE_STOP!"=="y" (
+ echo [INFO] Deployment cancelled
+ exit /b 1
+ )
+ echo [INFO] Attempting force stop...
+ ssh !SERVER_USER!@!SERVER_IP! "pkill -f !JAR_NAME! || true"
+ )
+ echo [SUCCESS] Application stopped
+)
+
+REM 7. Deploy new JAR
+echo.
+echo =============== Deploy New Version ===============
+echo [INFO] Transferring new JAR file...
+
+scp "!JAR_PATH!" !SERVER_USER!@!SERVER_IP!:!SERVER_PATH!/
+if !ERRORLEVEL! neq 0 (
+ echo [ERROR] File transfer failed
+ goto :deployment_failed
+)
+
+ssh !SERVER_USER!@!SERVER_IP! "chmod +x !SERVER_PATH!/!JAR_NAME!"
+echo [SUCCESS] New version deployed
+
+REM 8. Transfer version info
+if exist "target\version.txt" (
+ scp "target\version.txt" !SERVER_USER!@!SERVER_IP!:!SERVER_PATH!/
+)
+
+REM 9. Start new application
+echo.
+echo =============== Start New Application ===============
+echo [INFO] Starting new version...
+
+ssh !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh start"
+if !ERRORLEVEL! neq 0 (
+ echo [ERROR] Failed to start new application
+ goto :deployment_failed
+)
+
+REM 10. Verify deployment
+echo.
+echo =============== Verify Deployment ===============
+echo [INFO] Waiting for application startup (30 seconds)...
+timeout /t 30 /nobreak > nul
+
+ssh !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh status"
+if !ERRORLEVEL! neq 0 (
+ echo [ERROR] New application is not running properly
+ goto :deployment_failed
+)
+
+echo [INFO] Performing health check...
+ssh !SERVER_USER!@!SERVER_IP! "curl -f http://localhost:8090/actuator/health --max-time 10" 2>nul
+if !ERRORLEVEL! neq 0 (
+ echo [WARN] Health check failed, but application is running
+ echo [INFO] Manual verification recommended
+)
+
+REM 11. Success
+echo.
+echo =============== Deployment Successful ===============
+echo [SUCCESS] Safe deployment completed successfully!
+echo [INFO] Deployment time: !date! !time!
+echo [INFO] Backup: !JAR_NAME!.backup.!BACKUP_TIMESTAMP!
+echo [INFO] Dashboard: http://!SERVER_IP!:8090/static/admin/batch-admin.html
+echo.
+echo Quick commands:
+echo server-status.bat - Check status
+echo server-logs.bat tail - Monitor logs
+echo rollback.bat !BACKUP_TIMESTAMP! - Rollback if needed
+
+goto :end
+
+:deployment_failed
+echo.
+echo =============== Deployment Failed ===============
+echo [ERROR] Deployment failed!
+echo.
+set /p AUTO_ROLLBACK="Attempt automatic rollback? (y/N): "
+if /i "!AUTO_ROLLBACK!"=="y" (
+ if defined BACKUP_TIMESTAMP (
+ echo [INFO] Attempting rollback to: !BACKUP_TIMESTAMP!
+ call rollback.bat !BACKUP_TIMESTAMP!
+ ) else (
+ echo [ERROR] No backup available for automatic rollback
+ )
+) else (
+ echo [INFO] Manual recovery required
+ echo [INFO] Available backups:
+ ssh !SERVER_USER!@!SERVER_IP! "ls -la !BACKUP_DIR!/!JAR_NAME!.backup.* 2>/dev/null || echo 'No backups found'"
+)
+exit /b 1
+
+:end
+endlocal
\ No newline at end of file
diff --git a/scripts/diagnose-datasource-issue.sql b/scripts/diagnose-datasource-issue.sql
new file mode 100644
index 0000000..105cfc2
--- /dev/null
+++ b/scripts/diagnose-datasource-issue.sql
@@ -0,0 +1,139 @@
+-- DataSource 문제 진단 SQL
+-- 10.26.252.51과 10.29.17.90 양쪽에서 실행하여 비교
+
+-- ============================================
+-- 1. 현재 활성 연결 확인
+-- ============================================
+SELECT
+ pid,
+ usename,
+ application_name,
+ client_addr,
+ backend_start,
+ state,
+ query_start,
+ LEFT(query, 100) as current_query
+FROM pg_stat_activity
+WHERE datname IN ('mdadb', 'mpcdb2')
+AND application_name LIKE '%vessel%'
+ORDER BY backend_start DESC;
+
+-- ============================================
+-- 2. 최근 INSERT/UPDATE 통계 확인
+-- ============================================
+SELECT
+ schemaname,
+ tablename,
+ n_tup_ins as total_inserts,
+ n_tup_upd as total_updates,
+ n_tup_del as total_deletes,
+ n_live_tup as live_rows,
+ last_autoanalyze,
+ last_autovacuum
+FROM pg_stat_user_tables
+WHERE schemaname = 'signal'
+AND tablename IN (
+ 't_vessel_tracks_5min',
+ 't_vessel_tracks_hourly',
+ 't_vessel_tracks_daily',
+ 't_abnormal_tracks',
+ 't_vessel_latest_position'
+)
+ORDER BY n_tup_ins DESC;
+
+-- ============================================
+-- 3. 최근 데이터 확인 (마지막 INSERT 시간)
+-- ============================================
+
+-- 5분 집계
+SELECT
+ 'tracks_5min' as table_name,
+ COUNT(*) as total_rows,
+ MAX(time_bucket) as last_time_bucket,
+ NOW() - MAX(time_bucket) as data_delay
+FROM signal.t_vessel_tracks_5min;
+
+-- 시간 집계
+SELECT
+ 'tracks_hourly' as table_name,
+ COUNT(*) as total_rows,
+ MAX(time_bucket) as last_time_bucket,
+ NOW() - MAX(time_bucket) as data_delay
+FROM signal.t_vessel_tracks_hourly;
+
+-- 일 집계
+SELECT
+ 'tracks_daily' as table_name,
+ COUNT(*) as total_rows,
+ MAX(time_bucket) as last_time_bucket,
+ NOW() - MAX(time_bucket) as data_delay
+FROM signal.t_vessel_tracks_daily;
+
+-- 비정상 궤적
+SELECT
+ 'abnormal_tracks' as table_name,
+ COUNT(*) as total_rows,
+ MAX(time_bucket) as last_time_bucket,
+ NOW() - MAX(time_bucket) as data_delay
+FROM signal.t_abnormal_tracks;
+
+-- 최신 위치
+SELECT
+ 'latest_position' as table_name,
+ COUNT(*) as total_rows,
+ MAX(last_update) as last_update,
+ NOW() - MAX(last_update) as data_delay
+FROM signal.t_vessel_latest_position;
+
+-- ============================================
+-- 4. 특정 시간대 데이터 확인 (지난 1시간)
+-- ============================================
+SELECT
+ '5min_last_hour' as category,
+ COUNT(*) as count,
+ COUNT(DISTINCT sig_src_cd) as source_count,
+ COUNT(DISTINCT target_id) as vessel_count
+FROM signal.t_vessel_tracks_5min
+WHERE time_bucket >= NOW() - INTERVAL '1 hour';
+
+SELECT
+ 'hourly_last_day' as category,
+ COUNT(*) as count,
+ COUNT(DISTINCT sig_src_cd) as source_count,
+ COUNT(DISTINCT target_id) as vessel_count
+FROM signal.t_vessel_tracks_hourly
+WHERE time_bucket >= NOW() - INTERVAL '1 day';
+
+-- ============================================
+-- 5. 테이블 크기 확인
+-- ============================================
+SELECT
+ schemaname,
+ tablename,
+ pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS total_size,
+ pg_size_pretty(pg_relation_size(schemaname||'.'||tablename)) AS table_size,
+ pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename) - pg_relation_size(schemaname||'.'||tablename)) AS indexes_size
+FROM pg_tables
+WHERE schemaname = 'signal'
+AND tablename IN (
+ 't_vessel_tracks_5min',
+ 't_vessel_tracks_hourly',
+ 't_vessel_tracks_daily',
+ 't_abnormal_tracks',
+ 't_vessel_latest_position'
+)
+ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;
+
+-- ============================================
+-- 6. 샘플 데이터 확인 (최근 10개)
+-- ============================================
+SELECT
+ sig_src_cd,
+ target_id,
+ time_bucket,
+ point_count,
+ avg_speed,
+ max_speed
+FROM signal.t_vessel_tracks_5min
+ORDER BY time_bucket DESC
+LIMIT 10;
diff --git a/scripts/enable-sql-logging.yml b/scripts/enable-sql-logging.yml
new file mode 100644
index 0000000..e33ce7a
--- /dev/null
+++ b/scripts/enable-sql-logging.yml
@@ -0,0 +1,24 @@
+# application.yml 또는 application-prod.yml에 추가
+# 실제 SQL 에러를 확인하기 위한 로깅 설정
+
+logging:
+ level:
+ # PostgreSQL JDBC 드라이버 로그
+ org.postgresql: DEBUG
+ org.postgresql.Driver: DEBUG
+
+ # Spring JDBC 로그
+ org.springframework.jdbc: DEBUG
+ org.springframework.jdbc.core.JdbcTemplate: DEBUG
+ org.springframework.jdbc.core.StatementCreatorUtils: TRACE
+
+ # Spring Batch 로그
+ org.springframework.batch: DEBUG
+
+ # 배치 프로세서 로그
+ gc.mda.signal_batch.batch.processor: DEBUG
+ gc.mda.signal_batch.batch.processor.HourlyTrackProcessor: TRACE
+ gc.mda.signal_batch.batch.processor.DailyTrackProcessor: TRACE
+
+ # SQL 쿼리 파라미터 로깅
+ org.springframework.jdbc.core.namedparam: TRACE
diff --git a/scripts/fix-invalid-geometry.sql b/scripts/fix-invalid-geometry.sql
new file mode 100644
index 0000000..71fd0e7
--- /dev/null
+++ b/scripts/fix-invalid-geometry.sql
@@ -0,0 +1,122 @@
+-- Invalid geometry 수정 스크립트
+-- "Too few points" 에러를 해결하기 위해 단일 포인트를 2번 반복
+
+-- ========================================
+-- 1. 백업 (선택사항)
+-- ========================================
+-- CREATE TABLE signal.t_vessel_tracks_5min_backup_20251107 AS
+-- SELECT * FROM signal.t_vessel_tracks_5min
+-- WHERE track_geom IS NOT NULL AND NOT public.ST_IsValid(track_geom);
+
+-- ========================================
+-- 2. Invalid geometry 수정 (DRY RUN - 먼저 확인)
+-- ========================================
+SELECT
+ 'DRY RUN - Will fix these records' as action,
+ sig_src_cd,
+ target_id,
+ time_bucket,
+ public.ST_NPoints(track_geom) as current_points,
+ public.ST_AsText(track_geom) as current_wkt,
+ -- 수정 후 WKT 미리보기
+ CASE
+ WHEN public.ST_NPoints(track_geom) = 1 THEN
+ 'LINESTRING M(' ||
+ public.ST_X(public.ST_PointN(track_geom, 1)) || ' ' ||
+ public.ST_Y(public.ST_PointN(track_geom, 1)) || ' ' ||
+ public.ST_M(public.ST_PointN(track_geom, 1)) || ',' ||
+ public.ST_X(public.ST_PointN(track_geom, 1)) || ' ' ||
+ public.ST_Y(public.ST_PointN(track_geom, 1)) || ' ' ||
+ public.ST_M(public.ST_PointN(track_geom, 1)) || ')'
+ ELSE 'NO FIX NEEDED'
+ END as new_wkt
+FROM signal.t_vessel_tracks_5min
+WHERE track_geom IS NOT NULL
+ AND public.ST_IsValidReason(track_geom) LIKE '%Too few points%'
+LIMIT 10;
+
+-- ========================================
+-- 3. 실제 수정 (확인 후 실행)
+-- ========================================
+-- 주의: 이 쿼리는 실제 데이터를 변경합니다!
+-- DRY RUN 결과를 확인한 후 주석을 해제하고 실행하세요.
+
+/*
+UPDATE signal.t_vessel_tracks_5min
+SET track_geom = public.ST_GeomFromText(
+ 'LINESTRING M(' ||
+ public.ST_X(public.ST_PointN(track_geom, 1)) || ' ' ||
+ public.ST_Y(public.ST_PointN(track_geom, 1)) || ' ' ||
+ public.ST_M(public.ST_PointN(track_geom, 1)) || ',' ||
+ public.ST_X(public.ST_PointN(track_geom, 1)) || ' ' ||
+ public.ST_Y(public.ST_PointN(track_geom, 1)) || ' ' ||
+ public.ST_M(public.ST_PointN(track_geom, 1)) || ')',
+ 4326
+)
+WHERE track_geom IS NOT NULL
+ AND public.ST_NPoints(track_geom) = 1
+ AND public.ST_IsValidReason(track_geom) LIKE '%Too few points%';
+*/
+
+-- ========================================
+-- 4. 수정 결과 확인
+-- ========================================
+SELECT
+ 'AFTER FIX' as status,
+ COUNT(*) as total_records,
+ COUNT(CASE WHEN public.ST_IsValid(track_geom) THEN 1 END) as valid_count,
+ COUNT(CASE WHEN NOT public.ST_IsValid(track_geom) THEN 1 END) as invalid_count
+FROM signal.t_vessel_tracks_5min
+WHERE track_geom IS NOT NULL;
+
+-- ========================================
+-- 5. 여전히 Invalid한 geometry 확인
+-- ========================================
+SELECT
+ 'REMAINING INVALID' as status,
+ public.ST_IsValidReason(track_geom) as reason,
+ COUNT(*) as count
+FROM signal.t_vessel_tracks_5min
+WHERE track_geom IS NOT NULL
+ AND NOT public.ST_IsValid(track_geom)
+GROUP BY public.ST_IsValidReason(track_geom);
+
+-- ========================================
+-- 6. Hourly 테이블도 동일하게 수정 (필요시)
+-- ========================================
+/*
+UPDATE signal.t_vessel_tracks_hourly
+SET track_geom = public.ST_GeomFromText(
+ 'LINESTRING M(' ||
+ public.ST_X(public.ST_PointN(track_geom, 1)) || ' ' ||
+ public.ST_Y(public.ST_PointN(track_geom, 1)) || ' ' ||
+ public.ST_M(public.ST_PointN(track_geom, 1)) || ',' ||
+ public.ST_X(public.ST_PointN(track_geom, 1)) || ' ' ||
+ public.ST_Y(public.ST_PointN(track_geom, 1)) || ' ' ||
+ public.ST_M(public.ST_PointN(track_geom, 1)) || ')',
+ 4326
+)
+WHERE track_geom IS NOT NULL
+ AND public.ST_NPoints(track_geom) = 1
+ AND public.ST_IsValidReason(track_geom) LIKE '%Too few points%';
+*/
+
+-- ========================================
+-- 7. Daily 테이블도 동일하게 수정 (필요시)
+-- ========================================
+/*
+UPDATE signal.t_vessel_tracks_daily
+SET track_geom = public.ST_GeomFromText(
+ 'LINESTRING M(' ||
+ public.ST_X(public.ST_PointN(track_geom, 1)) || ' ' ||
+ public.ST_Y(public.ST_PointN(track_geom, 1)) || ' ' ||
+ public.ST_M(public.ST_PointN(track_geom, 1)) || ',' ||
+ public.ST_X(public.ST_PointN(track_geom, 1)) || ' ' ||
+ public.ST_Y(public.ST_PointN(track_geom, 1)) || ' ' ||
+ public.ST_M(public.ST_PointN(track_geom, 1)) || ')',
+ 4326
+)
+WHERE track_geom IS NOT NULL
+ AND public.ST_NPoints(track_geom) = 1
+ AND public.ST_IsValidReason(track_geom) LIKE '%Too few points%';
+*/
diff --git a/scripts/fix-postgis-schema.ps1 b/scripts/fix-postgis-schema.ps1
new file mode 100644
index 0000000..c5f7191
--- /dev/null
+++ b/scripts/fix-postgis-schema.ps1
@@ -0,0 +1,24 @@
+# PostGIS 함수 스키마 명시 스크립트
+# ST_GeomFromText -> public.ST_GeomFromText로 변경
+
+$javaDir = "C:\Users\lht87\IdeaProjects\signal_batch\src\main\java"
+$files = Get-ChildItem -Path $javaDir -Filter "*.java" -Recurse
+
+$count = 0
+foreach ($file in $files) {
+ $content = Get-Content $file.FullName -Raw -Encoding UTF8
+
+ # ST_GeomFromText를 public.ST_GeomFromText로 변경 (이미 public.가 붙어있지 않은 경우만)
+ $newContent = $content -replace '(?/dev/null
+
+if [ $? -eq 0 ]; then
+ echo -e "${GREEN}✓ Full backup created: $BACKUP_FILE${NC}"
+else
+ echo -e "${YELLOW}⚠ Backup may have failed, but continuing...${NC}"
+fi
+
+echo ""
+echo "2. Stopping application if running..."
+
+# PID 확인
+if [ -f "/devdata/apps/bridge-db-monitoring/vessel-batch.pid" ]; then
+ PID=$(cat /devdata/apps/bridge-db-monitoring/vessel-batch.pid)
+ if kill -0 $PID 2>/dev/null; then
+ echo " Stopping application (PID: $PID)..."
+ kill -15 $PID
+ sleep 5
+ if kill -0 $PID 2>/dev/null; then
+ echo " Force killing application..."
+ kill -9 $PID
+ fi
+ fi
+fi
+
+echo ""
+echo "3. FORCE resetting batch metadata tables..."
+
+# CASCADE를 사용한 강제 초기화
+psql -h $DB_HOST -p $DB_PORT -U $DB_USER -d $DB_NAME << EOF
+-- 트랜잭션 시작
+BEGIN;
+
+-- 외래 키 제약 임시 비활성화
+SET session_replication_role = 'replica';
+
+-- 모든 배치 테이블 강제 초기화
+TRUNCATE TABLE $DB_SCHEMA.batch_step_execution_context CASCADE;
+TRUNCATE TABLE $DB_SCHEMA.batch_step_execution CASCADE;
+TRUNCATE TABLE $DB_SCHEMA.batch_job_execution_context CASCADE;
+TRUNCATE TABLE $DB_SCHEMA.batch_job_execution_params CASCADE;
+TRUNCATE TABLE $DB_SCHEMA.batch_job_execution CASCADE;
+TRUNCATE TABLE $DB_SCHEMA.batch_job_instance CASCADE;
+
+-- 시퀀스 강제 리셋
+ALTER SEQUENCE IF EXISTS $DB_SCHEMA.batch_job_execution_seq RESTART WITH 1;
+ALTER SEQUENCE IF EXISTS $DB_SCHEMA.batch_job_seq RESTART WITH 1;
+ALTER SEQUENCE IF EXISTS $DB_SCHEMA.batch_step_execution_seq RESTART WITH 1;
+
+-- 외래 키 제약 재활성화
+SET session_replication_role = 'origin';
+
+-- 커밋
+COMMIT;
+
+-- 통계 업데이트
+ANALYZE $DB_SCHEMA.batch_job_instance;
+ANALYZE $DB_SCHEMA.batch_job_execution;
+ANALYZE $DB_SCHEMA.batch_job_execution_params;
+ANALYZE $DB_SCHEMA.batch_job_execution_context;
+ANALYZE $DB_SCHEMA.batch_step_execution;
+ANALYZE $DB_SCHEMA.batch_step_execution_context;
+EOF
+
+if [ $? -eq 0 ]; then
+ echo -e "${GREEN}✓ Batch metadata tables FORCE reset successfully${NC}"
+else
+ echo -e "${RED}✗ Force reset encountered errors, but may have partially succeeded${NC}"
+fi
+
+echo ""
+echo "4. Verifying force reset..."
+
+# 각 테이블 개별 확인
+for table in batch_job_instance batch_job_execution batch_job_execution_params batch_job_execution_context batch_step_execution batch_step_execution_context; do
+ COUNT=$(psql -h $DB_HOST -p $DB_PORT -U $DB_USER -d $DB_NAME -t -c "
+ SELECT COUNT(*) FROM $DB_SCHEMA.$table;" 2>/dev/null | xargs)
+
+ if [ -z "$COUNT" ]; then
+ COUNT="ERROR"
+ fi
+
+ if [ "$COUNT" = "0" ]; then
+ echo -e " ${GREEN}✓${NC} $table: $COUNT records"
+ elif [ "$COUNT" = "ERROR" ]; then
+ echo -e " ${RED}✗${NC} $table: Could not query"
+ else
+ echo -e " ${YELLOW}⚠${NC} $table: $COUNT records remaining"
+ fi
+done
+
+echo ""
+echo "5. Optional: Clear ALL aggregation data (complete fresh start)"
+read -p "Do you want to clear ALL aggregation data too? (yes/no): " CLEAR_ALL
+
+if [ "$CLEAR_ALL" = "yes" ]; then
+ echo ""
+ echo "Clearing ALL aggregation data..."
+
+ psql -h $DB_HOST -p $DB_PORT -U $DB_USER -d $DB_NAME << EOF
+BEGIN;
+
+-- 강제로 모든 집계 데이터 초기화
+SET session_replication_role = 'replica';
+
+-- 최신 위치 정보
+TRUNCATE TABLE signal.t_vessel_latest_position CASCADE;
+
+-- 모든 파티션 테이블 초기화
+DO \$\$
+DECLARE
+ r RECORD;
+BEGIN
+ FOR r IN
+ SELECT tablename
+ FROM pg_tables
+ WHERE schemaname = 'signal'
+ AND (tablename LIKE 't_tile_summary_%'
+ OR tablename LIKE 't_area_statistics_%'
+ OR tablename LIKE 't_vessel_daily_tracks_%')
+ LOOP
+ EXECUTE 'TRUNCATE TABLE signal.' || r.tablename || ' CASCADE';
+ RAISE NOTICE 'Truncated table: signal.%', r.tablename;
+ END LOOP;
+END\$\$;
+
+-- 배치 성능 메트릭
+TRUNCATE TABLE signal.t_batch_performance_metrics CASCADE;
+
+SET session_replication_role = 'origin';
+
+COMMIT;
+EOF
+
+ echo -e "${GREEN}✓ All aggregation data cleared${NC}"
+fi
+
+echo ""
+echo "================================================"
+echo "FORCE Reset Complete!"
+echo ""
+echo -e "${YELLOW}IMPORTANT: The application needs to be restarted!${NC}"
+echo ""
+echo "Next steps:"
+echo "1. Start the application:"
+echo " cd /devdata/apps/bridge-db-monitoring"
+echo " ./run-on-query-server.sh"
+echo ""
+echo "2. Verify health:"
+echo " curl http://localhost:8090/actuator/health"
+echo ""
+echo "3. Start fresh batch job:"
+echo " curl -X POST http://localhost:8090/admin/batch/job/run \\"
+echo " -H 'Content-Type: application/json' \\"
+echo " -d '{\"jobName\": \"vesselAggregationJob\", \"parameters\": {\"tileLevel\": 1}}'"
+echo ""
+echo "Full backup saved to: $BACKUP_FILE"
+echo "================================================"
+
+# 자동 시작 옵션
+echo ""
+read -p "Do you want to start the application now? (yes/no): " START_NOW
+
+if [ "$START_NOW" = "yes" ]; then
+ echo "Starting application..."
+ cd /devdata/apps/bridge-db-monitoring
+ ./run-on-query-server.sh
+fi
diff --git a/scripts/install-postgis-in-signal-schema.sql b/scripts/install-postgis-in-signal-schema.sql
new file mode 100644
index 0000000..5bc71fc
--- /dev/null
+++ b/scripts/install-postgis-in-signal-schema.sql
@@ -0,0 +1,59 @@
+-- PostGIS를 signal 스키마에 설치하는 스크립트
+-- 10.29.17.90 서버의 mpcdb2 데이터베이스에서 실행
+
+-- 방법 1: signal 스키마에 PostGIS extension 생성 (권장)
+-- 이미 public에 설치되어 있다면, signal 스키마에 함수들을 복사하는 방식으로 접근
+
+-- 현재 PostGIS 상태 확인
+SELECT extname, extversion, nspname
+FROM pg_extension e
+JOIN pg_namespace n ON e.extnamespace = n.oid
+WHERE extname LIKE 'post%';
+
+-- 옵션 1: signal 스키마에 PostGIS 함수 wrapper 생성
+-- (public 스키마의 함수를 호출하는 wrapper)
+CREATE OR REPLACE FUNCTION signal.ST_GeomFromText(text)
+RETURNS geometry
+AS $$
+ SELECT public.ST_GeomFromText($1);
+$$ LANGUAGE SQL IMMUTABLE STRICT PARALLEL SAFE;
+
+CREATE OR REPLACE FUNCTION signal.ST_GeomFromText(text, integer)
+RETURNS geometry
+AS $$
+ SELECT public.ST_GeomFromText($1, $2);
+$$ LANGUAGE SQL IMMUTABLE STRICT PARALLEL SAFE;
+
+CREATE OR REPLACE FUNCTION signal.ST_Length(geometry)
+RETURNS double precision
+AS $$
+ SELECT public.ST_Length($1);
+$$ LANGUAGE SQL IMMUTABLE STRICT PARALLEL SAFE;
+
+CREATE OR REPLACE FUNCTION signal.ST_MakeLine(geometry[])
+RETURNS geometry
+AS $$
+ SELECT public.ST_MakeLine($1);
+$$ LANGUAGE SQL IMMUTABLE STRICT PARALLEL SAFE;
+
+-- 자주 사용하는 다른 함수들도 추가
+CREATE OR REPLACE FUNCTION signal.ST_X(geometry)
+RETURNS double precision
+AS $$
+ SELECT public.ST_X($1);
+$$ LANGUAGE SQL IMMUTABLE STRICT PARALLEL SAFE;
+
+CREATE OR REPLACE FUNCTION signal.ST_Y(geometry)
+RETURNS double precision
+AS $$
+ SELECT public.ST_Y($1);
+$$ LANGUAGE SQL IMMUTABLE STRICT PARALLEL SAFE;
+
+CREATE OR REPLACE FUNCTION signal.ST_M(geometry)
+RETURNS double precision
+AS $$
+ SELECT public.ST_M($1);
+$$ LANGUAGE SQL IMMUTABLE STRICT PARALLEL SAFE;
+
+-- 검증
+SELECT signal.ST_GeomFromText('POINT(126.0 37.0)', 4326);
diff --git a/scripts/list-failed-jobs.sql b/scripts/list-failed-jobs.sql
new file mode 100644
index 0000000..617510e
--- /dev/null
+++ b/scripts/list-failed-jobs.sql
@@ -0,0 +1,85 @@
+-- 실패한 배치 Job 조회 및 분석
+
+-- 1. 실패한 Job 목록 (최근 50개)
+SELECT
+ '=== FAILED JOBS (Recent 50) ===' as category,
+ bje.JOB_EXECUTION_ID,
+ bji.JOB_NAME,
+ bje.START_TIME,
+ bje.END_TIME,
+ bje.STATUS,
+ bje.EXIT_CODE,
+ LEFT(bje.EXIT_MESSAGE, 100) as EXIT_MESSAGE_SHORT,
+ -- Job Parameters 표시
+ (SELECT string_agg(PARAMETER_NAME || '=' || PARAMETER_VALUE, ', ')
+ FROM BATCH_JOB_EXECUTION_PARAMS
+ WHERE JOB_EXECUTION_ID = bje.JOB_EXECUTION_ID
+ AND IDENTIFYING = 'Y') as PARAMETERS
+FROM BATCH_JOB_EXECUTION bje
+JOIN BATCH_JOB_INSTANCE bji ON bje.JOB_INSTANCE_ID = bji.JOB_INSTANCE_ID
+WHERE bje.STATUS = 'FAILED'
+ORDER BY bje.JOB_EXECUTION_ID DESC
+LIMIT 50;
+
+-- 2. 실패한 Step 상세 정보
+SELECT
+ '=== FAILED STEPS ===' as category,
+ bse.STEP_EXECUTION_ID,
+ bse.JOB_EXECUTION_ID,
+ bji.JOB_NAME,
+ bse.STEP_NAME,
+ bse.STATUS,
+ bse.READ_COUNT,
+ bse.WRITE_COUNT,
+ bse.COMMIT_COUNT,
+ bse.ROLLBACK_COUNT,
+ bse.READ_SKIP_COUNT,
+ bse.PROCESS_SKIP_COUNT,
+ bse.WRITE_SKIP_COUNT,
+ LEFT(bse.EXIT_MESSAGE, 100) as EXIT_MESSAGE_SHORT
+FROM BATCH_STEP_EXECUTION bse
+JOIN BATCH_JOB_EXECUTION bje ON bse.JOB_EXECUTION_ID = bje.JOB_EXECUTION_ID
+JOIN BATCH_JOB_INSTANCE bji ON bje.JOB_INSTANCE_ID = bji.JOB_INSTANCE_ID
+WHERE bse.STATUS = 'FAILED'
+ORDER BY bse.STEP_EXECUTION_ID DESC
+LIMIT 50;
+
+-- 3. Job 타입별 실패 통계
+SELECT
+ '=== FAILURE STATISTICS BY JOB ===' as category,
+ bji.JOB_NAME,
+ COUNT(*) as FAILED_COUNT,
+ MAX(bje.END_TIME) as LAST_FAILURE_TIME
+FROM BATCH_JOB_EXECUTION bje
+JOIN BATCH_JOB_INSTANCE bji ON bje.JOB_INSTANCE_ID = bji.JOB_INSTANCE_ID
+WHERE bje.STATUS = 'FAILED'
+GROUP BY bji.JOB_NAME
+ORDER BY FAILED_COUNT DESC;
+
+-- 4. Step별 실패 통계
+SELECT
+ '=== FAILURE STATISTICS BY STEP ===' as category,
+ STEP_NAME,
+ COUNT(*) as FAILED_COUNT,
+ MAX(END_TIME) as LAST_FAILURE_TIME
+FROM BATCH_STEP_EXECUTION
+WHERE STATUS = 'FAILED'
+GROUP BY STEP_NAME
+ORDER BY FAILED_COUNT DESC;
+
+-- 5. 최근 24시간 실패 현황
+SELECT
+ '=== LAST 24 HOURS ===' as category,
+ COUNT(*) as FAILED_JOBS_24H
+FROM BATCH_JOB_EXECUTION
+WHERE STATUS = 'FAILED'
+ AND START_TIME >= CURRENT_TIMESTAMP - INTERVAL '24 hours';
+
+-- 6. 전체 상태 요약
+SELECT
+ '=== STATUS SUMMARY ===' as category,
+ STATUS,
+ COUNT(*) as COUNT
+FROM BATCH_JOB_EXECUTION
+GROUP BY STATUS
+ORDER BY COUNT DESC;
diff --git a/scripts/mark-failed-jobs-as-abandoned.sql b/scripts/mark-failed-jobs-as-abandoned.sql
new file mode 100644
index 0000000..08922ab
--- /dev/null
+++ b/scripts/mark-failed-jobs-as-abandoned.sql
@@ -0,0 +1,75 @@
+-- 실패한 배치 Job과 Step을 ABANDONED 상태로 변경
+-- 주의: 이 스크립트는 실패한 job을 강제로 종료시킵니다.
+-- 재시도가 필요한 경우 이 스크립트를 실행하지 마세요.
+
+-- 1. 현재 실패 상태 확인
+SELECT
+ '=== BEFORE UPDATE ===' as status,
+ COUNT(*) as failed_jobs
+FROM BATCH_JOB_EXECUTION
+WHERE STATUS = 'FAILED';
+
+SELECT
+ '=== BEFORE UPDATE ===' as status,
+ COUNT(*) as failed_steps
+FROM BATCH_STEP_EXECUTION
+WHERE STATUS = 'FAILED';
+
+-- 2. 실패한 STEP을 ABANDONED로 변경
+UPDATE BATCH_STEP_EXECUTION
+SET
+ STATUS = 'ABANDONED',
+ EXIT_CODE = 'ABANDONED',
+ EXIT_MESSAGE = 'Manually marked as ABANDONED - Original status: FAILED',
+ END_TIME = COALESCE(END_TIME, CURRENT_TIMESTAMP),
+ LAST_UPDATED = CURRENT_TIMESTAMP
+WHERE STATUS = 'FAILED';
+
+-- 3. 실패한 JOB을 ABANDONED로 변경
+UPDATE BATCH_JOB_EXECUTION
+SET
+ STATUS = 'ABANDONED',
+ EXIT_CODE = 'ABANDONED',
+ EXIT_MESSAGE = 'Manually marked as ABANDONED - Original status: FAILED',
+ END_TIME = COALESCE(END_TIME, CURRENT_TIMESTAMP),
+ LAST_UPDATED = CURRENT_TIMESTAMP
+WHERE STATUS = 'FAILED';
+
+-- 4. 업데이트 후 상태 확인
+SELECT
+ '=== AFTER UPDATE ===' as status,
+ COUNT(*) as failed_jobs
+FROM BATCH_JOB_EXECUTION
+WHERE STATUS = 'FAILED';
+
+SELECT
+ '=== AFTER UPDATE ===' as status,
+ COUNT(*) as failed_steps
+FROM BATCH_STEP_EXECUTION
+WHERE STATUS = 'FAILED';
+
+SELECT
+ '=== ABANDONED COUNT ===' as status,
+ COUNT(*) as abandoned_jobs
+FROM BATCH_JOB_EXECUTION
+WHERE STATUS = 'ABANDONED';
+
+SELECT
+ '=== ABANDONED COUNT ===' as status,
+ COUNT(*) as abandoned_steps
+FROM BATCH_STEP_EXECUTION
+WHERE STATUS = 'ABANDONED';
+
+-- 5. 최근 ABANDONED 처리된 Job 목록 확인
+SELECT
+ JOB_EXECUTION_ID,
+ JOB_INSTANCE_ID,
+ START_TIME,
+ END_TIME,
+ STATUS,
+ EXIT_CODE,
+ EXIT_MESSAGE
+FROM BATCH_JOB_EXECUTION
+WHERE STATUS = 'ABANDONED'
+ORDER BY JOB_EXECUTION_ID DESC
+LIMIT 10;
diff --git a/scripts/mark-specific-job-as-abandoned.sql b/scripts/mark-specific-job-as-abandoned.sql
new file mode 100644
index 0000000..f6fc0d5
--- /dev/null
+++ b/scripts/mark-specific-job-as-abandoned.sql
@@ -0,0 +1,75 @@
+-- 특정 JOB_EXECUTION_ID를 ABANDONED로 변경
+-- 사용법: :job_execution_id 를 실제 ID로 변경 후 실행
+
+-- 변수 설정 (PostgreSQL에서는 psql 변수 사용)
+-- psql -v job_id=12345 -f mark-specific-job-as-abandoned.sql
+-- 또는 아래 :job_execution_id 를 직접 숫자로 변경
+
+-- 1. 해당 Job 상태 확인
+SELECT
+ '=== BEFORE UPDATE ===' as status,
+ JOB_EXECUTION_ID,
+ JOB_INSTANCE_ID,
+ START_TIME,
+ END_TIME,
+ STATUS,
+ EXIT_CODE,
+ EXIT_MESSAGE
+FROM BATCH_JOB_EXECUTION
+WHERE JOB_EXECUTION_ID = :job_execution_id;
+
+-- 2. 해당 Job의 Step들 상태 확인
+SELECT
+ '=== STEPS BEFORE UPDATE ===' as status,
+ STEP_EXECUTION_ID,
+ STEP_NAME,
+ STATUS,
+ EXIT_CODE
+FROM BATCH_STEP_EXECUTION
+WHERE JOB_EXECUTION_ID = :job_execution_id
+ORDER BY STEP_EXECUTION_ID;
+
+-- 3. Step을 ABANDONED로 변경
+UPDATE BATCH_STEP_EXECUTION
+SET
+ STATUS = 'ABANDONED',
+ EXIT_CODE = 'ABANDONED',
+ EXIT_MESSAGE = 'Manually marked as ABANDONED - Original status: ' || STATUS,
+ END_TIME = COALESCE(END_TIME, CURRENT_TIMESTAMP),
+ LAST_UPDATED = CURRENT_TIMESTAMP
+WHERE JOB_EXECUTION_ID = :job_execution_id
+ AND STATUS IN ('FAILED', 'STARTED', 'STOPPING');
+
+-- 4. Job을 ABANDONED로 변경
+UPDATE BATCH_JOB_EXECUTION
+SET
+ STATUS = 'ABANDONED',
+ EXIT_CODE = 'ABANDONED',
+ EXIT_MESSAGE = 'Manually marked as ABANDONED - Original status: ' || STATUS,
+ END_TIME = COALESCE(END_TIME, CURRENT_TIMESTAMP),
+ LAST_UPDATED = CURRENT_TIMESTAMP
+WHERE JOB_EXECUTION_ID = :job_execution_id
+ AND STATUS IN ('FAILED', 'STARTED', 'STOPPING');
+
+-- 5. 업데이트 결과 확인
+SELECT
+ '=== AFTER UPDATE ===' as status,
+ JOB_EXECUTION_ID,
+ JOB_INSTANCE_ID,
+ START_TIME,
+ END_TIME,
+ STATUS,
+ EXIT_CODE,
+ EXIT_MESSAGE
+FROM BATCH_JOB_EXECUTION
+WHERE JOB_EXECUTION_ID = :job_execution_id;
+
+SELECT
+ '=== STEPS AFTER UPDATE ===' as status,
+ STEP_EXECUTION_ID,
+ STEP_NAME,
+ STATUS,
+ EXIT_CODE
+FROM BATCH_STEP_EXECUTION
+WHERE JOB_EXECUTION_ID = :job_execution_id
+ORDER BY STEP_EXECUTION_ID;
diff --git a/scripts/monitor-query-server.sh b/scripts/monitor-query-server.sh
new file mode 100644
index 0000000..6ce7a83
--- /dev/null
+++ b/scripts/monitor-query-server.sh
@@ -0,0 +1,212 @@
+#!/bin/bash
+
+# Query DB 서버 리소스 모니터링 스크립트
+# PostgreSQL과 배치 애플리케이션 리소스 경합 모니터링
+
+# 애플리케이션 경로
+APP_HOME="/devdata/apps/bridge-db-monitoring"
+LOG_DIR="$APP_HOME/logs"
+mkdir -p $LOG_DIR
+
+# Java 경로 (jstat 명령어용)
+JAVA_HOME="/devdata/apps/jdk-17.0.8"
+JSTAT="$JAVA_HOME/bin/jstat"
+
+# 색상 코드
+RED='\033[0;31m'
+GREEN='\033[0;32m'
+YELLOW='\033[1;33m'
+NC='\033[0m' # No Color
+
+# CSV 헤더 생성 (첫 실행 시)
+if [ ! -f "$LOG_DIR/resource-monitor.csv" ]; then
+ echo "timestamp,pg_cpu,java_cpu,delay_minutes,throughput,collect_connections" > $LOG_DIR/resource-monitor.csv
+fi
+
+while true; do
+ clear
+ echo "========================================="
+ echo "Vessel Batch Resource Monitor"
+ echo "Time: $(date)"
+ echo "App Home: $APP_HOME"
+ echo "========================================="
+
+ # PID 파일에서 프로세스 ID 읽기
+ if [ -f "$APP_HOME/vessel-batch.pid" ]; then
+ JAVA_PID=$(cat $APP_HOME/vessel-batch.pid)
+ else
+ JAVA_PID=$(pgrep -f "vessel-batch-aggregation.jar")
+ fi
+
+ # 1. CPU 사용률
+ echo -e "\n${GREEN}[CPU Usage]${NC}"
+ # PostgreSQL CPU 사용률
+ PG_CPU=$(ps aux | grep postgres | grep -v grep | awk '{sum+=$3} END {printf "%.1f", sum}' || echo "0")
+ if [ -z "$PG_CPU" ]; then PG_CPU="0"; fi
+ echo "PostgreSQL Total: ${PG_CPU}%"
+
+ # Java 배치 CPU 사용률
+ if [ ! -z "$JAVA_PID" ] && kill -0 $JAVA_PID 2>/dev/null; then
+ JAVA_CPU=$(ps aux | grep $JAVA_PID | grep -v grep | awk '{printf "%.1f", $3}' || echo "0")
+ if [ -z "$JAVA_CPU" ]; then JAVA_CPU="0"; fi
+ echo "Batch Application: ${JAVA_CPU}% (PID: $JAVA_PID)"
+ else
+ JAVA_CPU="0.0"
+ echo "Batch Application: Not Running"
+ fi
+
+ # Top 5 PostgreSQL 프로세스
+ echo -e "\nTop PostgreSQL Processes:"
+ ps aux | grep postgres | grep -v grep | sort -k3 -nr | head -5 | awk '{printf " %-8s %5s%% %s\n", $2, $3, $11}'
+
+ # 2. 메모리 사용률
+ echo -e "\n${GREEN}[Memory Usage]${NC}"
+ free -h | grep -E "Mem|Swap"
+
+ # PostgreSQL 공유 메모리
+ PG_SHARED=$(ipcs -m 2>/dev/null | grep postgres | awk '{sum+=$5} END {printf "%.1f", sum/1024/1024/1024}')
+ if [ ! -z "$PG_SHARED" ]; then
+ echo "PostgreSQL Shared Memory: ${PG_SHARED}GB"
+ fi
+
+ # Java 힙 사용률
+ if [ ! -z "$JAVA_PID" ] && kill -0 $JAVA_PID 2>/dev/null; then
+ if [ -x "$JSTAT" ]; then
+ JAVA_HEAP=$($JSTAT -gc $JAVA_PID 2>/dev/null | tail -1 | awk '{printf "%.1f", ($3+$4+$6+$8)/1024}')
+ if [ ! -z "$JAVA_HEAP" ]; then
+ echo "Java Heap Used: ${JAVA_HEAP}MB"
+ fi
+ fi
+ fi
+
+ # 3. 디스크 I/O
+ echo -e "\n${GREEN}[Disk I/O]${NC}"
+ iostat -x 1 2 2>/dev/null | grep -A5 "Device" | tail -n +7 | head -5
+
+ # 4. PostgreSQL 연결 상태
+ echo -e "\n${GREEN}[Database Connections]${NC}"
+ # psql 명령어가 PATH에 없을 수 있으므로 전체 경로 사용 시도
+ if command -v psql >/dev/null 2>&1; then
+ PSQL_CMD="psql"
+ else
+ # 일반적인 PostgreSQL 설치 경로들
+ for path in /usr/pgsql-*/bin/psql /usr/bin/psql /usr/local/bin/psql; do
+ if [ -x "$path" ]; then
+ PSQL_CMD="$path"
+ break
+ fi
+ done
+ fi
+
+ if [ ! -z "$PSQL_CMD" ]; then
+ $PSQL_CMD -h localhost -U mda -d mdadb -c "
+ SELECT
+ application_name,
+ client_addr,
+ COUNT(*) as connections,
+ string_agg(DISTINCT state, ', ') as states
+ FROM pg_stat_activity
+ WHERE datname = 'mdadb'
+ GROUP BY application_name, client_addr
+ ORDER BY connections DESC
+ LIMIT 10;" 2>/dev/null || echo "Unable to query database connections"
+ else
+ echo "psql command not found"
+ fi
+
+ # 5. 배치 처리 상태
+ echo -e "\n${GREEN}[Batch Processing Status]${NC}"
+
+ if [ ! -z "$PSQL_CMD" ]; then
+ # 처리 지연 확인
+ DELAY=$($PSQL_CMD -h localhost -U mda -d mdadb -t -c "
+ SELECT COALESCE(EXTRACT(EPOCH FROM (NOW() - MAX(last_update))) / 60, 0)::numeric(10,1)
+ FROM signal.t_vessel_latest_position;" 2>/dev/null | xargs)
+
+ if [ ! -z "$DELAY" ] && [ "$DELAY" != "" ]; then
+ if [ $(echo "$DELAY > 120" | bc 2>/dev/null || echo 0) -eq 1 ]; then
+ echo -e "${RED}Processing Delay: ${DELAY} minutes ⚠️${NC}"
+ elif [ $(echo "$DELAY > 60" | bc 2>/dev/null || echo 0) -eq 1 ]; then
+ echo -e "${YELLOW}Processing Delay: ${DELAY} minutes ⚠️${NC}"
+ else
+ echo -e "${GREEN}Processing Delay: ${DELAY} minutes ✓${NC}"
+ fi
+ else
+ DELAY="0"
+ echo "Processing Delay: Unable to determine"
+ fi
+
+ # 최근 처리량
+ THROUGHPUT=$($PSQL_CMD -h localhost -U mda -d mdadb -t -c "
+ SELECT COALESCE(COUNT(*), 0)
+ FROM signal.t_vessel_latest_position
+ WHERE last_update > NOW() - INTERVAL '1 minute';" 2>/dev/null | xargs)
+
+ if [ ! -z "$THROUGHPUT" ]; then
+ echo "Throughput: ${THROUGHPUT} vessels/minute"
+ else
+ THROUGHPUT="0"
+ echo "Throughput: Unable to determine"
+ fi
+ else
+ DELAY="0"
+ THROUGHPUT="0"
+ echo "Database metrics unavailable (psql not found)"
+ fi
+
+ # 6. 네트워크 연결 (수집 DB)
+ echo -e "\n${GREEN}[Network to Collect DB]${NC}"
+ COLLECT_CONN=$(ss -tunp 2>/dev/null | grep :5432 | grep 10.26.252.39 | wc -l)
+ echo "Active connections to collect DB: ${COLLECT_CONN}"
+
+ # 네트워크 통계
+ if [ "$COLLECT_CONN" -gt 0 ]; then
+ ss -i dst 10.26.252.39:5432 2>/dev/null | grep -E "rtt|cwnd" | head -3
+ fi
+
+ # 7. 애플리케이션 로그 최근 에러
+ echo -e "\n${GREEN}[Recent Application Errors]${NC}"
+ if [ -f "$LOG_DIR/app.log" ]; then
+ ERROR_COUNT=$(grep -c "ERROR" $LOG_DIR/app.log 2>/dev/null || echo 0)
+ echo "Total Errors in Log: $ERROR_COUNT"
+
+ # 최근 5개 에러 표시
+ if [ "$ERROR_COUNT" -gt 0 ]; then
+ echo "Recent Errors:"
+ grep "ERROR" $LOG_DIR/app.log | tail -5 | cut -c1-120
+ fi
+ else
+ echo "Log file not found at $LOG_DIR/app.log"
+ fi
+
+ # 8. 경고 사항
+ echo -e "\n${YELLOW}[Warnings]${NC}"
+
+ # CPU 경고
+ TOTAL_CPU=$(echo "$PG_CPU + $JAVA_CPU" | bc 2>/dev/null || echo "0")
+ if [ ! -z "$TOTAL_CPU" ] && [ "$TOTAL_CPU" != "0" ]; then
+ if [ $(echo "$TOTAL_CPU > 80" | bc 2>/dev/null || echo 0) -eq 1 ]; then
+ echo -e "${RED}⚠ High CPU usage: ${TOTAL_CPU}%${NC}"
+ fi
+ fi
+
+ # 메모리 경고
+ MEM_AVAILABLE=$(free -g | grep Mem | awk '{print $7}')
+ if [ ! -z "$MEM_AVAILABLE" ] && [ "$MEM_AVAILABLE" -lt 10 ]; then
+ echo -e "${RED}⚠ Low available memory: ${MEM_AVAILABLE}GB${NC}"
+ fi
+
+ # 처리 지연 경고
+ if [ ! -z "$DELAY" ] && [ "$DELAY" != "0" ]; then
+ if [ $(echo "$DELAY > 120" | bc 2>/dev/null || echo 0) -eq 1 ]; then
+ echo -e "${RED}⚠ Processing delay exceeds 2 hours!${NC}"
+ fi
+ fi
+
+ # 로그에 기록
+ echo "$(date '+%Y-%m-%d %H:%M:%S'),${PG_CPU},${JAVA_CPU},${DELAY},${THROUGHPUT},${COLLECT_CONN}" >> $LOG_DIR/resource-monitor.csv
+
+ # 다음 업데이트까지 대기
+ echo -e "\n${GREEN}Next update in 30 seconds... (Ctrl+C to exit)${NC}"
+ sleep 30
+done
diff --git a/scripts/monitor-realtime.sh b/scripts/monitor-realtime.sh
new file mode 100644
index 0000000..24be36a
--- /dev/null
+++ b/scripts/monitor-realtime.sh
@@ -0,0 +1,154 @@
+#!/bin/bash
+
+# 실시간 시스템 모니터링 스크립트
+# 부하 테스트 중 시스템 상태를 실시간으로 모니터링
+
+# 색상 정의
+RED='\033[0;31m'
+GREEN='\033[0;32m'
+YELLOW='\033[1;33m'
+BLUE='\033[0;34m'
+NC='\033[0m' # No Color
+
+# 애플리케이션 정보
+APP_HOST="10.26.252.48"
+APP_PORT="8090"
+DB_HOST_COLLECT="10.26.252.39"
+DB_HOST_QUERY="10.26.252.48"
+DB_PORT="5432"
+DB_NAME="mdadb"
+DB_USER="mdauser"
+
+# 화면 지우기
+clear_screen() {
+ clear
+}
+
+# 헤더 출력
+print_header() {
+ echo -e "${BLUE}========================================${NC}"
+ echo -e "${BLUE} 선박 궤적 시스템 실시간 모니터링 ${NC}"
+ echo -e "${BLUE}========================================${NC}"
+ echo -e "시간: $(date '+%Y-%m-%d %H:%M:%S')"
+ echo ""
+}
+
+# 애플리케이션 상태 확인
+check_app_status() {
+ echo -e "${GREEN}[애플리케이션 상태]${NC}"
+
+ # Health check
+ health=$(curl -s "http://$APP_HOST:$APP_PORT/actuator/health" | jq -r '.status' 2>/dev/null || echo "UNKNOWN")
+ if [ "$health" == "UP" ]; then
+ echo -e "상태: ${GREEN}$health${NC}"
+ else
+ echo -e "상태: ${RED}$health${NC}"
+ fi
+
+ # 실행 중인 Job
+ running_jobs=$(curl -s "http://$APP_HOST:$APP_PORT/admin/batch/job/running" | jq -r '.[]' 2>/dev/null || echo "N/A")
+ echo -e "실행 중인 Job: $running_jobs"
+
+ # 메트릭 요약
+ metrics=$(curl -s "http://$APP_HOST:$APP_PORT/admin/metrics/summary" 2>/dev/null)
+ if [ ! -z "$metrics" ]; then
+ echo -e "처리된 레코드: $(echo $metrics | jq -r '.processedRecords // "N/A"')"
+ echo -e "평균 처리 시간: $(echo $metrics | jq -r '.avgProcessingTime // "N/A"')ms"
+ fi
+ echo ""
+}
+
+# 시스템 리소스 모니터링
+check_system_resources() {
+ echo -e "${GREEN}[시스템 리소스]${NC}"
+
+ # CPU 사용률
+ cpu_usage=$(top -bn1 | grep "Cpu(s)" | awk '{print $2}' | cut -d'%' -f1)
+ echo -e "CPU 사용률: ${cpu_usage}%"
+
+ # 메모리 사용률
+ mem_info=$(free -g | grep "Mem:")
+ mem_total=$(echo $mem_info | awk '{print $2}')
+ mem_used=$(echo $mem_info | awk '{print $3}')
+ mem_percent=$(awk "BEGIN {printf \"%.1f\", ($mem_used/$mem_total)*100}")
+ echo -e "메모리: ${mem_used}GB / ${mem_total}GB (${mem_percent}%)"
+
+ # 디스크 사용률
+ disk_usage=$(df -h / | tail -1 | awk '{print $5}')
+ echo -e "디스크 사용률: $disk_usage"
+ echo ""
+}
+
+# 데이터베이스 연결 모니터링
+check_db_connections() {
+ echo -e "${GREEN}[데이터베이스 연결]${NC}"
+
+ # CollectDB 연결
+ collect_conn=$(PGPASSWORD=$DB_PASS psql -h $DB_HOST_COLLECT -U $DB_USER -d $DB_NAME -t -c "SELECT count(*) FROM pg_stat_activity WHERE datname='$DB_NAME';" 2>/dev/null || echo "N/A")
+ echo -e "CollectDB 연결: $collect_conn"
+
+ # QueryDB 연결
+ query_conn=$(PGPASSWORD=$DB_PASS psql -h $DB_HOST_QUERY -U $DB_USER -d $DB_NAME -t -c "SELECT count(*) FROM pg_stat_activity WHERE datname='$DB_NAME';" 2>/dev/null || echo "N/A")
+ echo -e "QueryDB 연결: $query_conn"
+ echo ""
+}
+
+# WebSocket 연결 모니터링
+check_websocket_status() {
+ echo -e "${GREEN}[WebSocket 상태]${NC}"
+
+ ws_status=$(curl -s "http://$APP_HOST:$APP_PORT/api/websocket/status" 2>/dev/null)
+ if [ ! -z "$ws_status" ]; then
+ echo -e "활성 세션: $(echo $ws_status | jq -r '.activeSessions // "N/A"')"
+ echo -e "활성 쿼리: $(echo $ws_status | jq -r '.activeQueries // "N/A"')"
+ echo -e "처리된 메시지: $(echo $ws_status | jq -r '.totalMessagesProcessed // "N/A"')"
+ else
+ echo -e "WebSocket 상태를 가져올 수 없습니다."
+ fi
+ echo ""
+}
+
+# 성능 최적화 상태
+check_performance_status() {
+ echo -e "${GREEN}[성능 최적화 상태]${NC}"
+
+ perf_status=$(curl -s "http://$APP_HOST:$APP_PORT/api/v1/performance/status" 2>/dev/null)
+ if [ ! -z "$perf_status" ]; then
+ echo -e "동적 청크 크기: $(echo $perf_status | jq -r '.currentChunkSize // "N/A"')"
+ echo -e "캐시 히트율: $(echo $perf_status | jq -r '.cacheHitRate // "N/A"')%"
+ echo -e "메모리 사용률: $(echo $perf_status | jq -r '.memoryUsage.usedPercentage // "N/A"')%"
+ else
+ echo -e "성능 상태를 가져올 수 없습니다."
+ fi
+ echo ""
+}
+
+# 실시간 로그 tail (별도 터미널에서 실행)
+tail_logs() {
+ echo -e "${GREEN}[최근 로그]${NC}"
+ echo "애플리케이션 로그는 별도 터미널에서 확인하세요:"
+ echo "tail -f /path/to/application.log"
+ echo ""
+}
+
+# 메인 루프
+main() {
+ while true; do
+ clear_screen
+ print_header
+ check_app_status
+ check_system_resources
+ check_db_connections
+ check_websocket_status
+ check_performance_status
+
+ echo -e "${YELLOW}5초 후 갱신... (Ctrl+C로 종료)${NC}"
+ sleep 5
+ done
+}
+
+# 트랩 설정
+trap 'echo -e "\n${RED}모니터링 종료${NC}"; exit 0' INT TERM
+
+# 실행
+main
diff --git a/scripts/quick-check-invalid.sql b/scripts/quick-check-invalid.sql
new file mode 100644
index 0000000..20f116c
--- /dev/null
+++ b/scripts/quick-check-invalid.sql
@@ -0,0 +1,50 @@
+-- 빠른 Invalid Geometry 확인
+
+-- 1. t_vessel_tracks_5min에 실제로 invalid geometry가 있는가?
+SELECT
+ '5min table - invalid count' as check_type,
+ COUNT(*) as invalid_count
+FROM signal.t_vessel_tracks_5min
+WHERE track_geom IS NOT NULL
+ AND NOT public.ST_IsValid(track_geom);
+
+-- 2. 어떤 invalid 이유인가?
+SELECT
+ '5min table - invalid reasons' as check_type,
+ public.ST_IsValidReason(track_geom) as reason,
+ COUNT(*) as count
+FROM signal.t_vessel_tracks_5min
+WHERE track_geom IS NOT NULL
+ AND NOT public.ST_IsValid(track_geom)
+GROUP BY public.ST_IsValidReason(track_geom);
+
+-- 3. 실제 invalid 샘플 확인
+SELECT
+ '5min table - invalid samples' as check_type,
+ sig_src_cd,
+ target_id,
+ time_bucket,
+ public.ST_NPoints(track_geom) as point_count,
+ public.ST_AsText(track_geom) as wkt,
+ public.ST_IsValidReason(track_geom) as reason
+FROM signal.t_vessel_tracks_5min
+WHERE track_geom IS NOT NULL
+ AND NOT public.ST_IsValid(track_geom)
+LIMIT 5;
+
+-- 4. 에러 발생한 선박 확인 (vessel 000001_###0000072)
+SELECT
+ 'Problem vessel check' as check_type,
+ sig_src_cd,
+ target_id,
+ time_bucket,
+ public.ST_NPoints(track_geom) as point_count,
+ public.ST_IsValid(track_geom) as is_valid,
+ public.ST_IsValidReason(track_geom) as reason,
+ public.ST_AsText(track_geom) as wkt
+FROM signal.t_vessel_tracks_5min
+WHERE sig_src_cd = '000001'
+ AND target_id LIKE '%0000072'
+ AND time_bucket >= CURRENT_TIMESTAMP - INTERVAL '1 day'
+ORDER BY time_bucket DESC
+LIMIT 10;
diff --git a/scripts/quick-test-real-data.sql b/scripts/quick-test-real-data.sql
new file mode 100644
index 0000000..63ef138
--- /dev/null
+++ b/scripts/quick-test-real-data.sql
@@ -0,0 +1,269 @@
+-- ========================================
+-- 실제 데이터로 즉시 테스트 (변수 없음)
+-- 최근 데이터 자동 선택
+-- ========================================
+
+-- 1. 최근 1시간 내 데이터가 있는 선박 자동 선택
+WITH recent_vessel AS (
+ SELECT
+ sig_src_cd,
+ target_id,
+ DATE_TRUNC('hour', MIN(time_bucket)) as hour_bucket
+ FROM signal.t_vessel_tracks_5min
+ WHERE time_bucket >= CURRENT_TIMESTAMP - INTERVAL '24 hours'
+ AND track_geom IS NOT NULL
+ AND public.ST_NPoints(track_geom) > 0
+ GROUP BY sig_src_cd, target_id, DATE_TRUNC('hour', time_bucket)
+ HAVING COUNT(*) >= 2
+ ORDER BY DATE_TRUNC('hour', MIN(time_bucket)) DESC
+ LIMIT 1
+)
+SELECT
+ '=== AUTO SELECTED VESSEL ===' as section,
+ sig_src_cd,
+ target_id,
+ hour_bucket,
+ hour_bucket + INTERVAL '1 hour' as hour_end
+FROM recent_vessel;
+
+-- 2. 선택된 선박의 5분 데이터 확인
+WITH recent_vessel AS (
+ SELECT
+ sig_src_cd,
+ target_id,
+ DATE_TRUNC('hour', MIN(time_bucket)) as hour_bucket
+ FROM signal.t_vessel_tracks_5min
+ WHERE time_bucket >= CURRENT_TIMESTAMP - INTERVAL '24 hours'
+ AND track_geom IS NOT NULL
+ AND public.ST_NPoints(track_geom) > 0
+ GROUP BY sig_src_cd, target_id, DATE_TRUNC('hour', time_bucket)
+ HAVING COUNT(*) >= 2
+ ORDER BY DATE_TRUNC('hour', MIN(time_bucket)) DESC
+ LIMIT 1
+)
+SELECT
+ '=== 5MIN DATA ===' as section,
+ t.sig_src_cd,
+ t.target_id,
+ t.time_bucket,
+ public.ST_NPoints(t.track_geom) as points,
+ public.ST_IsValid(t.track_geom) as is_valid,
+ LENGTH(public.ST_AsText(t.track_geom)) as wkt_length,
+ substring(public.ST_AsText(t.track_geom) from 'M \\((.+)\\)') as extracted_coords
+FROM signal.t_vessel_tracks_5min t
+INNER JOIN recent_vessel rv ON t.sig_src_cd = rv.sig_src_cd AND t.target_id = rv.target_id
+WHERE t.time_bucket >= rv.hour_bucket
+ AND t.time_bucket < rv.hour_bucket + INTERVAL '1 hour'
+ AND t.track_geom IS NOT NULL
+ AND public.ST_NPoints(t.track_geom) > 0
+ORDER BY t.time_bucket;
+
+-- 3. string_agg 테스트
+WITH recent_vessel AS (
+ SELECT
+ sig_src_cd,
+ target_id,
+ DATE_TRUNC('hour', MIN(time_bucket)) as hour_bucket
+ FROM signal.t_vessel_tracks_5min
+ WHERE time_bucket >= CURRENT_TIMESTAMP - INTERVAL '24 hours'
+ AND track_geom IS NOT NULL
+ AND public.ST_NPoints(track_geom) > 0
+ GROUP BY sig_src_cd, target_id, DATE_TRUNC('hour', time_bucket)
+ HAVING COUNT(*) >= 2
+ ORDER BY DATE_TRUNC('hour', MIN(time_bucket)) DESC
+ LIMIT 1
+)
+SELECT
+ '=== STRING_AGG RESULT ===' as section,
+ t.sig_src_cd,
+ t.target_id,
+ string_agg(
+ substring(public.ST_AsText(t.track_geom) from 'M \\((.+)\\)'),
+ ','
+ ORDER BY t.time_bucket
+ ) FILTER (WHERE t.track_geom IS NOT NULL) as all_coords,
+ COUNT(*) as track_count,
+ LENGTH(string_agg(
+ substring(public.ST_AsText(t.track_geom) from 'M \\((.+)\\)'),
+ ','
+ ORDER BY t.time_bucket
+ ) FILTER (WHERE t.track_geom IS NOT NULL)) as coords_total_length
+FROM signal.t_vessel_tracks_5min t
+INNER JOIN recent_vessel rv ON t.sig_src_cd = rv.sig_src_cd AND t.target_id = rv.target_id
+WHERE t.time_bucket >= rv.hour_bucket
+ AND t.time_bucket < rv.hour_bucket + INTERVAL '1 hour'
+ AND t.track_geom IS NOT NULL
+ AND public.ST_NPoints(t.track_geom) > 0
+GROUP BY t.sig_src_cd, t.target_id;
+
+-- 4. Geometry 생성 테스트
+WITH recent_vessel AS (
+ SELECT
+ sig_src_cd,
+ target_id,
+ DATE_TRUNC('hour', MIN(time_bucket)) as hour_bucket
+ FROM signal.t_vessel_tracks_5min
+ WHERE time_bucket >= CURRENT_TIMESTAMP - INTERVAL '24 hours'
+ AND track_geom IS NOT NULL
+ AND public.ST_NPoints(track_geom) > 0
+ GROUP BY sig_src_cd, target_id, DATE_TRUNC('hour', time_bucket)
+ HAVING COUNT(*) >= 2
+ ORDER BY DATE_TRUNC('hour', MIN(time_bucket)) DESC
+ LIMIT 1
+),
+merged_coords AS (
+ SELECT
+ t.sig_src_cd,
+ t.target_id,
+ string_agg(
+ substring(public.ST_AsText(t.track_geom) from 'M \\((.+)\\)'),
+ ','
+ ORDER BY t.time_bucket
+ ) FILTER (WHERE t.track_geom IS NOT NULL) as all_coords
+ FROM signal.t_vessel_tracks_5min t
+ INNER JOIN recent_vessel rv ON t.sig_src_cd = rv.sig_src_cd AND t.target_id = rv.target_id
+ WHERE t.time_bucket >= rv.hour_bucket
+ AND t.time_bucket < rv.hour_bucket + INTERVAL '1 hour'
+ AND t.track_geom IS NOT NULL
+ AND public.ST_NPoints(t.track_geom) > 0
+ GROUP BY t.sig_src_cd, t.target_id
+)
+SELECT
+ '=== GEOMETRY CREATION TEST ===' as section,
+ sig_src_cd,
+ target_id,
+ all_coords IS NOT NULL as has_coords,
+ LENGTH(all_coords) as coords_length,
+ public.ST_GeomFromText('LINESTRING M(' || all_coords || ')', 4326) as merged_geom,
+ public.ST_NPoints(public.ST_GeomFromText('LINESTRING M(' || all_coords || ')', 4326)) as merged_points,
+ public.ST_IsValid(public.ST_GeomFromText('LINESTRING M(' || all_coords || ')', 4326)) as is_valid
+FROM merged_coords;
+
+-- 5. 전체 집계 쿼리 실행 (실제 HourlyTrackProcessor와 동일)
+WITH recent_vessel AS (
+ SELECT
+ sig_src_cd,
+ target_id,
+ DATE_TRUNC('hour', MIN(time_bucket)) as hour_bucket
+ FROM signal.t_vessel_tracks_5min
+ WHERE time_bucket >= CURRENT_TIMESTAMP - INTERVAL '24 hours'
+ AND track_geom IS NOT NULL
+ AND public.ST_NPoints(track_geom) > 0
+ GROUP BY sig_src_cd, target_id, DATE_TRUNC('hour', time_bucket)
+ HAVING COUNT(*) >= 2
+ ORDER BY DATE_TRUNC('hour', MIN(time_bucket)) DESC
+ LIMIT 1
+),
+ordered_tracks AS (
+ SELECT t.*
+ FROM signal.t_vessel_tracks_5min t
+ INNER JOIN recent_vessel rv ON t.sig_src_cd = rv.sig_src_cd AND t.target_id = rv.target_id
+ WHERE t.time_bucket >= rv.hour_bucket
+ AND t.time_bucket < rv.hour_bucket + INTERVAL '1 hour'
+ AND t.track_geom IS NOT NULL
+ AND public.ST_NPoints(t.track_geom) > 0
+ ORDER BY t.time_bucket
+),
+merged_coords AS (
+ SELECT
+ sig_src_cd,
+ target_id,
+ string_agg(
+ substring(public.ST_AsText(track_geom) from 'M \\((.+)\\)'),
+ ','
+ ORDER BY time_bucket
+ ) FILTER (WHERE track_geom IS NOT NULL) as all_coords
+ FROM ordered_tracks
+ GROUP BY sig_src_cd, target_id
+),
+merged_tracks AS (
+ SELECT
+ mc.sig_src_cd,
+ mc.target_id,
+ rv.hour_bucket as time_bucket,
+ public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom,
+ (SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
+ (SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
+ (SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
+ (SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
+ (SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
+ (SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
+ FROM merged_coords mc
+ CROSS JOIN recent_vessel rv
+),
+calculated_tracks AS (
+ SELECT
+ *,
+ public.ST_Length(merged_geom::geography) / 1852.0 as total_distance,
+ CASE
+ WHEN public.ST_NPoints(merged_geom) > 0 THEN
+ public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) -
+ public.ST_M(public.ST_PointN(merged_geom, 1))
+ ELSE
+ EXTRACT(EPOCH FROM
+ CAST(end_pos->>'time' AS timestamp) - CAST(start_pos->>'time' AS timestamp)
+ )
+ END as time_diff_seconds
+ FROM merged_tracks
+)
+SELECT
+ '=== FULL AGGREGATION RESULT ===' as section,
+ sig_src_cd,
+ target_id,
+ time_bucket,
+ public.ST_NPoints(merged_geom) as merged_points,
+ public.ST_IsValid(merged_geom) as is_valid,
+ total_distance,
+ CASE
+ WHEN time_diff_seconds > 0 THEN
+ CAST(LEAST((total_distance / (time_diff_seconds / 3600.0)), 9999.99) AS numeric(6,2))
+ ELSE 0
+ END as avg_speed,
+ max_speed,
+ total_points,
+ start_time,
+ end_time,
+ time_diff_seconds
+FROM calculated_tracks;
+
+-- 6. 에러 발생 가능성 체크
+WITH recent_vessel AS (
+ SELECT
+ sig_src_cd,
+ target_id,
+ DATE_TRUNC('hour', MIN(time_bucket)) as hour_bucket
+ FROM signal.t_vessel_tracks_5min
+ WHERE time_bucket >= CURRENT_TIMESTAMP - INTERVAL '24 hours'
+ AND track_geom IS NOT NULL
+ AND public.ST_NPoints(track_geom) > 0
+ GROUP BY sig_src_cd, target_id, DATE_TRUNC('hour', time_bucket)
+ HAVING COUNT(*) >= 2
+ ORDER BY DATE_TRUNC('hour', MIN(time_bucket)) DESC
+ LIMIT 1
+)
+SELECT
+ '=== ERROR CHECK ===' as section,
+ COUNT(*) as total_tracks,
+ COUNT(CASE WHEN track_geom IS NULL THEN 1 END) as null_geom_count,
+ COUNT(CASE WHEN NOT public.ST_IsValid(track_geom) THEN 1 END) as invalid_geom_count,
+ COUNT(CASE WHEN public.ST_NPoints(track_geom) = 0 THEN 1 END) as zero_points_count,
+ COUNT(CASE WHEN public.ST_NPoints(track_geom) = 1 THEN 1 END) as single_point_count,
+ COUNT(CASE WHEN
+ substring(public.ST_AsText(track_geom) from 'M \\((.+)\\)') IS NULL
+ THEN 1 END) as regex_fail_count
+FROM signal.t_vessel_tracks_5min t
+INNER JOIN recent_vessel rv ON t.sig_src_cd = rv.sig_src_cd AND t.target_id = rv.target_id
+WHERE t.time_bucket >= rv.hour_bucket
+ AND t.time_bucket < rv.hour_bucket + INTERVAL '1 hour';
+
+-- ========================================
+-- 사용 방법:
+-- 1. 그냥 전체 스크립트 실행
+-- 2. 자동으로 최근 선박 선택됨
+-- 3. 각 섹션별 결과 확인
+--
+-- 에러 발생시 확인 사항:
+-- - "ERROR CHECK" 섹션에서 이상값 확인
+-- - "STRING_AGG RESULT"에서 all_coords 확인
+-- - "GEOMETRY CREATION TEST"에서 is_valid 확인
+-- ========================================
diff --git a/scripts/run-load-test.sh b/scripts/run-load-test.sh
new file mode 100644
index 0000000..b9e0246
--- /dev/null
+++ b/scripts/run-load-test.sh
@@ -0,0 +1,288 @@
+#!/bin/bash
+
+# 선박 궤적 집계 시스템 부하 테스트 실행 스크립트
+# 실행 전 JMeter가 설치되어 있어야 합니다.
+
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
+JMETER_HOME="${JMETER_HOME:-/opt/jmeter}"
+RESULTS_DIR="$PROJECT_ROOT/load-test-results"
+TIMESTAMP=$(date +%Y%m%d_%H%M%S)
+
+# 색상 정의
+RED='\033[0;31m'
+GREEN='\033[0;32m'
+YELLOW='\033[1;33m'
+NC='\033[0m' # No Color
+
+# 함수: 메시지 출력
+log_info() {
+ echo -e "${GREEN}[INFO]${NC} $1"
+}
+
+log_warn() {
+ echo -e "${YELLOW}[WARN]${NC} $1"
+}
+
+log_error() {
+ echo -e "${RED}[ERROR]${NC} $1"
+}
+
+# JMeter 설치 확인
+check_jmeter() {
+ if [ ! -d "$JMETER_HOME" ]; then
+ log_error "JMeter가 설치되어 있지 않습니다. JMETER_HOME을 설정하세요."
+ exit 1
+ fi
+
+ if [ ! -f "$JMETER_HOME/bin/jmeter" ]; then
+ log_error "JMeter 실행 파일을 찾을 수 없습니다: $JMETER_HOME/bin/jmeter"
+ exit 1
+ fi
+
+ log_info "JMeter 경로: $JMETER_HOME"
+}
+
+# 결과 디렉토리 생성
+create_results_dir() {
+ mkdir -p "$RESULTS_DIR/$TIMESTAMP"
+ log_info "결과 디렉토리 생성: $RESULTS_DIR/$TIMESTAMP"
+}
+
+# 시스템 상태 모니터링 시작
+start_monitoring() {
+ log_info "시스템 모니터링 시작..."
+
+ # CPU, 메모리, 네트워크 사용률 모니터링
+ nohup vmstat 5 > "$RESULTS_DIR/$TIMESTAMP/vmstat.log" 2>&1 &
+ VMSTAT_PID=$!
+
+ nohup iostat -x 5 > "$RESULTS_DIR/$TIMESTAMP/iostat.log" 2>&1 &
+ IOSTAT_PID=$!
+
+ # 데이터베이스 연결 모니터링
+ nohup watch -n 5 "psql -h 10.26.252.48 -U mdauser -d mdadb -c 'SELECT count(*) FROM pg_stat_activity;'" > "$RESULTS_DIR/$TIMESTAMP/db_connections.log" 2>&1 &
+ DB_MON_PID=$!
+
+ echo "$VMSTAT_PID $IOSTAT_PID $DB_MON_PID" > "$RESULTS_DIR/$TIMESTAMP/monitoring.pids"
+}
+
+# 시스템 모니터링 중지
+stop_monitoring() {
+ log_info "시스템 모니터링 중지..."
+
+ if [ -f "$RESULTS_DIR/$TIMESTAMP/monitoring.pids" ]; then
+ while read pid; do
+ kill $pid 2>/dev/null
+ done < "$RESULTS_DIR/$TIMESTAMP/monitoring.pids"
+ rm "$RESULTS_DIR/$TIMESTAMP/monitoring.pids"
+ fi
+}
+
+# JMeter 테스트 실행
+run_jmeter_test() {
+ local test_file=$1
+ local test_name=$(basename "$test_file" .jmx)
+
+ log_info "JMeter 테스트 실행: $test_name"
+
+ # JMeter 실행
+ "$JMETER_HOME/bin/jmeter" \
+ -n \
+ -t "$test_file" \
+ -l "$RESULTS_DIR/$TIMESTAMP/${test_name}-results.jtl" \
+ -e \
+ -o "$RESULTS_DIR/$TIMESTAMP/${test_name}-report" \
+ -Jjmeter.save.saveservice.output_format=csv \
+ -Jjmeter.save.saveservice.assertion_results_failure_message=true \
+ -Jjmeter.save.saveservice.data_type=true \
+ -Jjmeter.save.saveservice.label=true \
+ -Jjmeter.save.saveservice.response_code=true \
+ -Jjmeter.save.saveservice.response_data.on_error=true \
+ -Jjmeter.save.saveservice.response_message=true \
+ -Jjmeter.save.saveservice.successful=true \
+ -Jjmeter.save.saveservice.thread_name=true \
+ -Jjmeter.save.saveservice.time=true \
+ -Jjmeter.save.saveservice.connect_time=true \
+ -Jjmeter.save.saveservice.latency=true \
+ -Jjmeter.save.saveservice.bytes=true \
+ -Jjmeter.save.saveservice.sent_bytes=true \
+ -Jjmeter.save.saveservice.url=true
+
+ if [ $? -eq 0 ]; then
+ log_info "테스트 완료: $test_name"
+ log_info "결과 파일: $RESULTS_DIR/$TIMESTAMP/${test_name}-results.jtl"
+ log_info "HTML 리포트: $RESULTS_DIR/$TIMESTAMP/${test_name}-report/index.html"
+ else
+ log_error "테스트 실패: $test_name"
+ return 1
+ fi
+}
+
+# WebSocket 부하 테스트
+run_websocket_test() {
+ log_info "WebSocket 부하 테스트 준비..."
+
+ # Python 스크립트로 WebSocket 테스트 실행
+ cat > "$RESULTS_DIR/$TIMESTAMP/websocket_load_test.py" << 'EOF'
+import asyncio
+import websockets
+import json
+import time
+from datetime import datetime, timedelta
+import statistics
+
+class WebSocketLoadTester:
+ def __init__(self, base_url, num_clients, queries_per_client):
+ self.base_url = base_url
+ self.num_clients = num_clients
+ self.queries_per_client = queries_per_client
+ self.metrics = {
+ 'total_queries': 0,
+ 'successful_queries': 0,
+ 'failed_queries': 0,
+ 'latencies': [],
+ 'throughput': []
+ }
+
+ async def client_session(self, client_id):
+ async with websockets.connect(f"{self.base_url}/ws-tracks") as websocket:
+ for query_id in range(self.queries_per_client):
+ try:
+ # 쿼리 요청 생성
+ query = {
+ "startTime": (datetime.now() - timedelta(days=7)).isoformat(),
+ "endTime": datetime.now().isoformat(),
+ "viewport": {
+ "minLon": 124.0,
+ "maxLon": 132.0,
+ "minLat": 33.0,
+ "maxLat": 38.0
+ },
+ "chunkSize": 1000
+ }
+
+ start_time = time.time()
+ await websocket.send(json.dumps(query))
+
+ # 응답 수신
+ chunks_received = 0
+ while True:
+ response = await websocket.recv()
+ data = json.loads(response)
+ chunks_received += 1
+
+ if data.get('isLastChunk', False):
+ break
+
+ end_time = time.time()
+ latency = (end_time - start_time) * 1000 # ms
+
+ self.metrics['latencies'].append(latency)
+ self.metrics['successful_queries'] += 1
+
+ print(f"Client {client_id} - Query {query_id}: {latency:.2f}ms, {chunks_received} chunks")
+
+ except Exception as e:
+ print(f"Client {client_id} - Query {query_id} failed: {str(e)}")
+ self.metrics['failed_queries'] += 1
+
+ self.metrics['total_queries'] += 1
+ await asyncio.sleep(1) # 쿼리 간 딜레이
+
+ async def run_test(self):
+ print(f"Starting WebSocket load test with {self.num_clients} clients...")
+ start_time = time.time()
+
+ # 모든 클라이언트 동시 실행
+ tasks = []
+ for i in range(self.num_clients):
+ task = asyncio.create_task(self.client_session(i))
+ tasks.append(task)
+
+ await asyncio.gather(*tasks)
+
+ end_time = time.time()
+ total_duration = end_time - start_time
+
+ # 결과 분석
+ print("\n=== 부하 테스트 결과 ===")
+ print(f"총 실행 시간: {total_duration:.2f}초")
+ print(f"총 쿼리 수: {self.metrics['total_queries']}")
+ print(f"성공: {self.metrics['successful_queries']}")
+ print(f"실패: {self.metrics['failed_queries']}")
+
+ if self.metrics['latencies']:
+ print(f"평균 레이턴시: {statistics.mean(self.metrics['latencies']):.2f}ms")
+ print(f"최소 레이턴시: {min(self.metrics['latencies']):.2f}ms")
+ print(f"최대 레이턴시: {max(self.metrics['latencies']):.2f}ms")
+ print(f"중앙값 레이턴시: {statistics.median(self.metrics['latencies']):.2f}ms")
+
+ print(f"처리량: {self.metrics['total_queries'] / total_duration:.2f} queries/sec")
+
+if __name__ == "__main__":
+ tester = WebSocketLoadTester(
+ base_url="ws://10.26.252.48:8090",
+ num_clients=10,
+ queries_per_client=5
+ )
+ asyncio.run(tester.run_test())
+EOF
+
+ # Python WebSocket 테스트 실행
+ if command -v python3 &> /dev/null; then
+ python3 "$RESULTS_DIR/$TIMESTAMP/websocket_load_test.py" > "$RESULTS_DIR/$TIMESTAMP/websocket_test_results.log" 2>&1
+ else
+ log_warn "Python3가 설치되어 있지 않아 WebSocket 테스트를 건너뜁니다."
+ fi
+}
+
+# 메인 실행 함수
+main() {
+ log_info "선박 궤적 집계 시스템 부하 테스트 시작"
+ log_info "타임스탬프: $TIMESTAMP"
+
+ # JMeter 확인
+ check_jmeter
+
+ # 결과 디렉토리 생성
+ create_results_dir
+
+ # 시스템 모니터링 시작
+ start_monitoring
+
+ # 애플리케이션 상태 확인
+ log_info "애플리케이션 상태 확인..."
+ curl -s "http://10.26.252.48:8090/actuator/health" > "$RESULTS_DIR/$TIMESTAMP/app_health_before.json"
+
+ # JMeter 테스트 실행
+ if [ -f "$PROJECT_ROOT/src/main/resources/jmeter/comprehensive-load-test.jmx" ]; then
+ run_jmeter_test "$PROJECT_ROOT/src/main/resources/jmeter/comprehensive-load-test.jmx"
+ fi
+
+ # WebSocket 테스트 실행
+ run_websocket_test
+
+ # 10분간 부하 테스트 실행
+ log_info "부하 테스트 진행 중... (10분)"
+ sleep 600
+
+ # 시스템 모니터링 중지
+ stop_monitoring
+
+ # 최종 애플리케이션 상태 확인
+ curl -s "http://10.26.252.48:8090/actuator/health" > "$RESULTS_DIR/$TIMESTAMP/app_health_after.json"
+
+ # 결과 요약
+ log_info "부하 테스트 완료!"
+ log_info "결과 디렉토리: $RESULTS_DIR/$TIMESTAMP"
+
+ # 간단한 결과 분석
+ if [ -f "$RESULTS_DIR/$TIMESTAMP/comprehensive-load-test-results.jtl" ]; then
+ log_info "JMeter 결과 요약:"
+ awk -F',' 'NR>1 {sum+=$2; count++} END {print "평균 응답 시간: " sum/count " ms"}' "$RESULTS_DIR/$TIMESTAMP/comprehensive-load-test-results.jtl"
+ fi
+}
+
+# 스크립트 실행
+main "$@"
diff --git a/scripts/run-on-query-server-dev.sh b/scripts/run-on-query-server-dev.sh
new file mode 100644
index 0000000..1221441
--- /dev/null
+++ b/scripts/run-on-query-server-dev.sh
@@ -0,0 +1,190 @@
+#!/bin/bash
+
+# Query DB 서버에서 최적화된 실행 스크립트
+# Rocky Linux 환경에 맞춰 조정됨
+# Java 17 경로 명시적 지정
+
+# 애플리케이션 경로
+APP_HOME="/devdata/apps/bridge-db-monitoring"
+JAR_FILE="$APP_HOME/vessel-batch-aggregation.jar"
+
+# Java 17 경로
+JAVA_HOME="/devdata/apps/jdk-17.0.8"
+JAVA_BIN="$JAVA_HOME/bin/java"
+
+# 로그 디렉토리
+LOG_DIR="$APP_HOME/logs"
+mkdir -p $LOG_DIR
+
+echo "================================================"
+echo "Vessel Batch Aggregation - Query Server Edition"
+echo "Start Time: $(date)"
+echo "================================================"
+
+# 경로 확인
+echo "Environment Check:"
+echo "- App Home: $APP_HOME"
+echo "- JAR File: $JAR_FILE"
+echo "- Java Path: $JAVA_BIN"
+echo "- Java Version: $($JAVA_BIN -version 2>&1 | head -1)"
+
+# JAR 파일 존재 확인
+if [ ! -f "$JAR_FILE" ]; then
+ echo "ERROR: JAR file not found at $JAR_FILE"
+ exit 1
+fi
+
+# Java 실행 파일 확인
+if [ ! -x "$JAVA_BIN" ]; then
+ echo "ERROR: Java not found or not executable at $JAVA_BIN"
+ exit 1
+fi
+
+# 서버 정보 확인
+echo ""
+echo "Server Info:"
+echo "- Hostname: $(hostname)"
+echo "- CPU Cores: $(nproc)"
+echo "- Total Memory: $(free -h | grep Mem | awk '{print $2}')"
+echo "- PostgreSQL Version: $(psql --version 2>/dev/null | head -1 || echo 'PostgreSQL client not in PATH')"
+
+# 환경 변수 설정 (localhost 최적화)
+export SPRING_PROFILES_ACTIVE=prod
+
+# Query DB와 Batch Meta DB를 localhost로 오버라이드
+export SPRING_DATASOURCE_QUERY_JDBC_URL="jdbc:postgresql://10.29.17.90:5432/mpcdb2?options=-csearch_path=signal,public&assumeMinServerVersion=12&reWriteBatchedInserts=true"
+export SPRING_DATASOURCE_BATCH_JDBC_URL="jdbc:postgresql://localhost:5432/mdadb?currentSchema=public&assumeMinServerVersion=12&reWriteBatchedInserts=true"
+
+# 서버 CPU 코어 수에 따른 병렬 처리 조정
+CPU_CORES=$(nproc)
+export VESSEL_BATCH_PARTITION_SIZE=$((CPU_CORES * 2))
+export VESSEL_BATCH_BULK_INSERT_PARALLEL_THREADS=$((CPU_CORES / 2))
+
+echo ""
+echo "Optimized Settings:"
+echo "- Partition Size: $VESSEL_BATCH_PARTITION_SIZE"
+echo "- Parallel Threads: $VESSEL_BATCH_BULK_INSERT_PARALLEL_THREADS"
+echo "- Query DB: localhost (optimized)"
+echo "- Batch Meta DB: localhost (optimized)"
+
+# JVM 옵션 (서버 메모리에 맞게 조정)
+TOTAL_MEM=$(free -g | grep Mem | awk '{print $2}')
+JVM_HEAP=$((TOTAL_MEM / 4)) # 전체 메모리의 25% 사용
+
+# 최소 16GB, 최대 64GB로 제한
+if [ $JVM_HEAP -lt 16 ]; then
+ JVM_HEAP=16
+elif [ $JVM_HEAP -gt 64 ]; then
+ JVM_HEAP=64
+fi
+
+JAVA_OPTS="-Xms${JVM_HEAP}g -Xmx${JVM_HEAP}g \
+ -XX:+UseG1GC \
+ -XX:G1HeapRegionSize=32m \
+ -XX:MaxGCPauseMillis=200 \
+ -XX:InitiatingHeapOccupancyPercent=35 \
+ -XX:G1ReservePercent=15 \
+ -XX:+UseStringDeduplication \
+ -XX:+ParallelRefProcEnabled \
+ -XX:+ExplicitGCInvokesConcurrent \
+ -XX:ParallelGCThreads=$((CPU_CORES / 2)) \
+ -XX:ConcGCThreads=$((CPU_CORES / 4)) \
+ -XX:MaxMetaspaceSize=512m \
+ -XX:+HeapDumpOnOutOfMemoryError \
+ -XX:HeapDumpPath=$LOG_DIR/heapdump.hprof \
+ -Xlog:gc*:file=$LOG_DIR/gc.log:time,uptime,level,tags:filecount=5,filesize=100M \
+ -Dfile.encoding=UTF-8 \
+ -Duser.timezone=Asia/Seoul \
+ -Djava.security.egd=file:/dev/./urandom \
+ -Dspring.profiles.active=prod"
+
+echo "- JVM Heap Size: ${JVM_HEAP}GB"
+
+# 기존 프로세스 확인 및 종료
+echo ""
+echo "Checking for existing process..."
+PID=$(pgrep -f "$JAR_FILE")
+if [ ! -z "$PID" ]; then
+ echo "Stopping existing process (PID: $PID)..."
+ kill -15 $PID
+
+ # 프로세스 종료 대기 (최대 30초)
+ for i in {1..30}; do
+ if ! kill -0 $PID 2>/dev/null; then
+ echo "Process stopped successfully."
+ break
+ fi
+ if [ $i -eq 30 ]; then
+ echo "Force killing process..."
+ kill -9 $PID
+ fi
+ sleep 1
+ done
+fi
+
+# 작업 디렉토리로 이동
+cd $APP_HOME
+
+# 애플리케이션 실행 (nice로 우선순위 조정)
+echo ""
+echo "Starting application with reduced priority..."
+echo "Command: nice -n 10 $JAVA_BIN $JAVA_OPTS -jar $JAR_FILE"
+echo ""
+
+# nohup으로 백그라운드 실행
+nohup nice -n 10 $JAVA_BIN $JAVA_OPTS -jar $JAR_FILE \
+ > $LOG_DIR/app.log 2>&1 &
+
+NEW_PID=$!
+echo "Application started with PID: $NEW_PID"
+
+# PID 파일 생성
+echo $NEW_PID > $APP_HOME/vessel-batch.pid
+
+# 시작 확인 (30초 대기)
+echo "Waiting for application startup..."
+STARTUP_SUCCESS=false
+for i in {1..30}; do
+ if grep -q "Started SignalBatchApplication" $LOG_DIR/app.log 2>/dev/null; then
+ echo "✅ Application started successfully!"
+ STARTUP_SUCCESS=true
+ break
+ fi
+ echo -n "."
+ sleep 1
+done
+
+if [ "$STARTUP_SUCCESS" = false ]; then
+ echo ""
+ echo "⚠️ Application startup timeout. Check logs for errors."
+ echo "Log file: $LOG_DIR/app.log"
+ tail -20 $LOG_DIR/app.log
+fi
+
+echo ""
+echo "================================================"
+echo "Deployment Complete!"
+echo "- PID: $NEW_PID"
+echo "- PID File: $APP_HOME/vessel-batch.pid"
+echo "- Log: $LOG_DIR/app.log"
+echo "- Monitor: tail -f $LOG_DIR/app.log"
+echo "================================================"
+
+# 초기 상태 확인
+sleep 5
+echo ""
+echo "Initial Status Check:"
+curl -s http://localhost:8090/actuator/health 2>/dev/null | python -m json.tool || echo "Health endpoint not yet available"
+
+# 리소스 사용량 표시
+echo ""
+echo "Resource Usage:"
+ps aux | grep $NEW_PID | grep -v grep
+
+# 빠른 명령어 안내
+echo ""
+echo "Useful Commands:"
+echo "- Stop: kill -15 \$(cat $APP_HOME/vessel-batch.pid)"
+echo "- Logs: tail -f $LOG_DIR/app.log"
+echo "- Status: curl http://localhost:8090/actuator/health"
+echo "- Monitor: $APP_HOME/monitor-query-server.sh"
diff --git a/scripts/run-query-only-server.sh b/scripts/run-query-only-server.sh
new file mode 100644
index 0000000..0fd0c6d
--- /dev/null
+++ b/scripts/run-query-only-server.sh
@@ -0,0 +1,184 @@
+#!/bin/bash
+
+# Query 전용 서버 실행 스크립트 (10.29.17.90)
+# 배치 Job 없이 조회 API만 제공
+# Java 17 경로 명시적 지정
+
+# 애플리케이션 경로
+APP_HOME="/devdata/apps/bridge-db-monitoring"
+JAR_FILE="$APP_HOME/vessel-batch-aggregation.jar"
+
+# Java 17 경로
+JAVA_HOME="/devdata/apps/jdk-17.0.8"
+JAVA_BIN="$JAVA_HOME/bin/java"
+
+# 로그 디렉토리
+LOG_DIR="$APP_HOME/logs"
+mkdir -p $LOG_DIR
+
+echo "================================================"
+echo "Vessel Query API Server - Query Only Mode"
+echo "Start Time: $(date)"
+echo "================================================"
+
+# 경로 확인
+echo "Environment Check:"
+echo "- App Home: $APP_HOME"
+echo "- JAR File: $JAR_FILE"
+echo "- Java Path: $JAVA_BIN"
+echo "- Java Version: $($JAVA_BIN -version 2>&1 | head -1)"
+
+# JAR 파일 존재 확인
+if [ ! -f "$JAR_FILE" ]; then
+ echo "ERROR: JAR file not found at $JAR_FILE"
+ exit 1
+fi
+
+# Java 실행 파일 확인
+if [ ! -x "$JAVA_BIN" ]; then
+ echo "ERROR: Java not found or not executable at $JAVA_BIN"
+ exit 1
+fi
+
+# 서버 정보 확인
+echo ""
+echo "Server Info:"
+echo "- Hostname: $(hostname)"
+echo "- CPU Cores: $(nproc)"
+echo "- Total Memory: $(free -h | grep Mem | awk '{print $2}')"
+echo "- PostgreSQL Version: $(psql --version 2>/dev/null | head -1 || echo 'PostgreSQL client not in PATH')"
+
+# 환경 변수 설정 (query 프로파일 - 배치 비활성화!)
+export SPRING_PROFILES_ACTIVE=query
+
+echo ""
+echo "Profile Settings:"
+echo "- Active Profile: QUERY (Batch Jobs Disabled)"
+echo "- Query DB: 10.29.17.90:5432/mpcdb2 (Local DB)"
+echo "- Batch Jobs: DISABLED"
+echo "- Scheduler: DISABLED"
+
+# JVM 옵션 (서버 메모리에 맞게 조정)
+TOTAL_MEM=$(free -g | grep Mem | awk '{print $2}')
+JVM_HEAP=$((TOTAL_MEM / 8)) # 전체 메모리의 12.5% 사용 (배치 없으므로 적게)
+
+# 최소 4GB, 최대 16GB로 제한
+if [ $JVM_HEAP -lt 4 ]; then
+ JVM_HEAP=4
+elif [ $JVM_HEAP -gt 16 ]; then
+ JVM_HEAP=16
+fi
+
+CPU_CORES=$(nproc)
+
+JAVA_OPTS="-Xms${JVM_HEAP}g -Xmx${JVM_HEAP}g \
+ -XX:+UseG1GC \
+ -XX:G1HeapRegionSize=32m \
+ -XX:MaxGCPauseMillis=200 \
+ -XX:InitiatingHeapOccupancyPercent=35 \
+ -XX:G1ReservePercent=15 \
+ -XX:+UseStringDeduplication \
+ -XX:+ParallelRefProcEnabled \
+ -XX:+ExplicitGCInvokesConcurrent \
+ -XX:ParallelGCThreads=$((CPU_CORES / 2)) \
+ -XX:ConcGCThreads=$((CPU_CORES / 4)) \
+ -XX:MaxMetaspaceSize=512m \
+ -XX:+HeapDumpOnOutOfMemoryError \
+ -XX:HeapDumpPath=$LOG_DIR/heapdump.hprof \
+ -Xlog:gc*:file=$LOG_DIR/gc.log:time,uptime,level,tags:filecount=5,filesize=100M \
+ -Dfile.encoding=UTF-8 \
+ -Duser.timezone=Asia/Seoul \
+ -Djava.security.egd=file:/dev/./urandom \
+ -Dspring.profiles.active=query"
+
+echo "- JVM Heap Size: ${JVM_HEAP}GB"
+
+# 기존 프로세스 확인 및 종료
+echo ""
+echo "Checking for existing process..."
+PID=$(pgrep -f "$JAR_FILE")
+if [ ! -z "$PID" ]; then
+ echo "Stopping existing process (PID: $PID)..."
+ kill -15 $PID
+
+ # 프로세스 종료 대기 (최대 30초)
+ for i in {1..30}; do
+ if ! kill -0 $PID 2>/dev/null; then
+ echo "Process stopped successfully."
+ break
+ fi
+ if [ $i -eq 30 ]; then
+ echo "Force killing process..."
+ kill -9 $PID
+ fi
+ sleep 1
+ done
+fi
+
+# 작업 디렉토리로 이동
+cd $APP_HOME
+
+# 애플리케이션 실행
+echo ""
+echo "Starting application in QUERY-ONLY mode..."
+echo "Command: $JAVA_BIN $JAVA_OPTS -jar $JAR_FILE"
+echo ""
+
+# nohup으로 백그라운드 실행
+nohup $JAVA_BIN $JAVA_OPTS -jar $JAR_FILE \
+ > $LOG_DIR/app.log 2>&1 &
+
+NEW_PID=$!
+echo "Application started with PID: $NEW_PID"
+
+# PID 파일 생성
+echo $NEW_PID > $APP_HOME/vessel-query.pid
+
+# 시작 확인 (30초 대기)
+echo "Waiting for application startup..."
+STARTUP_SUCCESS=false
+for i in {1..30}; do
+ if grep -q "Started SignalBatchApplication" $LOG_DIR/app.log 2>/dev/null; then
+ echo "✅ Application started successfully!"
+ STARTUP_SUCCESS=true
+ break
+ fi
+ echo -n "."
+ sleep 1
+done
+
+if [ "$STARTUP_SUCCESS" = false ]; then
+ echo ""
+ echo "⚠️ Application startup timeout. Check logs for errors."
+ echo "Log file: $LOG_DIR/app.log"
+ tail -20 $LOG_DIR/app.log
+fi
+
+echo ""
+echo "================================================"
+echo "Deployment Complete!"
+echo "- Mode: QUERY ONLY (No Batch Jobs)"
+echo "- PID: $NEW_PID"
+echo "- PID File: $APP_HOME/vessel-query.pid"
+echo "- Log: $LOG_DIR/app.log"
+echo "- Monitor: tail -f $LOG_DIR/app.log"
+echo "================================================"
+
+# 초기 상태 확인
+sleep 5
+echo ""
+echo "Initial Status Check:"
+curl -s http://localhost:8090/actuator/health 2>/dev/null | python -m json.tool || echo "Health endpoint not yet available"
+
+# 리소스 사용량 표시
+echo ""
+echo "Resource Usage:"
+ps aux | grep $NEW_PID | grep -v grep
+
+# 빠른 명령어 안내
+echo ""
+echo "Useful Commands:"
+echo "- Stop: kill -15 \$(cat $APP_HOME/vessel-query.pid)"
+echo "- Logs: tail -f $LOG_DIR/app.log"
+echo "- Status: curl http://localhost:8090/actuator/health"
+echo "- API Test: curl http://localhost:8090/api/gis/areas"
diff --git a/scripts/server-logs.bat b/scripts/server-logs.bat
new file mode 100644
index 0000000..9dd28fd
--- /dev/null
+++ b/scripts/server-logs.bat
@@ -0,0 +1,40 @@
+@echo off
+chcp 65001 >nul
+REM ===============================================
+REM Signal Batch Server Log Viewer
+REM ===============================================
+
+setlocal
+
+set SERVER_IP=10.26.252.48
+set SERVER_USER=root
+set SERVER_PATH=/devdata/apps/bridge-db-monitoring
+
+echo ===============================================
+echo Signal Batch Server Log Viewer
+echo ===============================================
+echo Server: %SERVER_IP%
+echo Time: %date% %time%
+echo.
+
+if "%1"=="tail" (
+ echo Starting real-time log monitoring... (Ctrl+C to exit)
+ ssh %SERVER_USER%@%SERVER_IP% "cd %SERVER_PATH% && ./vessel-batch-control.sh logs"
+) else if "%1"=="errors" (
+ echo Retrieving recent error logs...
+ ssh %SERVER_USER%@%SERVER_IP% "cd %SERVER_PATH% && ./vessel-batch-control.sh errors"
+) else if "%1"=="stats" (
+ echo Retrieving performance statistics...
+ ssh %SERVER_USER%@%SERVER_IP% "cd %SERVER_PATH% && ./vessel-batch-control.sh stats"
+) else (
+ echo Usage:
+ echo server-logs.bat - Show recent 50 lines
+ echo server-logs.bat tail - Real-time log monitoring
+ echo server-logs.bat errors - Show error logs only
+ echo server-logs.bat stats - Show performance statistics
+ echo.
+ echo Recent 50 lines of log:
+ ssh %SERVER_USER%@%SERVER_IP% "tail -50 %SERVER_PATH%/logs/app.log 2>/dev/null || echo 'Log file not available'"
+)
+
+endlocal
\ No newline at end of file
diff --git a/scripts/server-status.bat b/scripts/server-status.bat
new file mode 100644
index 0000000..d72031c
--- /dev/null
+++ b/scripts/server-status.bat
@@ -0,0 +1,64 @@
+@echo off
+chcp 65001 >nul
+REM ===============================================
+REM Signal Batch Server Status Checker
+REM ===============================================
+
+setlocal enabledelayedexpansion
+
+REM Configuration
+set "SERVER_IP=10.26.252.48"
+set "SERVER_USER=root"
+set "SERVER_PATH=/devdata/apps/bridge-db-monitoring"
+
+echo ===============================================
+echo Signal Batch Server Status
+echo ===============================================
+echo [INFO] Query Time: !date! !time!
+echo [INFO] Target Server: !SERVER_IP!
+
+REM 1. Server Connection Test
+echo.
+echo =============== Server Connection Test ===============
+ssh !SERVER_USER!@!SERVER_IP! "echo 'Server connection OK'" 2>nul
+set CONNECTION_RESULT=!ERRORLEVEL!
+if !CONNECTION_RESULT! neq 0 (
+ echo [ERROR] Server connection failed
+ exit /b 1
+)
+echo [INFO] Server connection successful
+
+REM 2. Application Status
+echo.
+echo =============== Application Status ===============
+ssh !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh status"
+
+REM 3. Additional Status Information
+echo.
+echo =============== Additional Status Information ===============
+
+REM Health Check
+echo [INFO] Health Check:
+ssh !SERVER_USER!@!SERVER_IP! "curl -s http://localhost:8090/actuator/health --max-time 5 2>/dev/null | python -m json.tool 2>/dev/null || echo 'Health endpoint not available'"
+
+echo.
+REM Metrics Information
+echo [INFO] Metrics Information:
+ssh !SERVER_USER!@!SERVER_IP! "curl -s http://localhost:8090/actuator/metrics --max-time 5 2>/dev/null | head -20 || echo 'Metrics endpoint not available'"
+
+echo.
+REM Disk Usage
+echo [INFO] Disk Usage:
+ssh !SERVER_USER!@!SERVER_IP! "df -h !SERVER_PATH!"
+
+echo.
+REM Memory Usage
+echo [INFO] Memory Usage:
+ssh !SERVER_USER!@!SERVER_IP! "free -h"
+
+echo.
+REM Recent Log Check
+echo [INFO] Recent Logs (last 10 lines):
+ssh !SERVER_USER!@!SERVER_IP! "tail -10 !SERVER_PATH!/logs/app.log 2>/dev/null || echo 'Log file not available'"
+
+endlocal
\ No newline at end of file
diff --git a/scripts/setup-ssh-key.bat b/scripts/setup-ssh-key.bat
new file mode 100644
index 0000000..18c9c8c
--- /dev/null
+++ b/scripts/setup-ssh-key.bat
@@ -0,0 +1,59 @@
+@echo off
+chcp 65001 >nul
+echo ===============================================
+echo SSH Key Setup for Server Deployment
+echo ===============================================
+
+set "SERVER_IP=10.26.252.51"
+set "SERVER_USER=root"
+
+echo [INFO] Setting up SSH key authentication for %SERVER_USER%@%SERVER_IP%
+echo.
+
+REM Check if SSH key exists
+if not exist "%USERPROFILE%\.ssh\id_rsa.pub" (
+ echo [INFO] SSH key not found. Generating new SSH key...
+ ssh-keygen -t rsa -b 4096 -f "%USERPROFILE%\.ssh\id_rsa" -N ""
+ if !ERRORLEVEL! neq 0 (
+ echo [ERROR] Failed to generate SSH key
+ pause
+ exit /b 1
+ )
+ echo [SUCCESS] SSH key generated
+)
+
+echo.
+echo [INFO] Copying SSH key to server...
+echo [INFO] You will be prompted for the server password
+echo.
+
+type "%USERPROFILE%\.ssh\id_rsa.pub" | ssh %SERVER_USER%@%SERVER_IP% "mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys && echo '[SUCCESS] SSH key installed'"
+
+if !ERRORLEVEL! neq 0 (
+ echo [ERROR] Failed to copy SSH key
+ echo.
+ echo Please ensure:
+ echo - Server is accessible at %SERVER_IP%
+ echo - You have the correct password for %SERVER_USER%
+ echo - SSH service is running on the server
+ pause
+ exit /b 1
+)
+
+echo.
+echo ===============================================
+echo [SUCCESS] SSH Key Setup Complete!
+echo ===============================================
+echo.
+echo Testing connection...
+ssh -o BatchMode=yes -o ConnectTimeout=10 %SERVER_USER%@%SERVER_IP% "echo '[SUCCESS] SSH key authentication working!'"
+
+if !ERRORLEVEL! equ 0 (
+ echo.
+ echo You can now run deploy-only.bat without password
+) else (
+ echo [WARN] Key authentication test failed
+ echo Please try running this script again
+)
+
+pause
\ No newline at end of file
diff --git a/scripts/stop-running-jobs.sql b/scripts/stop-running-jobs.sql
new file mode 100644
index 0000000..ea9097c
--- /dev/null
+++ b/scripts/stop-running-jobs.sql
@@ -0,0 +1,67 @@
+-- 실행 중인(STARTED) 배치 Job과 Step을 강제 종료
+-- 주의: 실제로 실행 중인 프로세스를 종료하지는 않습니다.
+-- DB 상태만 변경하므로, 애플리케이션을 먼저 중지한 후 사용하세요.
+
+-- 1. 현재 실행 중인 Job 확인
+SELECT
+ '=== RUNNING JOBS ===' as status,
+ JOB_EXECUTION_ID,
+ JOB_INSTANCE_ID,
+ START_TIME,
+ STATUS,
+ (SELECT JOB_NAME FROM BATCH_JOB_INSTANCE WHERE JOB_INSTANCE_ID = bje.JOB_INSTANCE_ID) as JOB_NAME
+FROM BATCH_JOB_EXECUTION bje
+WHERE STATUS IN ('STARTED', 'STARTING', 'STOPPING')
+ORDER BY START_TIME DESC;
+
+-- 2. 실행 중인 Step 확인
+SELECT
+ '=== RUNNING STEPS ===' as status,
+ bse.STEP_EXECUTION_ID,
+ bse.JOB_EXECUTION_ID,
+ bse.STEP_NAME,
+ bse.STATUS,
+ bse.START_TIME
+FROM BATCH_STEP_EXECUTION bse
+WHERE STATUS IN ('STARTED', 'STARTING', 'STOPPING')
+ORDER BY START_TIME DESC;
+
+-- 3. 실행 중인 Step을 STOPPED로 변경
+UPDATE BATCH_STEP_EXECUTION
+SET
+ STATUS = 'STOPPED',
+ EXIT_CODE = 'STOPPED',
+ EXIT_MESSAGE = 'Manually stopped - Original status: ' || STATUS,
+ END_TIME = CURRENT_TIMESTAMP,
+ LAST_UPDATED = CURRENT_TIMESTAMP
+WHERE STATUS IN ('STARTED', 'STARTING', 'STOPPING');
+
+-- 4. 실행 중인 Job을 STOPPED로 변경
+UPDATE BATCH_JOB_EXECUTION
+SET
+ STATUS = 'STOPPED',
+ EXIT_CODE = 'STOPPED',
+ EXIT_MESSAGE = 'Manually stopped - Original status: ' || STATUS,
+ END_TIME = CURRENT_TIMESTAMP,
+ LAST_UPDATED = CURRENT_TIMESTAMP
+WHERE STATUS IN ('STARTED', 'STARTING', 'STOPPING');
+
+-- 5. 결과 확인
+SELECT
+ '=== AFTER STOP ===' as status,
+ COUNT(*) as running_jobs
+FROM BATCH_JOB_EXECUTION
+WHERE STATUS IN ('STARTED', 'STARTING', 'STOPPING');
+
+SELECT
+ '=== STOPPED JOBS ===' as status,
+ JOB_EXECUTION_ID,
+ JOB_INSTANCE_ID,
+ START_TIME,
+ END_TIME,
+ STATUS,
+ EXIT_CODE
+FROM BATCH_JOB_EXECUTION
+WHERE STATUS = 'STOPPED'
+ORDER BY JOB_EXECUTION_ID DESC
+LIMIT 10;
diff --git a/scripts/sync-nexus.sh b/scripts/sync-nexus.sh
new file mode 100644
index 0000000..1be44a5
--- /dev/null
+++ b/scripts/sync-nexus.sh
@@ -0,0 +1,170 @@
+#!/bin/bash
+# =============================================================================
+# sync-nexus.sh - 로컬 Maven 의존성을 Nexus에 동기화
+#
+# 사용법:
+# ./scripts/sync-nexus.sh # 실제 업로드
+# ./scripts/sync-nexus.sh --dry-run # 업로드 대상만 확인
+# =============================================================================
+
+set -eo pipefail
+
+# --- SDKMAN 초기화 (set -u 전에 실행) ---
+if [ -f "$HOME/.sdkman/bin/sdkman-init.sh" ]; then
+ source "$HOME/.sdkman/bin/sdkman-init.sh" 2>/dev/null || true
+fi
+
+# --- 설정 ---
+NEXUS_URL="http://10.26.252.39:8081"
+REPO_ID="mda-backend-repository"
+NEXUS_USER="admin"
+NEXUS_PASS="8932"
+LOCAL_REPO="$HOME/.m2/repository"
+
+# --- 옵션 파싱 ---
+DRY_RUN=false
+if [[ "${1:-}" == "--dry-run" ]]; then
+ DRY_RUN=true
+ echo "=== DRY RUN 모드 (업로드하지 않음) ==="
+fi
+
+# --- 카운터 ---
+TOTAL=0
+SKIPPED=0
+UPLOADED=0
+FAILED=0
+
+# Nexus에 아티팩트 존재 여부 확인 (HTTP HEAD로 .pom 파일 체크)
+check_exists() {
+ local group_path=$1
+ local artifact_id=$2
+ local version=$3
+ local pom_url="${NEXUS_URL}/repository/${REPO_ID}/${group_path}/${artifact_id}/${version}/${artifact_id}-${version}.pom"
+ local http_code
+ http_code=$(curl -s -o /dev/null -w "%{http_code}" -u "${NEXUS_USER}:${NEXUS_PASS}" --connect-timeout 5 "$pom_url" < /dev/null)
+ [[ "$http_code" == "200" ]]
+}
+
+# 파일 업로드 (HTTP PUT)
+upload_file() {
+ local file_path=$1
+ local remote_path=$2
+ local url="${NEXUS_URL}/repository/${REPO_ID}/${remote_path}"
+
+ if [ ! -f "$file_path" ]; then
+ return 1
+ fi
+
+ local http_code
+ http_code=$(curl -s -o /dev/null -w "%{http_code}" -u "${NEXUS_USER}:${NEXUS_PASS}" --upload-file "$file_path" --connect-timeout 10 --max-time 120 "$url" < /dev/null)
+ [[ "$http_code" == "201" || "$http_code" == "200" ]]
+}
+
+# 아티팩트 업로드 (pom + jar + 기타)
+upload_artifact() {
+ local group_id=$1
+ local artifact_id=$2
+ local version=$3
+ local packaging=$4
+
+ local group_path
+ group_path=$(echo "$group_id" | tr '.' '/')
+ local base_dir="${LOCAL_REPO}/${group_path}/${artifact_id}/${version}"
+ local base_name="${artifact_id}-${version}"
+ local remote_base="${group_path}/${artifact_id}/${version}"
+
+ local success=true
+
+ # POM 업로드 (필수)
+ local pom_file="${base_dir}/${base_name}.pom"
+ if [ -f "$pom_file" ]; then
+ if upload_file "$pom_file" "${remote_base}/${base_name}.pom"; then
+ :
+ else
+ echo " [FAIL] POM 업로드 실패"
+ success=false
+ fi
+ fi
+
+ # JAR 업로드 (pom 패키징이 아닌 경우)
+ if [[ "$packaging" != "pom" ]]; then
+ local jar_file="${base_dir}/${base_name}.${packaging}"
+ if [ -f "$jar_file" ]; then
+ if upload_file "$jar_file" "${remote_base}/${base_name}.${packaging}"; then
+ :
+ else
+ echo " [FAIL] ${packaging} 업로드 실패"
+ success=false
+ fi
+ fi
+ fi
+
+ $success
+}
+
+echo ""
+echo "=== Nexus 동기화 시작 ==="
+echo " Nexus: ${NEXUS_URL}/repository/${REPO_ID}"
+echo " 로컬: ${LOCAL_REPO}"
+echo ""
+
+# Nexus 연결 확인
+if ! curl -s -o /dev/null -w "" -u "${NEXUS_USER}:${NEXUS_PASS}" --connect-timeout 5 "${NEXUS_URL}/service/rest/v1/repositories" 2>/dev/null; then
+ echo "[ERROR] Nexus(${NEXUS_URL})에 연결할 수 없습니다."
+ exit 1
+fi
+echo "[OK] Nexus 연결 확인"
+echo ""
+
+# Maven dependency:list로 GAV 목록 추출
+echo "의존성 목록 추출 중..."
+DEP_LIST=$(mvn dependency:list -DoutputAbsoluteArtifactFilename=true 2>/dev/null | grep "^\[INFO\] " | sed 's/\[INFO\] //' | sed 's/ -- .*//')
+
+echo ""
+echo "--- 동기화 진행 ---"
+
+while IFS= read -r line; do
+ # 형식: groupId:artifactId:packaging:version:scope:/path/to/file
+ IFS=':' read -r group_id artifact_id packaging version scope rest <<< "$line"
+
+ if [[ -z "$group_id" || -z "$artifact_id" || -z "$version" ]]; then
+ continue
+ fi
+
+ TOTAL=$((TOTAL + 1))
+ local_group_path=$(echo "$group_id" | tr '.' '/')
+
+ # Nexus 존재 여부 확인
+ if check_exists "$local_group_path" "$artifact_id" "$version"; then
+ SKIPPED=$((SKIPPED + 1))
+ continue
+ fi
+
+ # 신규 아티팩트 발견
+ echo "[NEW] ${group_id}:${artifact_id}:${version} (${packaging})"
+
+ if $DRY_RUN; then
+ UPLOADED=$((UPLOADED + 1))
+ else
+ if upload_artifact "$group_id" "$artifact_id" "$version" "$packaging"; then
+ echo " -> 업로드 완료"
+ UPLOADED=$((UPLOADED + 1))
+ else
+ echo " -> 업로드 실패"
+ FAILED=$((FAILED + 1))
+ fi
+ fi
+
+done <<< "$DEP_LIST"
+
+echo ""
+echo "=== 동기화 완료 ==="
+echo " 전체: ${TOTAL}"
+echo " 스킵 (이미 존재): ${SKIPPED}"
+if $DRY_RUN; then
+ echo " 업로드 대상: ${UPLOADED}"
+else
+ echo " 업로드 성공: ${UPLOADED}"
+ echo " 업로드 실패: ${FAILED}"
+fi
+echo ""
diff --git a/scripts/test-abnormal-tracks-insert.sql b/scripts/test-abnormal-tracks-insert.sql
new file mode 100644
index 0000000..5bf7282
--- /dev/null
+++ b/scripts/test-abnormal-tracks-insert.sql
@@ -0,0 +1,135 @@
+-- t_abnormal_tracks 테스트용 INSERT 쿼리
+-- PostGIS ST_GeomFromText 함수 테스트
+
+-- 1. 기본 테스트 (track_geom 컬럼 사용)
+INSERT INTO signal.t_abnormal_tracks (
+ sig_src_cd,
+ target_id,
+ time_bucket,
+ track_geom,
+ abnormal_type,
+ abnormal_reason,
+ distance_nm,
+ avg_speed,
+ max_speed,
+ point_count,
+ source_table
+) VALUES (
+ 'AIS', -- sig_src_cd
+ 'TEST_VESSEL_001', -- target_id
+ '2025-10-10 12:00:00'::timestamp, -- time_bucket
+ ST_GeomFromText('LINESTRING M(126.0 37.0 1728547200, 126.1 37.1 1728547260)', 4326), -- track_geom (LineString M 타입)
+ 'EXCESSIVE_SPEED', -- abnormal_type
+ '{"reason": "Speed exceeds 200 knots", "detected_speed": 250.5}'::jsonb, -- abnormal_reason
+ 15.5, -- distance_nm
+ 180.3, -- avg_speed
+ 250.5, -- max_speed
+ 10, -- point_count
+ 'hourly' -- source_table
+)
+ON CONFLICT (sig_src_cd, target_id, time_bucket, source_table)
+DO UPDATE SET
+ track_geom = EXCLUDED.track_geom,
+ abnormal_type = EXCLUDED.abnormal_type,
+ abnormal_reason = EXCLUDED.abnormal_reason,
+ distance_nm = EXCLUDED.distance_nm,
+ avg_speed = EXCLUDED.avg_speed,
+ max_speed = EXCLUDED.max_speed,
+ point_count = EXCLUDED.point_count,
+ detected_at = NOW();
+
+-- 2. track_geom_v2 컬럼을 사용하는 경우
+INSERT INTO signal.t_abnormal_tracks (
+ sig_src_cd,
+ target_id,
+ time_bucket,
+ track_geom_v2,
+ abnormal_type,
+ abnormal_reason,
+ distance_nm,
+ avg_speed,
+ max_speed,
+ point_count,
+ source_table
+) VALUES (
+ 'LRIT', -- sig_src_cd
+ 'TEST_VESSEL_002', -- target_id
+ '2025-10-10 13:00:00'::timestamp, -- time_bucket
+ ST_GeomFromText('LINESTRING M(127.0 38.0 1728550800, 127.2 38.2 1728550860, 127.4 38.4 1728550920)', 4326), -- track_geom_v2
+ 'UNREALISTIC_DISTANCE', -- abnormal_type
+ '{"reason": "Distance too large for time interval", "distance_nm": 120.0, "time_interval_minutes": 5}'::jsonb, -- abnormal_reason
+ 120.0, -- distance_nm
+ 1440.0, -- avg_speed (120nm / 5min = 1440 knots)
+ 1500.0, -- max_speed
+ 3, -- point_count
+ '5min' -- source_table
+)
+ON CONFLICT (sig_src_cd, target_id, time_bucket, source_table)
+DO UPDATE SET
+ track_geom_v2 = EXCLUDED.track_geom_v2,
+ abnormal_type = EXCLUDED.abnormal_type,
+ abnormal_reason = EXCLUDED.abnormal_reason,
+ distance_nm = EXCLUDED.distance_nm,
+ avg_speed = EXCLUDED.avg_speed,
+ max_speed = EXCLUDED.max_speed,
+ point_count = EXCLUDED.point_count,
+ detected_at = NOW();
+
+-- 3. public 스키마를 명시적으로 지정한 버전
+INSERT INTO signal.t_abnormal_tracks (
+ sig_src_cd,
+ target_id,
+ time_bucket,
+ track_geom,
+ abnormal_type,
+ abnormal_reason,
+ distance_nm,
+ avg_speed,
+ max_speed,
+ point_count,
+ source_table
+) VALUES (
+ 'VPASS', -- sig_src_cd
+ 'TEST_VESSEL_003', -- target_id
+ '2025-10-10 14:00:00'::timestamp, -- time_bucket
+ public.ST_GeomFromText('LINESTRING M(128.0 36.0 1728554400, 128.1 36.1 1728554460)', 4326), -- public 스키마 명시
+ 'SUDDEN_DIRECTION_CHANGE', -- abnormal_type
+ '{"reason": "Unrealistic turn angle", "angle_degrees": 175}'::jsonb, -- abnormal_reason
+ 8.5, -- distance_nm
+ 102.0, -- avg_speed
+ 120.0, -- max_speed
+ 2, -- point_count
+ 'hourly' -- source_table
+)
+ON CONFLICT (sig_src_cd, target_id, time_bucket, source_table)
+DO UPDATE SET
+ track_geom = EXCLUDED.track_geom,
+ abnormal_type = EXCLUDED.abnormal_type,
+ abnormal_reason = EXCLUDED.abnormal_reason,
+ distance_nm = EXCLUDED.distance_nm,
+ avg_speed = EXCLUDED.avg_speed,
+ max_speed = EXCLUDED.max_speed,
+ point_count = EXCLUDED.point_count,
+ detected_at = NOW();
+
+-- 4. 검증 쿼리
+SELECT
+ sig_src_cd,
+ target_id,
+ time_bucket,
+ abnormal_type,
+ abnormal_reason,
+ distance_nm,
+ avg_speed,
+ max_speed,
+ point_count,
+ source_table,
+ ST_AsText(track_geom) as track_geom_wkt,
+ ST_AsText(track_geom_v2) as track_geom_v2_wkt,
+ detected_at
+FROM signal.t_abnormal_tracks
+WHERE target_id LIKE 'TEST_VESSEL_%'
+ORDER BY time_bucket DESC;
+
+-- 5. 정리 (테스트 데이터 삭제)
+-- DELETE FROM signal.t_abnormal_tracks WHERE target_id LIKE 'TEST_VESSEL_%';
diff --git a/scripts/test-daily-aggregation-fixed.sql b/scripts/test-daily-aggregation-fixed.sql
new file mode 100644
index 0000000..dcbb9b8
--- /dev/null
+++ b/scripts/test-daily-aggregation-fixed.sql
@@ -0,0 +1,496 @@
+-- ========================================
+-- 일별 집계 쿼리 검증 스크립트
+-- CAST 및 타입 호환성 테스트
+-- ========================================
+
+-- 1. 임시 테스트 테이블 생성
+DROP TABLE IF EXISTS test_vessel_tracks_hourly_for_daily CASCADE;
+DROP TABLE IF EXISTS test_vessel_tracks_daily CASCADE;
+
+CREATE TABLE test_vessel_tracks_hourly_for_daily (
+ sig_src_cd VARCHAR(10),
+ target_id VARCHAR(20),
+ time_bucket TIMESTAMP,
+ track_geom geometry(LineStringM, 4326),
+ distance_nm NUMERIC(10,2),
+ avg_speed NUMERIC(6,2),
+ max_speed NUMERIC(6,2),
+ point_count INTEGER,
+ start_position JSONB,
+ end_position JSONB,
+ PRIMARY KEY (sig_src_cd, target_id, time_bucket)
+);
+
+CREATE TABLE test_vessel_tracks_daily (
+ sig_src_cd VARCHAR(10),
+ target_id VARCHAR(20),
+ time_bucket TIMESTAMP,
+ track_geom geometry(LineStringM, 4326),
+ distance_nm NUMERIC(10,2),
+ avg_speed NUMERIC(6,2),
+ max_speed NUMERIC(6,2),
+ point_count INTEGER,
+ start_position JSONB,
+ end_position JSONB,
+ PRIMARY KEY (sig_src_cd, target_id, time_bucket)
+);
+
+-- 2. 샘플 데이터 삽입 (하루치 시간별 데이터)
+-- 시나리오 1: 정상 이동 선박 (24시간 중 일부)
+INSERT INTO test_vessel_tracks_hourly_for_daily VALUES
+(
+ '000001',
+ 'TEST001',
+ '2025-01-07 00:00:00',
+ public.ST_GeomFromText('LINESTRING M(126.5 37.5 1736179200, 126.52 37.52 1736182800)', 4326),
+ 5.5,
+ 10.5,
+ 12.0,
+ 12,
+ '{"lat": 37.5, "lon": 126.5, "time": "2025-01-07 00:00:00", "sog": 10.5}'::jsonb,
+ '{"lat": 37.52, "lon": 126.52, "time": "2025-01-07 01:00:00", "sog": 11.0}'::jsonb
+),
+(
+ '000001',
+ 'TEST001',
+ '2025-01-07 01:00:00',
+ public.ST_GeomFromText('LINESTRING M(126.52 37.52 1736182800, 126.54 37.54 1736186400)', 4326),
+ 6.0,
+ 11.0,
+ 13.0,
+ 12,
+ '{"lat": 37.52, "lon": 126.52, "time": "2025-01-07 01:00:00", "sog": 11.0}'::jsonb,
+ '{"lat": 37.54, "lon": 126.54, "time": "2025-01-07 02:00:00", "sog": 12.0}'::jsonb
+),
+(
+ '000001',
+ 'TEST001',
+ '2025-01-07 02:00:00',
+ public.ST_GeomFromText('LINESTRING M(126.54 37.54 1736186400, 126.56 37.56 1736190000)', 4326),
+ 5.8,
+ 10.8,
+ 12.5,
+ 12,
+ '{"lat": 37.54, "lon": 126.54, "time": "2025-01-07 02:00:00", "sog": 10.8}'::jsonb,
+ '{"lat": 37.56, "lon": 126.56, "time": "2025-01-07 03:00:00", "sog": 11.5}'::jsonb
+),
+(
+ '000001',
+ 'TEST001',
+ '2025-01-07 03:00:00',
+ public.ST_GeomFromText('LINESTRING M(126.56 37.56 1736190000, 126.58 37.58 1736193600)', 4326),
+ 6.2,
+ 11.2,
+ 13.5,
+ 12,
+ '{"lat": 37.56, "lon": 126.56, "time": "2025-01-07 03:00:00", "sog": 11.2}'::jsonb,
+ '{"lat": 37.58, "lon": 126.58, "time": "2025-01-07 04:00:00", "sog": 12.5}'::jsonb
+);
+
+-- 시나리오 2: 정박 선박
+INSERT INTO test_vessel_tracks_hourly_for_daily VALUES
+(
+ '000002',
+ 'TEST002',
+ '2025-01-07 00:00:00',
+ public.ST_GeomFromText('LINESTRING M(129.0 35.0 1736179200, 129.0 35.0 1736182800)', 4326),
+ 0.0,
+ 0.0,
+ 0.5,
+ 24,
+ '{"lat": 35.0, "lon": 129.0, "time": "2025-01-07 00:00:00", "sog": 0.0}'::jsonb,
+ '{"lat": 35.0, "lon": 129.0, "time": "2025-01-07 01:00:00", "sog": 0.0}'::jsonb
+),
+(
+ '000002',
+ 'TEST002',
+ '2025-01-07 01:00:00',
+ public.ST_GeomFromText('LINESTRING M(129.0 35.0 1736182800, 129.0 35.0 1736186400)', 4326),
+ 0.0,
+ 0.0,
+ 0.3,
+ 24,
+ '{"lat": 35.0, "lon": 129.0, "time": "2025-01-07 01:00:00", "sog": 0.0}'::jsonb,
+ '{"lat": 35.0, "lon": 129.0, "time": "2025-01-07 02:00:00", "sog": 0.0}'::jsonb
+);
+
+-- 시나리오 3: 단일 시간 데이터
+INSERT INTO test_vessel_tracks_hourly_for_daily VALUES
+(
+ '000003',
+ 'TEST003',
+ '2025-01-07 00:00:00',
+ public.ST_GeomFromText('LINESTRING M(130.0 36.0 1736179200, 130.0 36.0 1736179200)', 4326),
+ 0.0,
+ 0.0,
+ 0.0,
+ 2,
+ '{"lat": 36.0, "lon": 130.0, "time": "2025-01-07 00:00:00", "sog": 0.0}'::jsonb,
+ '{"lat": 36.0, "lon": 130.0, "time": "2025-01-07 00:00:00", "sog": 0.0}'::jsonb
+);
+
+-- 3. 입력 데이터 검증
+SELECT
+ '=== INPUT DATA VALIDATION ===' as section,
+ sig_src_cd,
+ target_id,
+ time_bucket,
+ public.ST_NPoints(track_geom) as points,
+ public.ST_IsValid(track_geom) as is_valid,
+ public.ST_AsText(track_geom) as wkt
+FROM test_vessel_tracks_hourly_for_daily
+ORDER BY sig_src_cd, target_id, time_bucket;
+
+-- 4. 실제 DailyTrackProcessor SQL 실행 (CAST 사용)
+-- Vessel: 000001_TEST001, Day: 2025-01-07
+WITH ordered_tracks AS (
+ SELECT *
+ FROM test_vessel_tracks_hourly_for_daily
+ WHERE sig_src_cd = '000001'
+ AND target_id = 'TEST001'
+ AND time_bucket >= CAST('2025-01-07 00:00:00' AS timestamp)
+ AND time_bucket < CAST('2025-01-08 00:00:00' AS timestamp)
+ AND track_geom IS NOT NULL
+ AND public.ST_NPoints(track_geom) > 0
+ ORDER BY time_bucket
+),
+merged_coords AS (
+ SELECT
+ sig_src_cd,
+ target_id,
+ string_agg(
+ COALESCE(
+ substring(public.ST_AsText(track_geom) from 'LINESTRING\\s*M\\s*\\((.+)\\)'),
+ substring(public.ST_AsText(track_geom) from '\\((.+)\\)')
+ ),
+ ','
+ ORDER BY time_bucket
+ ) FILTER (WHERE track_geom IS NOT NULL) as all_coords
+ FROM ordered_tracks
+ GROUP BY sig_src_cd, target_id
+),
+merged_tracks AS (
+ SELECT
+ mc.sig_src_cd,
+ mc.target_id,
+ CAST('2025-01-07 00:00:00' AS timestamp) as time_bucket,
+ public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom,
+ (SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
+ (SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
+ (SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
+ (SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
+ (SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
+ (SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
+ FROM merged_coords mc
+),
+calculated_tracks AS (
+ SELECT
+ *,
+ public.ST_Length(merged_geom::geography) / 1852.0 as total_distance,
+ CASE
+ WHEN public.ST_NPoints(merged_geom) > 0 THEN
+ public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) -
+ public.ST_M(public.ST_PointN(merged_geom, 1))
+ ELSE
+ EXTRACT(EPOCH FROM
+ CAST(end_pos->>'time' AS timestamp) - CAST(start_pos->>'time' AS timestamp)
+ )
+ END as time_diff_seconds
+ FROM merged_tracks
+)
+SELECT
+ '=== DAILY AGGREGATION RESULT (VESSEL 000001_TEST001) ===' as section,
+ sig_src_cd,
+ target_id,
+ time_bucket,
+ public.ST_NPoints(merged_geom) as merged_points,
+ public.ST_IsValid(merged_geom) as is_valid,
+ total_distance,
+ CASE
+ WHEN time_diff_seconds > 0 THEN
+ CAST(LEAST((total_distance / (time_diff_seconds / 3600.0)), 9999.99) AS numeric(6,2))
+ ELSE 0
+ END as avg_speed,
+ max_speed,
+ total_points,
+ start_time,
+ end_time,
+ start_pos,
+ end_pos,
+ public.ST_AsText(merged_geom) as geom_text
+FROM calculated_tracks;
+
+-- 5. INSERT 테스트 (CAST 호환성 검증)
+INSERT INTO test_vessel_tracks_daily
+WITH ordered_tracks AS (
+ SELECT *
+ FROM test_vessel_tracks_hourly_for_daily
+ WHERE sig_src_cd = '000001'
+ AND target_id = 'TEST001'
+ AND time_bucket >= CAST('2025-01-07 00:00:00' AS timestamp)
+ AND time_bucket < CAST('2025-01-08 00:00:00' AS timestamp)
+ AND track_geom IS NOT NULL
+ AND public.ST_NPoints(track_geom) > 0
+ ORDER BY time_bucket
+),
+merged_coords AS (
+ SELECT
+ sig_src_cd,
+ target_id,
+ string_agg(
+ COALESCE(
+ substring(public.ST_AsText(track_geom) from 'LINESTRING\\s*M\\s*\\((.+)\\)'),
+ substring(public.ST_AsText(track_geom) from '\\((.+)\\)')
+ ),
+ ','
+ ORDER BY time_bucket
+ ) FILTER (WHERE track_geom IS NOT NULL) as all_coords
+ FROM ordered_tracks
+ GROUP BY sig_src_cd, target_id
+),
+merged_tracks AS (
+ SELECT
+ mc.sig_src_cd,
+ mc.target_id,
+ CAST('2025-01-07 00:00:00' AS timestamp) as time_bucket,
+ public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom,
+ (SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
+ (SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
+ (SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
+ (SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
+ (SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
+ (SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
+ FROM merged_coords mc
+),
+calculated_tracks AS (
+ SELECT
+ *,
+ public.ST_Length(merged_geom::geography) / 1852.0 as total_distance,
+ CASE
+ WHEN public.ST_NPoints(merged_geom) > 0 THEN
+ public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) -
+ public.ST_M(public.ST_PointN(merged_geom, 1))
+ ELSE
+ EXTRACT(EPOCH FROM
+ CAST(end_pos->>'time' AS timestamp) - CAST(start_pos->>'time' AS timestamp)
+ )
+ END as time_diff_seconds
+ FROM merged_tracks
+)
+SELECT
+ sig_src_cd,
+ target_id,
+ time_bucket,
+ merged_geom as track_geom,
+ total_distance as distance_nm,
+ CASE
+ WHEN time_diff_seconds > 0 THEN
+ CAST(LEAST((total_distance / (time_diff_seconds / 3600.0)), 9999.99) AS numeric(6,2))
+ ELSE 0
+ END as avg_speed,
+ max_speed,
+ total_points as point_count,
+ start_pos as start_position,
+ end_pos as end_position
+FROM calculated_tracks;
+
+-- 6. 정박 선박 INSERT 테스트
+INSERT INTO test_vessel_tracks_daily
+WITH ordered_tracks AS (
+ SELECT *
+ FROM test_vessel_tracks_hourly_for_daily
+ WHERE sig_src_cd = '000002'
+ AND target_id = 'TEST002'
+ AND time_bucket >= CAST('2025-01-07 00:00:00' AS timestamp)
+ AND time_bucket < CAST('2025-01-08 00:00:00' AS timestamp)
+ AND track_geom IS NOT NULL
+ AND public.ST_NPoints(track_geom) > 0
+ ORDER BY time_bucket
+),
+merged_coords AS (
+ SELECT
+ sig_src_cd,
+ target_id,
+ string_agg(
+ COALESCE(
+ substring(public.ST_AsText(track_geom) from 'LINESTRING\\s*M\\s*\\((.+)\\)'),
+ substring(public.ST_AsText(track_geom) from '\\((.+)\\)')
+ ),
+ ','
+ ORDER BY time_bucket
+ ) FILTER (WHERE track_geom IS NOT NULL) as all_coords
+ FROM ordered_tracks
+ GROUP BY sig_src_cd, target_id
+),
+merged_tracks AS (
+ SELECT
+ mc.sig_src_cd,
+ mc.target_id,
+ CAST('2025-01-07 00:00:00' AS timestamp) as time_bucket,
+ public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom,
+ (SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
+ (SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
+ (SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
+ (SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
+ (SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
+ (SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
+ FROM merged_coords mc
+),
+calculated_tracks AS (
+ SELECT
+ *,
+ public.ST_Length(merged_geom::geography) / 1852.0 as total_distance,
+ CASE
+ WHEN public.ST_NPoints(merged_geom) > 0 THEN
+ public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) -
+ public.ST_M(public.ST_PointN(merged_geom, 1))
+ ELSE
+ EXTRACT(EPOCH FROM
+ CAST(end_pos->>'time' AS timestamp) - CAST(start_pos->>'time' AS timestamp)
+ )
+ END as time_diff_seconds
+ FROM merged_tracks
+)
+SELECT
+ sig_src_cd,
+ target_id,
+ time_bucket,
+ merged_geom as track_geom,
+ total_distance as distance_nm,
+ CASE
+ WHEN time_diff_seconds > 0 THEN
+ CAST(LEAST((total_distance / (time_diff_seconds / 3600.0)), 9999.99) AS numeric(6,2))
+ ELSE 0
+ END as avg_speed,
+ max_speed,
+ total_points as point_count,
+ start_pos as start_position,
+ end_pos as end_position
+FROM calculated_tracks;
+
+-- 7. 단일 시간 선박 INSERT 테스트
+INSERT INTO test_vessel_tracks_daily
+WITH ordered_tracks AS (
+ SELECT *
+ FROM test_vessel_tracks_hourly_for_daily
+ WHERE sig_src_cd = '000003'
+ AND target_id = 'TEST003'
+ AND time_bucket >= CAST('2025-01-07 00:00:00' AS timestamp)
+ AND time_bucket < CAST('2025-01-08 00:00:00' AS timestamp)
+ AND track_geom IS NOT NULL
+ AND public.ST_NPoints(track_geom) > 0
+ ORDER BY time_bucket
+),
+merged_coords AS (
+ SELECT
+ sig_src_cd,
+ target_id,
+ string_agg(
+ COALESCE(
+ substring(public.ST_AsText(track_geom) from 'LINESTRING\\s*M\\s*\\((.+)\\)'),
+ substring(public.ST_AsText(track_geom) from '\\((.+)\\)')
+ ),
+ ','
+ ORDER BY time_bucket
+ ) FILTER (WHERE track_geom IS NOT NULL) as all_coords
+ FROM ordered_tracks
+ GROUP BY sig_src_cd, target_id
+),
+merged_tracks AS (
+ SELECT
+ mc.sig_src_cd,
+ mc.target_id,
+ CAST('2025-01-07 00:00:00' AS timestamp) as time_bucket,
+ public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom,
+ (SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
+ (SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
+ (SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
+ (SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
+ (SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
+ (SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
+ FROM merged_coords mc
+),
+calculated_tracks AS (
+ SELECT
+ *,
+ public.ST_Length(merged_geom::geography) / 1852.0 as total_distance,
+ CASE
+ WHEN public.ST_NPoints(merged_geom) > 0 THEN
+ public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) -
+ public.ST_M(public.ST_PointN(merged_geom, 1))
+ ELSE
+ EXTRACT(EPOCH FROM
+ CAST(end_pos->>'time' AS timestamp) - CAST(start_pos->>'time' AS timestamp)
+ )
+ END as time_diff_seconds
+ FROM merged_tracks
+)
+SELECT
+ sig_src_cd,
+ target_id,
+ time_bucket,
+ merged_geom as track_geom,
+ total_distance as distance_nm,
+ CASE
+ WHEN time_diff_seconds > 0 THEN
+ CAST(LEAST((total_distance / (time_diff_seconds / 3600.0)), 9999.99) AS numeric(6,2))
+ ELSE 0
+ END as avg_speed,
+ max_speed,
+ total_points as point_count,
+ start_pos as start_position,
+ end_pos as end_position
+FROM calculated_tracks;
+
+-- 8. 최종 결과 검증
+SELECT
+ '=== FINAL DAILY AGGREGATION RESULTS ===' as section,
+ sig_src_cd,
+ target_id,
+ time_bucket,
+ public.ST_NPoints(track_geom) as points,
+ public.ST_IsValid(track_geom) as is_valid,
+ distance_nm,
+ avg_speed,
+ max_speed,
+ point_count,
+ public.ST_AsText(track_geom) as wkt
+FROM test_vessel_tracks_daily
+ORDER BY sig_src_cd, target_id;
+
+-- 9. 타입 검증
+SELECT
+ '=== DATA TYPE VALIDATION ===' as section,
+ pg_typeof(time_bucket) as time_bucket_type,
+ pg_typeof(track_geom) as track_geom_type,
+ pg_typeof(distance_nm) as distance_type,
+ pg_typeof(avg_speed) as avg_speed_type,
+ pg_typeof(max_speed) as max_speed_type,
+ pg_typeof(point_count) as point_count_type,
+ pg_typeof(start_position) as start_position_type
+FROM test_vessel_tracks_daily
+LIMIT 1;
+
+-- 10. 시간 순서 검증 (M값이 증가하는지 확인)
+SELECT
+ '=== TIME ORDERING VALIDATION ===' as section,
+ sig_src_cd,
+ target_id,
+ public.ST_M(public.ST_PointN(track_geom, 1)) as first_m_value,
+ public.ST_M(public.ST_PointN(track_geom, public.ST_NPoints(track_geom))) as last_m_value,
+ CASE
+ WHEN public.ST_M(public.ST_PointN(track_geom, public.ST_NPoints(track_geom))) >=
+ public.ST_M(public.ST_PointN(track_geom, 1))
+ THEN 'PASS'
+ ELSE 'FAIL'
+ END as time_order_check
+FROM test_vessel_tracks_daily;
+
+-- 11. 정리
+DROP TABLE IF EXISTS test_vessel_tracks_hourly_for_daily CASCADE;
+DROP TABLE IF EXISTS test_vessel_tracks_daily CASCADE;
+
+-- ========================================
+-- 테스트 완료
+-- 모든 INSERT가 성공하고 타입 에러가 없으면 CAST 사용이 정상
+-- ========================================
diff --git a/scripts/test-hourly-aggregation-fixed.sql b/scripts/test-hourly-aggregation-fixed.sql
new file mode 100644
index 0000000..9a99d65
--- /dev/null
+++ b/scripts/test-hourly-aggregation-fixed.sql
@@ -0,0 +1,484 @@
+-- ========================================
+-- 시간별 집계 쿼리 검증 스크립트
+-- CAST 및 타입 호환성 테스트
+-- ========================================
+
+-- 1. 임시 테스트 테이블 생성
+DROP TABLE IF EXISTS test_vessel_tracks_5min CASCADE;
+DROP TABLE IF EXISTS test_vessel_tracks_hourly CASCADE;
+
+CREATE TABLE test_vessel_tracks_5min (
+ sig_src_cd VARCHAR(10),
+ target_id VARCHAR(20),
+ time_bucket TIMESTAMP,
+ track_geom geometry(LineStringM, 4326),
+ distance_nm NUMERIC(10,2),
+ avg_speed NUMERIC(6,2),
+ max_speed NUMERIC(6,2),
+ point_count INTEGER,
+ start_position JSONB,
+ end_position JSONB,
+ PRIMARY KEY (sig_src_cd, target_id, time_bucket)
+);
+
+CREATE TABLE test_vessel_tracks_hourly (
+ sig_src_cd VARCHAR(10),
+ target_id VARCHAR(20),
+ time_bucket TIMESTAMP,
+ track_geom geometry(LineStringM, 4326),
+ distance_nm NUMERIC(10,2),
+ avg_speed NUMERIC(6,2),
+ max_speed NUMERIC(6,2),
+ point_count INTEGER,
+ start_position JSONB,
+ end_position JSONB,
+ PRIMARY KEY (sig_src_cd, target_id, time_bucket)
+);
+
+-- 2. 샘플 데이터 삽입 (1시간치 5분 간격 데이터)
+-- 시나리오 1: 정상 이동 선박
+INSERT INTO test_vessel_tracks_5min VALUES
+(
+ '000001',
+ 'TEST001',
+ '2025-01-07 10:00:00',
+ public.ST_GeomFromText('LINESTRING M(126.5 37.5 1736215200, 126.51 37.51 1736215260, 126.52 37.52 1736215320)', 4326),
+ 0.5,
+ 10.5,
+ 12.0,
+ 3,
+ '{"lat": 37.5, "lon": 126.5, "time": "2025-01-07 10:00:00", "sog": 10.5}'::jsonb,
+ '{"lat": 37.52, "lon": 126.52, "time": "2025-01-07 10:02:00", "sog": 11.0}'::jsonb
+),
+(
+ '000001',
+ 'TEST001',
+ '2025-01-07 10:05:00',
+ public.ST_GeomFromText('LINESTRING M(126.52 37.52 1736215500, 126.53 37.53 1736215560, 126.54 37.54 1736215620)', 4326),
+ 0.6,
+ 11.0,
+ 13.0,
+ 3,
+ '{"lat": 37.52, "lon": 126.52, "time": "2025-01-07 10:05:00", "sog": 11.0}'::jsonb,
+ '{"lat": 37.54, "lon": 126.54, "time": "2025-01-07 10:07:00", "sog": 12.0}'::jsonb
+),
+(
+ '000001',
+ 'TEST001',
+ '2025-01-07 10:10:00',
+ public.ST_GeomFromText('LINESTRING M(126.54 37.54 1736215800, 126.55 37.55 1736215860)', 4326),
+ 0.4,
+ 9.5,
+ 11.0,
+ 2,
+ '{"lat": 37.54, "lon": 126.54, "time": "2025-01-07 10:10:00", "sog": 9.5}'::jsonb,
+ '{"lat": 37.55, "lon": 126.55, "time": "2025-01-07 10:11:00", "sog": 10.0}'::jsonb
+);
+
+-- 시나리오 2: 정박 선박 (같은 좌표 반복)
+INSERT INTO test_vessel_tracks_5min VALUES
+(
+ '000002',
+ 'TEST002',
+ '2025-01-07 10:00:00',
+ public.ST_GeomFromText('LINESTRING M(129.0 35.0 1736215200, 129.0 35.0 1736215260)', 4326),
+ 0.0,
+ 0.0,
+ 0.5,
+ 2,
+ '{"lat": 35.0, "lon": 129.0, "time": "2025-01-07 10:00:00", "sog": 0.0}'::jsonb,
+ '{"lat": 35.0, "lon": 129.0, "time": "2025-01-07 10:01:00", "sog": 0.0}'::jsonb
+),
+(
+ '000002',
+ 'TEST002',
+ '2025-01-07 10:05:00',
+ public.ST_GeomFromText('LINESTRING M(129.0 35.0 1736215500, 129.0 35.0 1736215560)', 4326),
+ 0.0,
+ 0.0,
+ 0.3,
+ 2,
+ '{"lat": 35.0, "lon": 129.0, "time": "2025-01-07 10:05:00", "sog": 0.0}'::jsonb,
+ '{"lat": 35.0, "lon": 129.0, "time": "2025-01-07 10:06:00", "sog": 0.0}'::jsonb
+);
+
+-- 시나리오 3: 단일 포인트 (중복 포인트로 유효한 LineString)
+INSERT INTO test_vessel_tracks_5min VALUES
+(
+ '000003',
+ 'TEST003',
+ '2025-01-07 10:00:00',
+ public.ST_GeomFromText('LINESTRING M(130.0 36.0 1736215200, 130.0 36.0 1736215200)', 4326),
+ 0.0,
+ 0.0,
+ 0.0,
+ 1,
+ '{"lat": 36.0, "lon": 130.0, "time": "2025-01-07 10:00:00", "sog": 0.0}'::jsonb,
+ '{"lat": 36.0, "lon": 130.0, "time": "2025-01-07 10:00:00", "sog": 0.0}'::jsonb
+);
+
+-- 3. 입력 데이터 검증
+SELECT
+ '=== INPUT DATA VALIDATION ===' as section,
+ sig_src_cd,
+ target_id,
+ time_bucket,
+ public.ST_NPoints(track_geom) as points,
+ public.ST_IsValid(track_geom) as is_valid,
+ public.ST_AsText(track_geom) as wkt
+FROM test_vessel_tracks_5min
+ORDER BY sig_src_cd, target_id, time_bucket;
+
+-- 4. 실제 HourlyTrackProcessor SQL 실행 (CAST 사용)
+-- Vessel: 000001_TEST001, Hour: 2025-01-07 10:00:00
+WITH ordered_tracks AS (
+ SELECT *
+ FROM test_vessel_tracks_5min
+ WHERE sig_src_cd = '000001'
+ AND target_id = 'TEST001'
+ AND time_bucket >= CAST('2025-01-07 10:00:00' AS timestamp)
+ AND time_bucket < CAST('2025-01-07 11:00:00' AS timestamp)
+ AND track_geom IS NOT NULL
+ AND public.ST_NPoints(track_geom) > 0
+ ORDER BY time_bucket
+),
+merged_coords AS (
+ SELECT
+ sig_src_cd,
+ target_id,
+ string_agg(
+ COALESCE(
+ substring(public.ST_AsText(track_geom) from 'LINESTRING\\s*M\\s*\\((.+)\\)'),
+ substring(public.ST_AsText(track_geom) from '\\((.+)\\)')
+ ),
+ ','
+ ORDER BY time_bucket
+ ) FILTER (WHERE track_geom IS NOT NULL) as all_coords
+ FROM ordered_tracks
+ GROUP BY sig_src_cd, target_id
+),
+merged_tracks AS (
+ SELECT
+ mc.sig_src_cd,
+ mc.target_id,
+ CAST('2025-01-07 10:00:00' AS timestamp) as time_bucket,
+ public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom,
+ (SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
+ (SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
+ (SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
+ (SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
+ (SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
+ (SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
+ FROM merged_coords mc
+),
+calculated_tracks AS (
+ SELECT
+ *,
+ public.ST_Length(merged_geom::geography) / 1852.0 as total_distance,
+ CASE
+ WHEN public.ST_NPoints(merged_geom) > 0 THEN
+ public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) -
+ public.ST_M(public.ST_PointN(merged_geom, 1))
+ ELSE
+ EXTRACT(EPOCH FROM
+ CAST(end_pos->>'time' AS timestamp) - CAST(start_pos->>'time' AS timestamp)
+ )
+ END as time_diff_seconds
+ FROM merged_tracks
+)
+SELECT
+ '=== HOURLY AGGREGATION RESULT (VESSEL 000001_TEST001) ===' as section,
+ sig_src_cd,
+ target_id,
+ time_bucket,
+ public.ST_NPoints(merged_geom) as merged_points,
+ public.ST_IsValid(merged_geom) as is_valid,
+ total_distance,
+ CASE
+ WHEN time_diff_seconds > 0 THEN
+ CAST(LEAST((total_distance / (time_diff_seconds / 3600.0)), 9999.99) AS numeric(6,2))
+ ELSE 0
+ END as avg_speed,
+ max_speed,
+ total_points,
+ start_time,
+ end_time,
+ start_pos,
+ end_pos,
+ public.ST_AsText(merged_geom) as geom_text
+FROM calculated_tracks;
+
+-- 5. INSERT 테스트 (CAST 호환성 검증)
+INSERT INTO test_vessel_tracks_hourly
+WITH ordered_tracks AS (
+ SELECT *
+ FROM test_vessel_tracks_5min
+ WHERE sig_src_cd = '000001'
+ AND target_id = 'TEST001'
+ AND time_bucket >= CAST('2025-01-07 10:00:00' AS timestamp)
+ AND time_bucket < CAST('2025-01-07 11:00:00' AS timestamp)
+ AND track_geom IS NOT NULL
+ AND public.ST_NPoints(track_geom) > 0
+ ORDER BY time_bucket
+),
+merged_coords AS (
+ SELECT
+ sig_src_cd,
+ target_id,
+ string_agg(
+ COALESCE(
+ substring(public.ST_AsText(track_geom) from 'LINESTRING\\s*M\\s*\\((.+)\\)'),
+ substring(public.ST_AsText(track_geom) from '\\((.+)\\)')
+ ),
+ ','
+ ORDER BY time_bucket
+ ) FILTER (WHERE track_geom IS NOT NULL) as all_coords
+ FROM ordered_tracks
+ GROUP BY sig_src_cd, target_id
+),
+merged_tracks AS (
+ SELECT
+ mc.sig_src_cd,
+ mc.target_id,
+ CAST('2025-01-07 10:00:00' AS timestamp) as time_bucket,
+ public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom,
+ (SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
+ (SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
+ (SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
+ (SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
+ (SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
+ (SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
+ FROM merged_coords mc
+),
+calculated_tracks AS (
+ SELECT
+ *,
+ public.ST_Length(merged_geom::geography) / 1852.0 as total_distance,
+ CASE
+ WHEN public.ST_NPoints(merged_geom) > 0 THEN
+ public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) -
+ public.ST_M(public.ST_PointN(merged_geom, 1))
+ ELSE
+ EXTRACT(EPOCH FROM
+ CAST(end_pos->>'time' AS timestamp) - CAST(start_pos->>'time' AS timestamp)
+ )
+ END as time_diff_seconds
+ FROM merged_tracks
+)
+SELECT
+ sig_src_cd,
+ target_id,
+ time_bucket,
+ merged_geom as track_geom,
+ total_distance as distance_nm,
+ CASE
+ WHEN time_diff_seconds > 0 THEN
+ CAST(LEAST((total_distance / (time_diff_seconds / 3600.0)), 9999.99) AS numeric(6,2))
+ ELSE 0
+ END as avg_speed,
+ max_speed,
+ total_points as point_count,
+ start_pos as start_position,
+ end_pos as end_position
+FROM calculated_tracks;
+
+-- 6. 정박 선박 INSERT 테스트
+INSERT INTO test_vessel_tracks_hourly
+WITH ordered_tracks AS (
+ SELECT *
+ FROM test_vessel_tracks_5min
+ WHERE sig_src_cd = '000002'
+ AND target_id = 'TEST002'
+ AND time_bucket >= CAST('2025-01-07 10:00:00' AS timestamp)
+ AND time_bucket < CAST('2025-01-07 11:00:00' AS timestamp)
+ AND track_geom IS NOT NULL
+ AND public.ST_NPoints(track_geom) > 0
+ ORDER BY time_bucket
+),
+merged_coords AS (
+ SELECT
+ sig_src_cd,
+ target_id,
+ string_agg(
+ COALESCE(
+ substring(public.ST_AsText(track_geom) from 'LINESTRING\\s*M\\s*\\((.+)\\)'),
+ substring(public.ST_AsText(track_geom) from '\\((.+)\\)')
+ ),
+ ','
+ ORDER BY time_bucket
+ ) FILTER (WHERE track_geom IS NOT NULL) as all_coords
+ FROM ordered_tracks
+ GROUP BY sig_src_cd, target_id
+),
+merged_tracks AS (
+ SELECT
+ mc.sig_src_cd,
+ mc.target_id,
+ CAST('2025-01-07 10:00:00' AS timestamp) as time_bucket,
+ public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom,
+ (SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
+ (SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
+ (SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
+ (SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
+ (SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
+ (SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
+ FROM merged_coords mc
+),
+calculated_tracks AS (
+ SELECT
+ *,
+ public.ST_Length(merged_geom::geography) / 1852.0 as total_distance,
+ CASE
+ WHEN public.ST_NPoints(merged_geom) > 0 THEN
+ public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) -
+ public.ST_M(public.ST_PointN(merged_geom, 1))
+ ELSE
+ EXTRACT(EPOCH FROM
+ CAST(end_pos->>'time' AS timestamp) - CAST(start_pos->>'time' AS timestamp)
+ )
+ END as time_diff_seconds
+ FROM merged_tracks
+)
+SELECT
+ sig_src_cd,
+ target_id,
+ time_bucket,
+ merged_geom as track_geom,
+ total_distance as distance_nm,
+ CASE
+ WHEN time_diff_seconds > 0 THEN
+ CAST(LEAST((total_distance / (time_diff_seconds / 3600.0)), 9999.99) AS numeric(6,2))
+ ELSE 0
+ END as avg_speed,
+ max_speed,
+ total_points as point_count,
+ start_pos as start_position,
+ end_pos as end_position
+FROM calculated_tracks;
+
+-- 7. 단일 포인트 선박 INSERT 테스트
+INSERT INTO test_vessel_tracks_hourly
+WITH ordered_tracks AS (
+ SELECT *
+ FROM test_vessel_tracks_5min
+ WHERE sig_src_cd = '000003'
+ AND target_id = 'TEST003'
+ AND time_bucket >= CAST('2025-01-07 10:00:00' AS timestamp)
+ AND time_bucket < CAST('2025-01-07 11:00:00' AS timestamp)
+ AND track_geom IS NOT NULL
+ AND public.ST_NPoints(track_geom) > 0
+ ORDER BY time_bucket
+),
+merged_coords AS (
+ SELECT
+ sig_src_cd,
+ target_id,
+ string_agg(
+ COALESCE(
+ substring(public.ST_AsText(track_geom) from 'LINESTRING\\s*M\\s*\\((.+)\\)'),
+ substring(public.ST_AsText(track_geom) from '\\((.+)\\)')
+ ),
+ ','
+ ORDER BY time_bucket
+ ) FILTER (WHERE track_geom IS NOT NULL) as all_coords
+ FROM ordered_tracks
+ GROUP BY sig_src_cd, target_id
+),
+merged_tracks AS (
+ SELECT
+ mc.sig_src_cd,
+ mc.target_id,
+ CAST('2025-01-07 10:00:00' AS timestamp) as time_bucket,
+ public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom,
+ (SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
+ (SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
+ (SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
+ (SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
+ (SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
+ (SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
+ FROM merged_coords mc
+),
+calculated_tracks AS (
+ SELECT
+ *,
+ public.ST_Length(merged_geom::geography) / 1852.0 as total_distance,
+ CASE
+ WHEN public.ST_NPoints(merged_geom) > 0 THEN
+ public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) -
+ public.ST_M(public.ST_PointN(merged_geom, 1))
+ ELSE
+ EXTRACT(EPOCH FROM
+ CAST(end_pos->>'time' AS timestamp) - CAST(start_pos->>'time' AS timestamp)
+ )
+ END as time_diff_seconds
+ FROM merged_tracks
+)
+SELECT
+ sig_src_cd,
+ target_id,
+ time_bucket,
+ merged_geom as track_geom,
+ total_distance as distance_nm,
+ CASE
+ WHEN time_diff_seconds > 0 THEN
+ CAST(LEAST((total_distance / (time_diff_seconds / 3600.0)), 9999.99) AS numeric(6,2))
+ ELSE 0
+ END as avg_speed,
+ max_speed,
+ total_points as point_count,
+ start_pos as start_position,
+ end_pos as end_position
+FROM calculated_tracks;
+
+-- 8. 최종 결과 검증
+SELECT
+ '=== FINAL HOURLY AGGREGATION RESULTS ===' as section,
+ sig_src_cd,
+ target_id,
+ time_bucket,
+ public.ST_NPoints(track_geom) as points,
+ public.ST_IsValid(track_geom) as is_valid,
+ distance_nm,
+ avg_speed,
+ max_speed,
+ point_count,
+ public.ST_AsText(track_geom) as wkt
+FROM test_vessel_tracks_hourly
+ORDER BY sig_src_cd, target_id;
+
+-- 9. 타입 검증
+SELECT
+ '=== DATA TYPE VALIDATION ===' as section,
+ pg_typeof(time_bucket) as time_bucket_type,
+ pg_typeof(track_geom) as track_geom_type,
+ pg_typeof(distance_nm) as distance_type,
+ pg_typeof(avg_speed) as avg_speed_type,
+ pg_typeof(max_speed) as max_speed_type,
+ pg_typeof(point_count) as point_count_type,
+ pg_typeof(start_position) as start_position_type
+FROM test_vessel_tracks_hourly
+LIMIT 1;
+
+-- 10. 시간 순서 검증 (M값이 증가하는지 확인)
+SELECT
+ '=== TIME ORDERING VALIDATION ===' as section,
+ sig_src_cd,
+ target_id,
+ public.ST_M(public.ST_PointN(track_geom, 1)) as first_m_value,
+ public.ST_M(public.ST_PointN(track_geom, public.ST_NPoints(track_geom))) as last_m_value,
+ CASE
+ WHEN public.ST_M(public.ST_PointN(track_geom, public.ST_NPoints(track_geom))) >=
+ public.ST_M(public.ST_PointN(track_geom, 1))
+ THEN 'PASS'
+ ELSE 'FAIL'
+ END as time_order_check
+FROM test_vessel_tracks_hourly;
+
+-- 11. 정리
+DROP TABLE IF EXISTS test_vessel_tracks_5min CASCADE;
+DROP TABLE IF EXISTS test_vessel_tracks_hourly CASCADE;
+
+-- ========================================
+-- 테스트 완료
+-- 모든 INSERT가 성공하고 타입 에러가 없으면 CAST 사용이 정상
+-- ========================================
diff --git a/scripts/test-with-real-data.sql b/scripts/test-with-real-data.sql
new file mode 100644
index 0000000..6ebe31b
--- /dev/null
+++ b/scripts/test-with-real-data.sql
@@ -0,0 +1,274 @@
+-- ========================================
+-- 실제 테이블 데이터로 CAST 호환성 테스트
+-- ========================================
+
+-- 1. 최근 5분 데이터 샘플 확인 (100개)
+SELECT
+ '=== SAMPLE 5MIN DATA ===' as section,
+ sig_src_cd,
+ target_id,
+ time_bucket,
+ public.ST_NPoints(track_geom) as points,
+ public.ST_IsValid(track_geom) as is_valid
+FROM signal.t_vessel_tracks_5min
+WHERE track_geom IS NOT NULL
+ AND public.ST_NPoints(track_geom) > 0
+ORDER BY time_bucket DESC
+LIMIT 100;
+
+-- 2. 테스트할 선박 선정 (최근 1시간 내 5분 데이터가 있는 선박)
+WITH recent_vessels AS (
+ SELECT
+ sig_src_cd,
+ target_id,
+ DATE_TRUNC('hour', time_bucket) as hour_bucket,
+ COUNT(*) as record_count,
+ MIN(time_bucket) as min_time,
+ MAX(time_bucket) as max_time
+ FROM signal.t_vessel_tracks_5min
+ WHERE time_bucket >= CURRENT_TIMESTAMP - INTERVAL '24 hours'
+ AND track_geom IS NOT NULL
+ AND public.ST_NPoints(track_geom) > 0
+ GROUP BY sig_src_cd, target_id, DATE_TRUNC('hour', time_bucket)
+ HAVING COUNT(*) >= 2
+ ORDER BY hour_bucket DESC
+ LIMIT 10
+)
+SELECT
+ '=== TEST CANDIDATE VESSELS ===' as section,
+ sig_src_cd,
+ target_id,
+ hour_bucket,
+ record_count,
+ min_time,
+ max_time
+FROM recent_vessels;
+
+-- 3. 특정 선박의 5분 데이터 상세 확인
+-- 아래 값들을 위 결과에서 선택해서 수정하세요
+-- 예시: sig_src_cd = '000019', target_id = '111440547', hour_bucket = '2025-01-07 10:00:00'
+\set test_sig_src_cd '000019'
+\set test_target_id '111440547'
+\set test_hour_start '''2025-01-07 10:00:00'''
+\set test_hour_end '''2025-01-07 11:00:00'''
+
+SELECT
+ '=== 5MIN DATA FOR TEST VESSEL ===' as section,
+ sig_src_cd,
+ target_id,
+ time_bucket,
+ public.ST_NPoints(track_geom) as points,
+ public.ST_IsValid(track_geom) as is_valid,
+ public.ST_GeometryType(track_geom) as geom_type,
+ public.ST_AsText(track_geom) as wkt,
+ substring(public.ST_AsText(track_geom) from 'LINESTRING\\s*M\\s*\\((.+)\\)') as regex_v1,
+ COALESCE(
+ substring(public.ST_AsText(track_geom) from 'LINESTRING\\s*M\\s*\\((.+)\\)'),
+ substring(public.ST_AsText(track_geom) from '\\((.+)\\)')
+ ) as regex_v2
+FROM signal.t_vessel_tracks_5min
+WHERE sig_src_cd = :'test_sig_src_cd'
+ AND target_id = :'test_target_id'
+ AND time_bucket >= CAST(:test_hour_start AS timestamp)
+ AND time_bucket < CAST(:test_hour_end AS timestamp)
+ AND track_geom IS NOT NULL
+ AND public.ST_NPoints(track_geom) > 0
+ORDER BY time_bucket;
+
+-- 4. string_agg 결과 확인
+SELECT
+ '=== STRING_AGG TEST ===' as section,
+ sig_src_cd,
+ target_id,
+ string_agg(
+ COALESCE(
+ substring(public.ST_AsText(track_geom) from 'LINESTRING\\s*M\\s*\\((.+)\\)'),
+ substring(public.ST_AsText(track_geom) from '\\((.+)\\)')
+ ),
+ ','
+ ORDER BY time_bucket
+ ) FILTER (WHERE track_geom IS NOT NULL) as all_coords,
+ COUNT(*) as track_count
+FROM signal.t_vessel_tracks_5min
+WHERE sig_src_cd = :'test_sig_src_cd'
+ AND target_id = :'test_target_id'
+ AND time_bucket >= CAST(:test_hour_start AS timestamp)
+ AND time_bucket < CAST(:test_hour_end AS timestamp)
+ AND track_geom IS NOT NULL
+ AND public.ST_NPoints(track_geom) > 0
+GROUP BY sig_src_cd, target_id;
+
+-- 5. 병합된 WKT로 geometry 생성 테스트
+WITH ordered_tracks AS (
+ SELECT *
+ FROM signal.t_vessel_tracks_5min
+ WHERE sig_src_cd = :'test_sig_src_cd'
+ AND target_id = :'test_target_id'
+ AND time_bucket >= CAST(:test_hour_start AS timestamp)
+ AND time_bucket < CAST(:test_hour_end AS timestamp)
+ AND track_geom IS NOT NULL
+ AND public.ST_NPoints(track_geom) > 0
+ ORDER BY time_bucket
+),
+merged_coords AS (
+ SELECT
+ sig_src_cd,
+ target_id,
+ string_agg(
+ COALESCE(
+ substring(public.ST_AsText(track_geom) from 'LINESTRING\\s*M\\s*\\((.+)\\)'),
+ substring(public.ST_AsText(track_geom) from '\\((.+)\\)')
+ ),
+ ','
+ ORDER BY time_bucket
+ ) FILTER (WHERE track_geom IS NOT NULL) as all_coords
+ FROM ordered_tracks
+ GROUP BY sig_src_cd, target_id
+)
+SELECT
+ '=== WKT GENERATION TEST ===' as section,
+ sig_src_cd,
+ target_id,
+ 'LINESTRING M(' || all_coords || ')' as full_wkt,
+ LENGTH(all_coords) as coords_length,
+ public.ST_GeomFromText('LINESTRING M(' || all_coords || ')', 4326) as test_geom,
+ public.ST_NPoints(public.ST_GeomFromText('LINESTRING M(' || all_coords || ')', 4326)) as merged_points,
+ public.ST_IsValid(public.ST_GeomFromText('LINESTRING M(' || all_coords || ')', 4326)) as is_valid
+FROM merged_coords;
+
+-- 6. 전체 시간별 집계 쿼리 실행 (SELECT만, INSERT 안함)
+WITH ordered_tracks AS (
+ SELECT *
+ FROM signal.t_vessel_tracks_5min
+ WHERE sig_src_cd = :'test_sig_src_cd'
+ AND target_id = :'test_target_id'
+ AND time_bucket >= CAST(:test_hour_start AS timestamp)
+ AND time_bucket < CAST(:test_hour_end AS timestamp)
+ AND track_geom IS NOT NULL
+ AND public.ST_NPoints(track_geom) > 0
+ ORDER BY time_bucket
+),
+merged_coords AS (
+ SELECT
+ sig_src_cd,
+ target_id,
+ string_agg(
+ COALESCE(
+ substring(public.ST_AsText(track_geom) from 'LINESTRING\\s*M\\s*\\((.+)\\)'),
+ substring(public.ST_AsText(track_geom) from '\\((.+)\\)')
+ ),
+ ','
+ ORDER BY time_bucket
+ ) FILTER (WHERE track_geom IS NOT NULL) as all_coords
+ FROM ordered_tracks
+ GROUP BY sig_src_cd, target_id
+),
+merged_tracks AS (
+ SELECT
+ mc.sig_src_cd,
+ mc.target_id,
+ CAST(:test_hour_start AS timestamp) as time_bucket,
+ public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom,
+ (SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
+ (SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
+ (SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
+ (SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
+ (SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
+ (SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
+ FROM merged_coords mc
+),
+calculated_tracks AS (
+ SELECT
+ *,
+ public.ST_Length(merged_geom::geography) / 1852.0 as total_distance,
+ CASE
+ WHEN public.ST_NPoints(merged_geom) > 0 THEN
+ public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) -
+ public.ST_M(public.ST_PointN(merged_geom, 1))
+ ELSE
+ EXTRACT(EPOCH FROM
+ CAST(end_pos->>'time' AS timestamp) - CAST(start_pos->>'time' AS timestamp)
+ )
+ END as time_diff_seconds
+ FROM merged_tracks
+)
+SELECT
+ '=== FULL HOURLY AGGREGATION TEST ===' as section,
+ sig_src_cd,
+ target_id,
+ time_bucket,
+ public.ST_NPoints(merged_geom) as merged_points,
+ public.ST_IsValid(merged_geom) as is_valid,
+ total_distance,
+ CASE
+ WHEN time_diff_seconds > 0 THEN
+ CAST(LEAST((total_distance / (time_diff_seconds / 3600.0)), 9999.99) AS numeric(6,2))
+ ELSE 0
+ END as avg_speed,
+ max_speed,
+ total_points,
+ start_time,
+ end_time,
+ start_pos,
+ end_pos,
+ public.ST_AsText(merged_geom) as geom_text,
+ time_diff_seconds
+FROM calculated_tracks;
+
+-- 7. M값 시간 순서 검증
+WITH ordered_tracks AS (
+ SELECT *
+ FROM signal.t_vessel_tracks_5min
+ WHERE sig_src_cd = :'test_sig_src_cd'
+ AND target_id = :'test_target_id'
+ AND time_bucket >= CAST(:test_hour_start AS timestamp)
+ AND time_bucket < CAST(:test_hour_end AS timestamp)
+ AND track_geom IS NOT NULL
+ AND public.ST_NPoints(track_geom) > 0
+ ORDER BY time_bucket
+),
+merged_coords AS (
+ SELECT
+ sig_src_cd,
+ target_id,
+ string_agg(
+ COALESCE(
+ substring(public.ST_AsText(track_geom) from 'LINESTRING\\s*M\\s*\\((.+)\\)'),
+ substring(public.ST_AsText(track_geom) from '\\((.+)\\)')
+ ),
+ ','
+ ORDER BY time_bucket
+ ) FILTER (WHERE track_geom IS NOT NULL) as all_coords
+ FROM ordered_tracks
+ GROUP BY sig_src_cd, target_id
+),
+merged_tracks AS (
+ SELECT
+ mc.sig_src_cd,
+ mc.target_id,
+ public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom
+ FROM merged_coords mc
+)
+SELECT
+ '=== TIME ORDERING CHECK ===' as section,
+ sig_src_cd,
+ target_id,
+ public.ST_M(public.ST_PointN(merged_geom, 1)) as first_m_value,
+ to_timestamp(public.ST_M(public.ST_PointN(merged_geom, 1))) as first_time,
+ public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) as last_m_value,
+ to_timestamp(public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom)))) as last_time,
+ CASE
+ WHEN public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) >=
+ public.ST_M(public.ST_PointN(merged_geom, 1))
+ THEN 'PASS'
+ ELSE 'FAIL'
+ END as time_order_check
+FROM merged_tracks;
+
+-- ========================================
+-- 사용 방법:
+-- 1. 먼저 쿼리 2번 실행해서 테스트할 선박 선택
+-- 2. \set 변수 값 수정 (라인 48-51)
+-- 3. 전체 스크립트 실행
+-- 4. 각 섹션별 결과 확인
+-- ========================================
diff --git a/scripts/vessel-batch-control.sh b/scripts/vessel-batch-control.sh
new file mode 100644
index 0000000..687d390
--- /dev/null
+++ b/scripts/vessel-batch-control.sh
@@ -0,0 +1,215 @@
+#!/bin/bash
+
+# Vessel Batch 관리 스크립트
+# 시작, 중지, 상태 확인 등 기본 관리 기능
+
+# 애플리케이션 경로
+APP_HOME="/devdata/apps/bridge-db-monitoring"
+JAR_FILE="$APP_HOME/vessel-batch-aggregation.jar"
+PID_FILE="$APP_HOME/vessel-batch.pid"
+LOG_DIR="$APP_HOME/logs"
+
+# Java 17 경로
+JAVA_HOME="/devdata/apps/jdk-17.0.8"
+JAVA_BIN="$JAVA_HOME/bin/java"
+
+# 색상 코드
+RED='\033[0;31m'
+GREEN='\033[0;32m'
+YELLOW='\033[1;33m'
+NC='\033[0m'
+
+# 함수: PID 확인
+get_pid() {
+ if [ -f "$PID_FILE" ]; then
+ PID=$(cat $PID_FILE)
+ if kill -0 $PID 2>/dev/null; then
+ echo $PID
+ else
+ rm -f $PID_FILE
+ echo ""
+ fi
+ else
+ PID=$(pgrep -f "$JAR_FILE")
+ echo $PID
+ fi
+}
+
+# 함수: 상태 확인
+status() {
+ PID=$(get_pid)
+ if [ ! -z "$PID" ]; then
+ echo -e "${GREEN}✓ Vessel Batch is running (PID: $PID)${NC}"
+
+ # 프로세스 정보
+ echo ""
+ ps aux | grep $PID | grep -v grep
+
+ # Health Check
+ echo ""
+ echo "Health Check:"
+ curl -s http://localhost:8090/actuator/health 2>/dev/null | python -m json.tool || echo "Health endpoint not available"
+
+ # 처리 상태
+ echo ""
+ echo "Processing Status:"
+ if command -v psql >/dev/null 2>&1; then
+ psql -h localhost -U mda -d mdadb -c "
+ SELECT
+ NOW() - MAX(last_update) as processing_delay,
+ COUNT(*) as vessel_count
+ FROM signal.t_vessel_latest_position;" 2>/dev/null || echo "Unable to query database"
+ fi
+
+ return 0
+ else
+ echo -e "${RED}✗ Vessel Batch is not running${NC}"
+ return 1
+ fi
+}
+
+# 함수: 시작
+start() {
+ PID=$(get_pid)
+ if [ ! -z "$PID" ]; then
+ echo -e "${YELLOW}Vessel Batch is already running (PID: $PID)${NC}"
+ return 1
+ fi
+
+ echo "Starting Vessel Batch..."
+ cd $APP_HOME
+ $APP_HOME/run-on-query-server-dev.sh
+}
+
+# 함수: 중지
+stop() {
+ PID=$(get_pid)
+ if [ -z "$PID" ]; then
+ echo -e "${YELLOW}Vessel Batch is not running${NC}"
+ return 1
+ fi
+
+ echo "Stopping Vessel Batch (PID: $PID)..."
+ kill -15 $PID
+
+ # 종료 대기
+ for i in {1..30}; do
+ if ! kill -0 $PID 2>/dev/null; then
+ echo -e "${GREEN}✓ Vessel Batch stopped successfully${NC}"
+ rm -f $PID_FILE
+ return 0
+ fi
+ echo -n "."
+ sleep 1
+ done
+
+ echo ""
+ echo -e "${RED}Process did not stop gracefully, force killing...${NC}"
+ kill -9 $PID
+ rm -f $PID_FILE
+}
+
+# 함수: 재시작
+restart() {
+ echo "Restarting Vessel Batch..."
+ stop
+ sleep 3
+ start
+}
+
+# 함수: 로그 보기
+logs() {
+ if [ ! -d "$LOG_DIR" ]; then
+ echo "Log directory not found: $LOG_DIR"
+ return 1
+ fi
+
+ echo "Available log files:"
+ ls -lh $LOG_DIR/*.log 2>/dev/null
+
+ echo ""
+ echo "Tailing app.log (Ctrl+C to exit)..."
+ tail -f $LOG_DIR/app.log
+}
+
+# 함수: 최근 에러 확인
+errors() {
+ if [ ! -f "$LOG_DIR/app.log" ]; then
+ echo "Log file not found: $LOG_DIR/app.log"
+ return 1
+ fi
+
+ echo "Recent errors (last 50 lines with ERROR):"
+ grep "ERROR" $LOG_DIR/app.log | tail -50
+
+ echo ""
+ echo "Error summary:"
+ echo "Total errors: $(grep -c "ERROR" $LOG_DIR/app.log)"
+ echo "Errors today: $(grep "ERROR" $LOG_DIR/app.log | grep "$(date +%Y-%m-%d)" | wc -l)"
+}
+
+# 함수: 성능 통계
+stats() {
+ echo "Performance Statistics"
+ echo "===================="
+
+ if [ -f "$LOG_DIR/resource-monitor.csv" ]; then
+ echo "Recent resource usage:"
+ tail -5 $LOG_DIR/resource-monitor.csv | column -t -s,
+ fi
+
+ echo ""
+ echo "Batch job statistics:"
+ if command -v psql >/dev/null 2>&1; then
+ psql -h localhost -U mda -d mdadb -c "
+ SELECT
+ job_name,
+ COUNT(*) as executions,
+ AVG(EXTRACT(EPOCH FROM (end_time - start_time))/60)::numeric(10,2) as avg_duration_min,
+ MAX(end_time) as last_execution
+ FROM batch_job_execution je
+ JOIN batch_job_instance ji ON je.job_instance_id = ji.job_instance_id
+ WHERE end_time > CURRENT_DATE - INTERVAL '7 days'
+ GROUP BY job_name;" 2>/dev/null || echo "Unable to query batch statistics"
+ fi
+}
+
+# 메인 로직
+case "$1" in
+ start)
+ start
+ ;;
+ stop)
+ stop
+ ;;
+ restart)
+ restart
+ ;;
+ status)
+ status
+ ;;
+ logs)
+ logs
+ ;;
+ errors)
+ errors
+ ;;
+ stats)
+ stats
+ ;;
+ *)
+ echo "Usage: $0 {start|stop|restart|status|logs|errors|stats}"
+ echo ""
+ echo "Commands:"
+ echo " start - Start the Vessel Batch application"
+ echo " stop - Stop the Vessel Batch application"
+ echo " restart - Restart the Vessel Batch application"
+ echo " status - Check application status and health"
+ echo " logs - Tail application logs"
+ echo " errors - Show recent errors from logs"
+ echo " stats - Show performance statistics"
+ exit 1
+ ;;
+esac
+
+exit $?
diff --git a/scripts/vessel-batch-start-prod.sh b/scripts/vessel-batch-start-prod.sh
new file mode 100644
index 0000000..8b7d5f4
--- /dev/null
+++ b/scripts/vessel-batch-start-prod.sh
@@ -0,0 +1,191 @@
+#!/bin/bash
+
+# Query DB 서버에서 최적화된 실행 스크립트 (PROD 프로파일)
+# Rocky Linux 환경에 맞춰 조정됨
+# Java 17 경로 명시적 지정
+
+# 애플리케이션 경로
+APP_HOME="/devdata/apps/bridge-db-monitoring"
+JAR_FILE="$APP_HOME/vessel-batch-aggregation.jar"
+
+# Java 17 경로
+JAVA_HOME="/devdata/apps/jdk-17.0.8"
+JAVA_BIN="$JAVA_HOME/bin/java"
+
+# 로그 디렉토리
+LOG_DIR="$APP_HOME/logs"
+mkdir -p $LOG_DIR
+
+echo "================================================"
+echo "Vessel Batch Aggregation - PROD Profile"
+echo "Start Time: $(date)"
+echo "================================================"
+
+# 경로 확인
+echo "Environment Check:"
+echo "- App Home: $APP_HOME"
+echo "- JAR File: $JAR_FILE"
+echo "- Java Path: $JAVA_BIN"
+echo "- Java Version: $($JAVA_BIN -version 2>&1 | head -1)"
+
+# JAR 파일 존재 확인
+if [ ! -f "$JAR_FILE" ]; then
+ echo "ERROR: JAR file not found at $JAR_FILE"
+ exit 1
+fi
+
+# Java 실행 파일 확인
+if [ ! -x "$JAVA_BIN" ]; then
+ echo "ERROR: Java not found or not executable at $JAVA_BIN"
+ exit 1
+fi
+
+# 서버 정보 확인
+echo ""
+echo "Server Info:"
+echo "- Hostname: $(hostname)"
+echo "- CPU Cores: $(nproc)"
+echo "- Total Memory: $(free -h | grep Mem | awk '{print $2}')"
+echo "- PostgreSQL Version: $(psql --version 2>/dev/null | head -1 || echo 'PostgreSQL client not in PATH')"
+
+# 환경 변수 설정 (PROD 프로파일)
+export SPRING_PROFILES_ACTIVE=prod
+
+# Query DB와 Batch Meta DB를 localhost로 오버라이드
+export SPRING_DATASOURCE_QUERY_JDBC_URL="jdbc:postgresql://localhost:5432/mdadb?currentSchema=signal&options=-csearch_path=signal,public&assumeMinServerVersion=12&reWriteBatchedInserts=true"
+export SPRING_DATASOURCE_BATCH_JDBC_URL="jdbc:postgresql://localhost:5432/mdadb?currentSchema=public&assumeMinServerVersion=12&reWriteBatchedInserts=true"
+
+# 서버 CPU 코어 수에 따른 병렬 처리 조정
+CPU_CORES=$(nproc)
+export VESSEL_BATCH_PARTITION_SIZE=$((CPU_CORES * 2))
+export VESSEL_BATCH_BULK_INSERT_PARALLEL_THREADS=$((CPU_CORES / 2))
+
+echo ""
+echo "Optimized Settings:"
+echo "- Active Profile: PROD"
+echo "- Partition Size: $VESSEL_BATCH_PARTITION_SIZE"
+echo "- Parallel Threads: $VESSEL_BATCH_BULK_INSERT_PARALLEL_THREADS"
+echo "- Query DB: localhost (optimized)"
+echo "- Batch Meta DB: localhost (optimized)"
+
+# JVM 옵션 (서버 메모리에 맞게 조정)
+TOTAL_MEM=$(free -g | grep Mem | awk '{print $2}')
+JVM_HEAP=$((TOTAL_MEM / 8)) # 전체 메모리의 25% 사용
+
+# 최소 16GB, 최대 32GB로 제한
+if [ $JVM_HEAP -lt 8 ]; then
+ JVM_HEAP=8
+elif [ $JVM_HEAP -gt 16 ]; then
+ JVM_HEAP=16
+fi
+
+JAVA_OPTS="-Xms${JVM_HEAP}g -Xmx${JVM_HEAP}g \
+ -XX:+UseG1GC \
+ -XX:MaxGCPauseMillis=200 \
+ -XX:+UseStringDeduplication \
+ -XX:+ParallelRefProcEnabled \
+ -XX:ParallelGCThreads=$((CPU_CORES / 2)) \
+ -XX:ConcGCThreads=$((CPU_CORES / 4)) \
+ -XX:+HeapDumpOnOutOfMemoryError \
+ -XX:HeapDumpPath=$LOG_DIR/heapdump.hprof \
+ -Dfile.encoding=UTF-8 \
+ -Duser.timezone=Asia/Seoul \
+ -Djava.security.egd=file:/dev/./urandom \
+ -Dspring.profiles.active=prod"
+
+echo "- JVM Heap Size: ${JVM_HEAP}GB"
+
+# 기존 프로세스 확인 및 종료
+echo ""
+echo "Checking for existing process..."
+PID=$(pgrep -f "$JAR_FILE")
+if [ ! -z "$PID" ]; then
+ echo "Stopping existing process (PID: $PID)..."
+ kill -15 $PID
+
+ # 프로세스 종료 대기 (최대 30초)
+ for i in {1..30}; do
+ if ! kill -0 $PID 2>/dev/null; then
+ echo "Process stopped successfully."
+ break
+ fi
+ if [ $i -eq 30 ]; then
+ echo "Force killing process..."
+ kill -9 $PID
+ fi
+ sleep 1
+ done
+fi
+
+# 작업 디렉토리로 이동
+cd $APP_HOME
+
+# 애플리케이션 실행 (nice로 우선순위 조정)
+echo ""
+echo "Starting application with PROD profile..."
+echo "Command: nice -n 10 $JAVA_BIN $JAVA_OPTS -jar $JAR_FILE"
+echo ""
+
+# nohup으로 백그라운드 실행
+nohup nice -n 10 $JAVA_BIN $JAVA_OPTS -jar $JAR_FILE \
+ > $LOG_DIR/app.log 2>&1 &
+
+NEW_PID=$!
+echo "Application started with PID: $NEW_PID"
+
+# PID 파일 생성
+echo $NEW_PID > $APP_HOME/vessel-batch.pid
+
+# 시작 확인 (30초 대기)
+echo "Waiting for application startup..."
+STARTUP_SUCCESS=false
+for i in {1..30}; do
+ if grep -q "Started SignalBatchApplication" $LOG_DIR/app.log 2>/dev/null; then
+ echo "✅ Application started successfully!"
+ STARTUP_SUCCESS=true
+ break
+ fi
+ echo -n "."
+ sleep 1
+done
+
+if [ "$STARTUP_SUCCESS" = false ]; then
+ echo ""
+ echo "⚠️ Application startup timeout. Check logs for errors."
+ echo "Log file: $LOG_DIR/app.log"
+ tail -20 $LOG_DIR/app.log
+fi
+
+echo ""
+echo "================================================"
+echo "Deployment Complete!"
+echo "- Profile: PROD"
+echo "- PID: $NEW_PID"
+echo "- PID File: $APP_HOME/vessel-batch.pid"
+echo "- Log: $LOG_DIR/app.log"
+echo "- Monitor: tail -f $LOG_DIR/app.log"
+echo "================================================"
+
+# 초기 상태 확인
+sleep 5
+echo ""
+echo "Initial Status Check:"
+curl -s http://localhost:8090/actuator/health 2>/dev/null | python -m json.tool || echo "Health endpoint not yet available"
+
+# 활성 프로파일 확인
+echo ""
+echo "Active Profile Check:"
+curl -s http://localhost:8090/actuator/env | grep -A 5 "activeProfiles" 2>/dev/null || echo "Env endpoint not yet available"
+
+# 리소스 사용량 표시
+echo ""
+echo "Resource Usage:"
+ps aux | grep $NEW_PID | grep -v grep
+
+# 빠른 명령어 안내
+echo ""
+echo "Useful Commands:"
+echo "- Stop: kill -15 \$(cat $APP_HOME/vessel-batch.pid)"
+echo "- Logs: tail -f $LOG_DIR/app.log"
+echo "- Status: curl http://localhost:8090/actuator/health"
+echo "- Monitor: $APP_HOME/monitor-query-server.sh"
diff --git a/scripts/websocket-load-test.py b/scripts/websocket-load-test.py
new file mode 100644
index 0000000..f7aecd1
--- /dev/null
+++ b/scripts/websocket-load-test.py
@@ -0,0 +1,175 @@
+#!/usr/bin/env python3
+"""
+WebSocket 부하 테스트 자동화 스크립트
+"""
+import asyncio
+import json
+import time
+import statistics
+from datetime import datetime, timedelta
+import websockets
+import stomper
+from concurrent.futures import ThreadPoolExecutor
+
+class WebSocketLoadTest:
+ def __init__(self, base_url="ws://10.26.252.48:8090/ws-tracks"):
+ self.base_url = base_url
+ self.results = []
+ self.active_connections = 0
+
+ async def single_client_test(self, client_id, duration_seconds=60):
+ """단일 클라이언트 테스트"""
+ start_time = time.time()
+ messages_received = 0
+ bytes_received = 0
+ errors = 0
+
+ try:
+ async with websockets.connect(self.base_url) as websocket:
+ self.active_connections += 1
+ print(f"Client {client_id}: Connected")
+
+ # STOMP CONNECT
+ connect_frame = stomper.connect(host='/', accept_version='1.2')
+ await websocket.send(connect_frame)
+
+ # Subscribe to data channel
+ sub_frame = stomper.subscribe('/user/queue/tracks/data', client_id)
+ await websocket.send(sub_frame)
+
+ # Send query request
+ query_request = {
+ "startTime": (datetime.now() - timedelta(days=1)).isoformat(),
+ "endTime": datetime.now().isoformat(),
+ "viewport": {
+ "minLon": 124.0,
+ "maxLon": 132.0,
+ "minLat": 33.0,
+ "maxLat": 38.0
+ },
+ "filters": {
+ "minDistance": 10,
+ "minSpeed": 5
+ },
+ "chunkSize": 2000
+ }
+
+ send_frame = stomper.send('/app/tracks/query', json.dumps(query_request))
+ await websocket.send(send_frame)
+
+ # Receive messages
+ while time.time() - start_time < duration_seconds:
+ try:
+ message = await asyncio.wait_for(websocket.recv(), timeout=1.0)
+ messages_received += 1
+ bytes_received += len(message)
+
+ # Parse STOMP frame
+ frame = stomper.Frame()
+ frame.parse(message)
+
+ if frame.cmd == 'MESSAGE':
+ data = json.loads(frame.body)
+ if data.get('type') == 'complete':
+ print(f"Client {client_id}: Query completed")
+ break
+
+ except asyncio.TimeoutError:
+ continue
+ except Exception as e:
+ errors += 1
+ print(f"Client {client_id}: Error - {e}")
+
+ except Exception as e:
+ errors += 1
+ print(f"Client {client_id}: Connection error - {e}")
+ finally:
+ self.active_connections -= 1
+
+ # Calculate results
+ elapsed_time = time.time() - start_time
+ result = {
+ 'client_id': client_id,
+ 'duration': elapsed_time,
+ 'messages': messages_received,
+ 'bytes': bytes_received,
+ 'errors': errors,
+ 'msg_per_sec': messages_received / elapsed_time if elapsed_time > 0 else 0,
+ 'mbps': (bytes_received / 1024 / 1024) / elapsed_time if elapsed_time > 0 else 0
+ }
+
+ self.results.append(result)
+ return result
+
+ async def run_load_test(self, num_clients=10, duration=60):
+ """병렬 부하 테스트 실행"""
+ print(f"Starting load test with {num_clients} clients for {duration} seconds...")
+
+ tasks = []
+ for i in range(num_clients):
+ task = asyncio.create_task(self.single_client_test(i, duration))
+ tasks.append(task)
+ await asyncio.sleep(0.1) # Stagger connections
+
+ # Wait for all clients to complete
+ await asyncio.gather(*tasks)
+
+ # Print summary
+ self.print_summary()
+
+ def print_summary(self):
+ """테스트 결과 요약 출력"""
+ print("\n" + "="*60)
+ print("LOAD TEST SUMMARY")
+ print("="*60)
+
+ total_messages = sum(r['messages'] for r in self.results)
+ total_bytes = sum(r['bytes'] for r in self.results)
+ total_errors = sum(r['errors'] for r in self.results)
+ avg_msg_per_sec = statistics.mean(r['msg_per_sec'] for r in self.results)
+ avg_mbps = statistics.mean(r['mbps'] for r in self.results)
+
+ print(f"Total Clients: {len(self.results)}")
+ print(f"Total Messages: {total_messages:,}")
+ print(f"Total Data: {total_bytes/1024/1024:.2f} MB")
+ print(f"Total Errors: {total_errors}")
+ print(f"Avg Messages/sec per client: {avg_msg_per_sec:.2f}")
+ print(f"Avg Throughput per client: {avg_mbps:.2f} MB/s")
+ print(f"Total Throughput: {avg_mbps * len(self.results):.2f} MB/s")
+
+ # Error rate
+ error_rate = (total_errors / len(self.results)) * 100 if self.results else 0
+ print(f"Error Rate: {error_rate:.2f}%")
+
+ # Success rate
+ successful_clients = sum(1 for r in self.results if r['errors'] == 0)
+ success_rate = (successful_clients / len(self.results)) * 100 if self.results else 0
+ print(f"Success Rate: {success_rate:.2f}%")
+
+ print("="*60)
+
+async def main():
+ # Test scenarios
+ scenarios = [
+ {"clients": 10, "duration": 60, "name": "Light Load"},
+ {"clients": 50, "duration": 120, "name": "Medium Load"},
+ {"clients": 100, "duration": 180, "name": "Heavy Load"}
+ ]
+
+ for scenario in scenarios:
+ print(f"\n{'='*60}")
+ print(f"Running scenario: {scenario['name']}")
+ print(f"{'='*60}")
+
+ tester = WebSocketLoadTest()
+ await tester.run_load_test(
+ num_clients=scenario['clients'],
+ duration=scenario['duration']
+ )
+
+ # Wait between scenarios
+ print(f"\nWaiting 30 seconds before next scenario...")
+ await asyncio.sleep(30)
+
+if __name__ == "__main__":
+ asyncio.run(main())
diff --git a/sql/V2_snp_schema_migration.sql b/sql/V2_snp_schema_migration.sql
new file mode 100644
index 0000000..0462c0d
--- /dev/null
+++ b/sql/V2_snp_schema_migration.sql
@@ -0,0 +1,584 @@
+-- ============================================================
+-- gc-signal-batch V2: SNP API 기반 스키마 (신규 생성)
+-- 타겟 DB: snpdb (211.208.115.83), 스키마: signal
+--
+-- 핵심 변경:
+-- sig_src_cd + target_id → mmsi VARCHAR(20) 단일 식별자
+-- t_vessel_latest_position → t_ais_position (새 구조)
+-- 신규: t_vessel_static (정적 정보 이력)
+--
+-- 실행 전 확인:
+-- 1. PostGIS 확장이 설치되어 있는지 확인
+-- 2. signal 스키마가 존재하는지 확인
+-- 3. 파티션 테이블은 PartitionManager가 런타임에 자동 생성
+-- ============================================================
+
+-- 스키마 생성
+CREATE SCHEMA IF NOT EXISTS signal;
+
+-- PostGIS 확장 활성화
+CREATE EXTENSION IF NOT EXISTS postgis;
+
+-- ============================================================
+-- 1. AIS 위치/정적 정보 (SNP API 전용, 신규)
+-- ============================================================
+
+-- t_ais_position: AIS 최신 위치 (MMSI별 1건 UPSERT)
+-- 용도: 캐시 복원, 타 프로세스 최신 위치 조회, API 불가 환경 대응
+-- 갱신: 5분 집계 Job에서 캐시 스냅샷 UPSERT
+CREATE TABLE IF NOT EXISTS signal.t_ais_position (
+ mmsi VARCHAR(20) PRIMARY KEY,
+ imo BIGINT,
+ name VARCHAR(50),
+ callsign VARCHAR(20),
+ vessel_type VARCHAR(50),
+ extra_info VARCHAR(200),
+ lat DOUBLE PRECISION NOT NULL,
+ lon DOUBLE PRECISION NOT NULL,
+ geom GEOMETRY(POINT, 4326),
+ heading DOUBLE PRECISION,
+ sog DOUBLE PRECISION,
+ cog DOUBLE PRECISION,
+ rot INTEGER,
+ length INTEGER,
+ width INTEGER,
+ draught DOUBLE PRECISION,
+ destination VARCHAR(200),
+ eta TIMESTAMPTZ,
+ status VARCHAR(50),
+ message_timestamp TIMESTAMPTZ NOT NULL,
+ signal_kind_code VARCHAR(10),
+ class_type VARCHAR(1),
+ last_update TIMESTAMPTZ DEFAULT NOW()
+);
+
+CREATE INDEX IF NOT EXISTS idx_ais_position_geom ON signal.t_ais_position USING GIST (geom);
+CREATE INDEX IF NOT EXISTS idx_ais_position_signal_kind ON signal.t_ais_position (signal_kind_code);
+CREATE INDEX IF NOT EXISTS idx_ais_position_timestamp ON signal.t_ais_position (message_timestamp);
+
+COMMENT ON TABLE signal.t_ais_position IS 'AIS 최신 위치 (MMSI별 1건, 5분 집계 Job에서 UPSERT)';
+COMMENT ON COLUMN signal.t_ais_position.mmsi IS 'MMSI (VARCHAR — 문자 혼합 MMSI 장비 지원)';
+COMMENT ON COLUMN signal.t_ais_position.signal_kind_code IS 'MDA 범례코드 (SignalKindCode.resolve 결과)';
+
+-- t_vessel_static: 정적 정보 이력 (위변조/흘수 변경 추적)
+-- 전략: COALESCE + CDC 하이브리드 (HourlyJob에서 저장)
+-- 보존: 90일
+CREATE TABLE IF NOT EXISTS signal.t_vessel_static (
+ mmsi VARCHAR(20) NOT NULL,
+ time_bucket TIMESTAMPTZ NOT NULL,
+ imo BIGINT,
+ name VARCHAR(50),
+ callsign VARCHAR(20),
+ vessel_type VARCHAR(50),
+ extra_info VARCHAR(200),
+ length INTEGER,
+ width INTEGER,
+ draught DOUBLE PRECISION,
+ destination VARCHAR(200),
+ eta TIMESTAMPTZ,
+ status VARCHAR(50),
+ signal_kind_code VARCHAR(10),
+ class_type VARCHAR(1),
+ PRIMARY KEY (mmsi, time_bucket)
+);
+
+CREATE INDEX IF NOT EXISTS idx_vessel_static_mmsi ON signal.t_vessel_static (mmsi);
+
+COMMENT ON TABLE signal.t_vessel_static IS '선박 정적 정보 이력 (시간별, COALESCE+CDC). 보존 90일';
+
+-- ============================================================
+-- 2. 핵심 항적 테이블 (5분/시간/일별 — 파티션)
+-- ============================================================
+
+-- t_vessel_tracks_5min: 5분 단위 항적 (일별 파티션)
+CREATE TABLE IF NOT EXISTS signal.t_vessel_tracks_5min (
+ mmsi VARCHAR(20) NOT NULL,
+ time_bucket TIMESTAMP NOT NULL,
+ track_geom GEOMETRY(LINESTRINGM, 4326),
+ distance_nm NUMERIC(10,2),
+ avg_speed NUMERIC(6,2),
+ max_speed NUMERIC(6,2),
+ point_count INTEGER,
+ start_position JSONB,
+ end_position JSONB,
+ created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+ CONSTRAINT t_vessel_tracks_5min_pkey PRIMARY KEY (mmsi, time_bucket)
+) PARTITION BY RANGE (time_bucket);
+
+CREATE INDEX IF NOT EXISTS idx_tracks_5min_mmsi ON signal.t_vessel_tracks_5min (mmsi);
+CREATE INDEX IF NOT EXISTS idx_tracks_5min_bucket ON signal.t_vessel_tracks_5min (time_bucket);
+
+COMMENT ON TABLE signal.t_vessel_tracks_5min IS '선박 항적 5분 단위 집계';
+COMMENT ON COLUMN signal.t_vessel_tracks_5min.mmsi IS 'MMSI (VARCHAR)';
+COMMENT ON COLUMN signal.t_vessel_tracks_5min.track_geom IS 'LineStringM 형식 항적 (M값은 첫 포인트 기준 상대시간 초)';
+COMMENT ON COLUMN signal.t_vessel_tracks_5min.start_position IS '시작 위치 JSON {lat, lon, time, sog}';
+COMMENT ON COLUMN signal.t_vessel_tracks_5min.end_position IS '종료 위치 JSON {lat, lon, time, sog}';
+
+-- t_vessel_tracks_hourly: 시간별 항적 (월별 파티션)
+CREATE TABLE IF NOT EXISTS signal.t_vessel_tracks_hourly (
+ mmsi VARCHAR(20) NOT NULL,
+ time_bucket TIMESTAMP NOT NULL,
+ track_geom GEOMETRY(LINESTRINGM, 4326),
+ distance_nm NUMERIC(10,2),
+ avg_speed NUMERIC(6,2),
+ max_speed NUMERIC(6,2),
+ point_count INTEGER,
+ start_position JSONB,
+ end_position JSONB,
+ created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+ CONSTRAINT t_vessel_tracks_hourly_pkey PRIMARY KEY (mmsi, time_bucket)
+) PARTITION BY RANGE (time_bucket);
+
+CREATE INDEX IF NOT EXISTS idx_tracks_hourly_mmsi ON signal.t_vessel_tracks_hourly (mmsi);
+CREATE INDEX IF NOT EXISTS idx_tracks_hourly_bucket ON signal.t_vessel_tracks_hourly (time_bucket);
+CREATE INDEX IF NOT EXISTS idx_tracks_hourly_geom ON signal.t_vessel_tracks_hourly USING GIST (track_geom);
+
+COMMENT ON TABLE signal.t_vessel_tracks_hourly IS '선박 항적 시간별 집계';
+
+-- t_vessel_tracks_daily: 일별 항적 (월별 파티션)
+CREATE TABLE IF NOT EXISTS signal.t_vessel_tracks_daily (
+ mmsi VARCHAR(20) NOT NULL,
+ time_bucket DATE NOT NULL,
+ track_geom GEOMETRY(LINESTRINGM, 4326),
+ distance_nm NUMERIC(10,2),
+ avg_speed NUMERIC(6,2),
+ max_speed NUMERIC(6,2),
+ point_count INTEGER,
+ operating_hours NUMERIC(4,2),
+ port_visits JSONB,
+ start_position JSONB,
+ end_position JSONB,
+ created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+ CONSTRAINT t_vessel_tracks_daily_pkey PRIMARY KEY (mmsi, time_bucket)
+) PARTITION BY RANGE (time_bucket);
+
+CREATE INDEX IF NOT EXISTS idx_tracks_daily_mmsi ON signal.t_vessel_tracks_daily (mmsi);
+CREATE INDEX IF NOT EXISTS idx_tracks_daily_bucket ON signal.t_vessel_tracks_daily (time_bucket);
+CREATE INDEX IF NOT EXISTS idx_tracks_daily_geom ON signal.t_vessel_tracks_daily USING GIST (track_geom);
+
+COMMENT ON TABLE signal.t_vessel_tracks_daily IS '선박 항적 일별 집계';
+
+-- ============================================================
+-- 3. 해구(Grid) 관련 테이블 — 파티션
+-- ============================================================
+
+-- t_haegu_definitions: 대해구 정의 (일반 테이블)
+CREATE TABLE IF NOT EXISTS signal.t_haegu_definitions (
+ haegu_no INTEGER NOT NULL,
+ min_lat DOUBLE PRECISION NOT NULL,
+ min_lon DOUBLE PRECISION NOT NULL,
+ max_lat DOUBLE PRECISION NOT NULL,
+ max_lon DOUBLE PRECISION NOT NULL,
+ center_lat DOUBLE PRECISION NOT NULL,
+ center_lon DOUBLE PRECISION NOT NULL,
+ geom GEOMETRY(MULTIPOLYGON, 4326) NOT NULL,
+ center_point GEOMETRY(POINT, 4326) NOT NULL,
+ created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+ CONSTRAINT t_haegu_definitions_pkey PRIMARY KEY (haegu_no)
+);
+
+CREATE INDEX IF NOT EXISTS idx_haegu_definitions_geom ON signal.t_haegu_definitions USING GIST (geom);
+
+COMMENT ON TABLE signal.t_haegu_definitions IS '대해구 정의 정보';
+
+-- t_grid_tiles: 그리드 타일 정의 (일반 테이블)
+CREATE TABLE IF NOT EXISTS signal.t_grid_tiles (
+ tile_id VARCHAR(50) NOT NULL,
+ tile_level INTEGER NOT NULL,
+ haegu_no INTEGER NOT NULL,
+ sohaegu_no INTEGER,
+ min_lat DOUBLE PRECISION NOT NULL,
+ min_lon DOUBLE PRECISION NOT NULL,
+ max_lat DOUBLE PRECISION NOT NULL,
+ max_lon DOUBLE PRECISION NOT NULL,
+ tile_geom GEOMETRY(POLYGON, 4326) NOT NULL,
+ center_point GEOMETRY(POINT, 4326) NOT NULL,
+ created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+ CONSTRAINT t_grid_tiles_pkey PRIMARY KEY (tile_id)
+);
+
+CREATE INDEX IF NOT EXISTS idx_grid_tiles_tile_geom ON signal.t_grid_tiles USING GIST (tile_geom);
+CREATE INDEX IF NOT EXISTS idx_grid_tiles_haegu ON signal.t_grid_tiles (haegu_no);
+CREATE INDEX IF NOT EXISTS idx_grid_tiles_level ON signal.t_grid_tiles (tile_level);
+CREATE INDEX IF NOT EXISTS idx_grid_tiles_haegu_sohaegu ON signal.t_grid_tiles (haegu_no, sohaegu_no);
+
+COMMENT ON TABLE signal.t_grid_tiles IS '그리드 타일 정의 (대해구/소해구)';
+
+-- t_grid_vessel_tracks: 해구별 선박 항적 (5분, 일별 파티션)
+CREATE TABLE IF NOT EXISTS signal.t_grid_vessel_tracks (
+ haegu_no INTEGER NOT NULL,
+ mmsi VARCHAR(20) NOT NULL,
+ time_bucket TIMESTAMP NOT NULL,
+ distance_nm NUMERIC(10,2),
+ avg_speed NUMERIC(6,2),
+ point_count INTEGER,
+ entry_time TIMESTAMP,
+ exit_time TIMESTAMP,
+ created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+ CONSTRAINT t_grid_vessel_tracks_pkey PRIMARY KEY (haegu_no, mmsi, time_bucket)
+) PARTITION BY RANGE (time_bucket);
+
+CREATE INDEX IF NOT EXISTS idx_grid_vessel_tracks_mmsi_time ON signal.t_grid_vessel_tracks (mmsi, time_bucket DESC);
+CREATE INDEX IF NOT EXISTS idx_grid_vessel_tracks_haegu_time ON signal.t_grid_vessel_tracks (haegu_no, time_bucket DESC);
+
+COMMENT ON TABLE signal.t_grid_vessel_tracks IS '해구별 선박 항적 (5분 단위)';
+
+-- t_grid_tracks_summary: 해구별 항적 요약 (5분, 일별 파티션)
+CREATE TABLE IF NOT EXISTS signal.t_grid_tracks_summary (
+ haegu_no INTEGER NOT NULL,
+ time_bucket TIMESTAMP NOT NULL,
+ total_vessels INTEGER,
+ total_distance_nm NUMERIC(12,2),
+ avg_speed NUMERIC(6,2),
+ vessel_list JSONB,
+ traffic_density NUMERIC(10,4),
+ created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+ CONSTRAINT t_grid_tracks_summary_pkey PRIMARY KEY (haegu_no, time_bucket)
+) PARTITION BY RANGE (time_bucket);
+
+COMMENT ON TABLE signal.t_grid_tracks_summary IS '해구별 5분 단위 항적 요약 통계';
+COMMENT ON COLUMN signal.t_grid_tracks_summary.vessel_list IS '선박별 상세 정보 [{mmsi, distance_nm, avg_speed}]';
+
+-- t_grid_tracks_summary_hourly: 해구별 시간별 요약 (월별 파티션)
+CREATE TABLE IF NOT EXISTS signal.t_grid_tracks_summary_hourly (
+ haegu_no INTEGER NOT NULL,
+ time_bucket TIMESTAMP NOT NULL,
+ total_vessels INTEGER,
+ total_distance_nm NUMERIC(12,2),
+ avg_speed NUMERIC(6,2),
+ vessel_list JSONB,
+ created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+ CONSTRAINT t_grid_tracks_summary_hourly_pkey PRIMARY KEY (haegu_no, time_bucket)
+) PARTITION BY RANGE (time_bucket);
+
+CREATE INDEX IF NOT EXISTS idx_grid_tracks_summary_hourly_time ON signal.t_grid_tracks_summary_hourly (time_bucket DESC, haegu_no);
+
+COMMENT ON TABLE signal.t_grid_tracks_summary_hourly IS '해구별 시간별 항적 요약 통계';
+
+-- t_grid_tracks_summary_daily: 해구별 일별 요약 (월별 파티션)
+CREATE TABLE IF NOT EXISTS signal.t_grid_tracks_summary_daily (
+ haegu_no INTEGER NOT NULL,
+ time_bucket DATE NOT NULL,
+ total_vessels INTEGER,
+ total_distance_nm NUMERIC(12,2),
+ avg_speed NUMERIC(6,2),
+ vessel_list JSONB,
+ created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+ CONSTRAINT t_grid_tracks_summary_daily_pkey PRIMARY KEY (haegu_no, time_bucket)
+) PARTITION BY RANGE (time_bucket);
+
+CREATE INDEX IF NOT EXISTS idx_grid_tracks_summary_daily_time ON signal.t_grid_tracks_summary_daily (time_bucket DESC, haegu_no);
+
+COMMENT ON TABLE signal.t_grid_tracks_summary_daily IS '해구별 일일 항적 요약 통계';
+
+-- ============================================================
+-- 4. 영역(Area) 관련 테이블 — 파티션
+-- ============================================================
+
+-- t_areas: 사용자 정의 영역 (일반 테이블)
+CREATE TABLE IF NOT EXISTS signal.t_areas (
+ area_id VARCHAR(50) NOT NULL,
+ area_name VARCHAR(100) NOT NULL,
+ area_type VARCHAR(20) NOT NULL,
+ area_geom GEOMETRY(MULTIPOLYGON, 4326) NOT NULL,
+ properties JSONB,
+ created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+ CONSTRAINT t_areas_pkey PRIMARY KEY (area_id)
+);
+
+CREATE INDEX IF NOT EXISTS idx_t_areas_area_geom ON signal.t_areas USING GIST (area_geom);
+
+COMMENT ON TABLE signal.t_areas IS '사용자 정의 영역 정보';
+
+-- t_area_vessel_tracks: 영역별 선박 항적 (5분, 일별 파티션)
+CREATE TABLE IF NOT EXISTS signal.t_area_vessel_tracks (
+ area_id VARCHAR(50) NOT NULL,
+ mmsi VARCHAR(20) NOT NULL,
+ time_bucket TIMESTAMP NOT NULL,
+ distance_nm NUMERIC(10,2),
+ avg_speed NUMERIC(6,2),
+ point_count INTEGER,
+ metrics JSONB,
+ created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+ CONSTRAINT t_area_vessel_tracks_pkey PRIMARY KEY (area_id, mmsi, time_bucket)
+) PARTITION BY RANGE (time_bucket);
+
+CREATE INDEX IF NOT EXISTS idx_area_vessel_tracks_mmsi_time ON signal.t_area_vessel_tracks (mmsi, time_bucket DESC);
+CREATE INDEX IF NOT EXISTS idx_area_vessel_tracks_area_time ON signal.t_area_vessel_tracks (area_id, time_bucket DESC);
+
+COMMENT ON TABLE signal.t_area_vessel_tracks IS '영역별 선박 항적 (5분 단위)';
+
+-- t_area_tracks_summary: 영역별 항적 요약 (5분, 일별 파티션)
+CREATE TABLE IF NOT EXISTS signal.t_area_tracks_summary (
+ area_id VARCHAR(50) NOT NULL,
+ time_bucket TIMESTAMP NOT NULL,
+ total_vessels INTEGER,
+ total_distance_nm NUMERIC(12,2),
+ avg_speed NUMERIC(6,2),
+ vessel_list JSONB,
+ metrics_summary JSONB,
+ created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+ CONSTRAINT t_area_tracks_summary_pkey PRIMARY KEY (area_id, time_bucket)
+) PARTITION BY RANGE (time_bucket);
+
+COMMENT ON TABLE signal.t_area_tracks_summary IS '영역별 5분 단위 항적 요약 통계';
+COMMENT ON COLUMN signal.t_area_tracks_summary.vessel_list IS '선박별 상세 정보 [{mmsi, distance_nm, avg_speed}]';
+
+-- t_area_tracks_summary_hourly: 영역별 시간별 요약 (월별 파티션)
+CREATE TABLE IF NOT EXISTS signal.t_area_tracks_summary_hourly (
+ area_id VARCHAR(50) NOT NULL,
+ time_bucket TIMESTAMP NOT NULL,
+ total_vessels INTEGER,
+ total_distance_nm NUMERIC(12,2),
+ avg_speed NUMERIC(6,2),
+ vessel_list JSONB,
+ metrics_summary JSONB,
+ created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+ CONSTRAINT t_area_tracks_summary_hourly_pkey PRIMARY KEY (area_id, time_bucket)
+) PARTITION BY RANGE (time_bucket);
+
+CREATE INDEX IF NOT EXISTS idx_area_tracks_summary_hourly_time ON signal.t_area_tracks_summary_hourly (time_bucket DESC, area_id);
+
+COMMENT ON TABLE signal.t_area_tracks_summary_hourly IS '영역별 시간별 항적 요약 통계';
+
+-- t_area_tracks_summary_daily: 영역별 일별 요약 (월별 파티션)
+CREATE TABLE IF NOT EXISTS signal.t_area_tracks_summary_daily (
+ area_id VARCHAR(50) NOT NULL,
+ time_bucket DATE NOT NULL,
+ total_vessels INTEGER,
+ total_distance_nm NUMERIC(12,2),
+ avg_speed NUMERIC(6,2),
+ vessel_list JSONB,
+ metrics_summary JSONB,
+ created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+ CONSTRAINT t_area_tracks_summary_daily_pkey PRIMARY KEY (area_id, time_bucket)
+) PARTITION BY RANGE (time_bucket);
+
+CREATE INDEX IF NOT EXISTS idx_area_tracks_summary_daily_time ON signal.t_area_tracks_summary_daily (time_bucket DESC, area_id);
+
+COMMENT ON TABLE signal.t_area_tracks_summary_daily IS '영역별 일일 항적 요약 통계';
+
+-- t_area_statistics: 영역별 선박 통계 (5분, 일별 파티션)
+CREATE TABLE IF NOT EXISTS signal.t_area_statistics (
+ area_id VARCHAR(50) NOT NULL,
+ time_bucket TIMESTAMP NOT NULL,
+ vessel_count INTEGER DEFAULT 0,
+ in_count INTEGER DEFAULT 0,
+ out_count INTEGER DEFAULT 0,
+ transit_vessels JSONB,
+ stationary_vessels JSONB,
+ avg_sog NUMERIC(25,1),
+ created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+ CONSTRAINT t_area_statistics_pkey PRIMARY KEY (area_id, time_bucket)
+) PARTITION BY RANGE (time_bucket);
+
+CREATE INDEX IF NOT EXISTS idx_area_stats_lookup ON signal.t_area_statistics (area_id, time_bucket DESC);
+
+COMMENT ON TABLE signal.t_area_statistics IS '영역별 5분 단위 선박 통계';
+
+-- ============================================================
+-- 5. 비정상 항적 테이블 — 파티션
+-- ============================================================
+
+-- t_abnormal_tracks: 비정상 항적 (월별 파티션)
+-- id는 GENERATED ALWAYS로 자동 생성
+CREATE TABLE IF NOT EXISTS signal.t_abnormal_tracks (
+ id BIGINT GENERATED ALWAYS AS IDENTITY,
+ mmsi VARCHAR(20) NOT NULL,
+ time_bucket TIMESTAMP NOT NULL,
+ track_geom GEOMETRY(LINESTRINGM, 4326),
+ abnormal_type VARCHAR(50) NOT NULL,
+ abnormal_reason JSONB NOT NULL,
+ distance_nm NUMERIC(10,2),
+ avg_speed NUMERIC(6,2),
+ max_speed NUMERIC(6,2),
+ point_count INTEGER,
+ source_table VARCHAR(50) NOT NULL,
+ detected_at TIMESTAMP DEFAULT NOW(),
+ CONSTRAINT t_abnormal_tracks_pkey PRIMARY KEY (id, time_bucket)
+) PARTITION BY RANGE (time_bucket);
+
+-- ON CONFLICT (mmsi, time_bucket, source_table) 지원
+CREATE UNIQUE INDEX IF NOT EXISTS abnormal_tracks_uk ON signal.t_abnormal_tracks (mmsi, time_bucket, source_table);
+CREATE INDEX IF NOT EXISTS idx_abnormal_tracks_mmsi ON signal.t_abnormal_tracks (mmsi);
+CREATE INDEX IF NOT EXISTS idx_abnormal_tracks_time ON signal.t_abnormal_tracks (time_bucket);
+CREATE INDEX IF NOT EXISTS idx_abnormal_tracks_type ON signal.t_abnormal_tracks (abnormal_type);
+CREATE INDEX IF NOT EXISTS idx_abnormal_tracks_geom ON signal.t_abnormal_tracks USING GIST (track_geom);
+
+COMMENT ON TABLE signal.t_abnormal_tracks IS '비정상 선박 항적';
+COMMENT ON COLUMN signal.t_abnormal_tracks.mmsi IS 'MMSI (VARCHAR)';
+COMMENT ON COLUMN signal.t_abnormal_tracks.abnormal_type IS '비정상 유형 (excessive_speed, teleport, impossible_distance, excessive_avg_speed, gap_jump)';
+COMMENT ON COLUMN signal.t_abnormal_tracks.source_table IS '검출 원본 테이블 (t_vessel_tracks_5min/hourly/daily)';
+
+-- t_abnormal_track_stats: 비정상 항적 일별 통계 (일반 테이블)
+CREATE TABLE IF NOT EXISTS signal.t_abnormal_track_stats (
+ stat_date DATE NOT NULL,
+ abnormal_type VARCHAR(50) NOT NULL,
+ vessel_count INTEGER NOT NULL,
+ track_count INTEGER NOT NULL,
+ total_points INTEGER,
+ avg_deviation NUMERIC(10,2),
+ max_deviation NUMERIC(10,2),
+ created_at TIMESTAMP DEFAULT NOW(),
+ updated_at TIMESTAMP DEFAULT NOW(),
+ CONSTRAINT t_abnormal_track_stats_pkey PRIMARY KEY (stat_date, abnormal_type)
+);
+
+CREATE INDEX IF NOT EXISTS idx_abnormal_track_stats_date ON signal.t_abnormal_track_stats (stat_date);
+
+COMMENT ON TABLE signal.t_abnormal_track_stats IS '비정상 항적 일별 통계';
+
+-- ============================================================
+-- 6. 타일 요약 테이블 — 파티션
+-- ============================================================
+
+-- t_tile_summary: 타일별 선박 요약 (5분, 일별 파티션)
+-- ON CONFLICT (tile_id, time_bucket) 지원을 위해 UNIQUE 추가
+CREATE TABLE IF NOT EXISTS signal.t_tile_summary (
+ tile_id VARCHAR(50) NOT NULL,
+ tile_level INTEGER NOT NULL,
+ time_bucket TIMESTAMP NOT NULL,
+ vessel_count INTEGER DEFAULT 0,
+ unique_vessels JSONB,
+ total_points BIGINT DEFAULT 0,
+ avg_sog NUMERIC(25,1),
+ max_sog NUMERIC(25,1),
+ vessel_density NUMERIC(10,6),
+ created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+ haegu_no INTEGER,
+ sohaegu_no INTEGER,
+ CONSTRAINT t_tile_summary_pkey PRIMARY KEY (tile_id, time_bucket, tile_level)
+) PARTITION BY RANGE (time_bucket);
+
+-- ConcurrentUpdateManager에서 ON CONFLICT (tile_id, time_bucket) 사용
+CREATE UNIQUE INDEX IF NOT EXISTS idx_tile_summary_tile_time_uk ON signal.t_tile_summary (tile_id, time_bucket);
+CREATE INDEX IF NOT EXISTS idx_tile_summary_time ON signal.t_tile_summary (time_bucket DESC);
+CREATE INDEX IF NOT EXISTS idx_tile_summary_vessel_count ON signal.t_tile_summary (vessel_count DESC);
+CREATE INDEX IF NOT EXISTS idx_tile_summary_tile_level ON signal.t_tile_summary (tile_level);
+
+COMMENT ON TABLE signal.t_tile_summary IS '타일별 5분 단위 선박 요약 통계';
+COMMENT ON COLUMN signal.t_tile_summary.unique_vessels IS '고유 선박 목록 [{mmsi}]';
+
+-- ============================================================
+-- 7. 배치 성능 메트릭 (일반 테이블)
+-- ============================================================
+
+CREATE TABLE IF NOT EXISTS signal.t_batch_performance_metrics (
+ id SERIAL PRIMARY KEY,
+ job_name VARCHAR(100) NOT NULL,
+ execution_id BIGINT NOT NULL,
+ start_time TIMESTAMP NOT NULL,
+ end_time TIMESTAMP,
+ duration_seconds BIGINT,
+ total_read BIGINT,
+ total_write BIGINT,
+ throughput_per_sec NUMERIC(10,2),
+ status VARCHAR(20),
+ error_message TEXT,
+ created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
+);
+
+CREATE INDEX IF NOT EXISTS idx_batch_metrics_job ON signal.t_batch_performance_metrics (job_name, start_time DESC);
+CREATE INDEX IF NOT EXISTS idx_batch_metrics_status ON signal.t_batch_performance_metrics (status) WHERE status != 'COMPLETED';
+
+COMMENT ON TABLE signal.t_batch_performance_metrics IS '배치 작업 성능 메트릭';
+
+-- ============================================================
+-- 8. 초기 파티션 생성 (수동 실행용)
+-- PartitionManager가 런타임에 자동 생성하지만,
+-- 최초 배포 시 수동으로 미리 생성할 수 있음.
+-- ============================================================
+
+-- 일별 파티션 생성 함수
+CREATE OR REPLACE FUNCTION signal.create_daily_partition(
+ parent_table TEXT,
+ target_date DATE
+) RETURNS VOID AS $$
+DECLARE
+ partition_name TEXT;
+ start_date DATE;
+ end_date DATE;
+BEGIN
+ partition_name := parent_table || '_' || to_char(target_date, 'YYMMDD');
+ start_date := target_date;
+ end_date := target_date + INTERVAL '1 day';
+
+ EXECUTE format(
+ 'CREATE TABLE IF NOT EXISTS signal.%I PARTITION OF signal.%I FOR VALUES FROM (%L) TO (%L)',
+ partition_name, parent_table, start_date, end_date
+ );
+END;
+$$ LANGUAGE plpgsql;
+
+-- 월별 파티션 생성 함수
+CREATE OR REPLACE FUNCTION signal.create_monthly_partition(
+ parent_table TEXT,
+ target_date DATE
+) RETURNS VOID AS $$
+DECLARE
+ partition_name TEXT;
+ start_date DATE;
+ end_date DATE;
+BEGIN
+ partition_name := parent_table || '_' || to_char(target_date, 'YYYY_MM');
+ start_date := date_trunc('month', target_date);
+ end_date := date_trunc('month', target_date) + INTERVAL '1 month';
+
+ EXECUTE format(
+ 'CREATE TABLE IF NOT EXISTS signal.%I PARTITION OF signal.%I FOR VALUES FROM (%L) TO (%L)',
+ partition_name, parent_table, start_date, end_date
+ );
+END;
+$$ LANGUAGE plpgsql;
+
+-- 현재 월 + 다음 달 파티션 일괄 생성
+DO $$
+DECLARE
+ today DATE := CURRENT_DATE;
+ day_offset INTEGER;
+ daily_tables TEXT[] := ARRAY[
+ 't_vessel_tracks_5min',
+ 't_grid_vessel_tracks',
+ 't_grid_tracks_summary',
+ 't_area_vessel_tracks',
+ 't_area_tracks_summary',
+ 't_tile_summary',
+ 't_area_statistics'
+ ];
+ monthly_tables TEXT[] := ARRAY[
+ 't_vessel_tracks_hourly',
+ 't_vessel_tracks_daily',
+ 't_grid_tracks_summary_hourly',
+ 't_grid_tracks_summary_daily',
+ 't_area_tracks_summary_hourly',
+ 't_area_tracks_summary_daily',
+ 't_abnormal_tracks'
+ ];
+ tbl TEXT;
+BEGIN
+ -- 일별 파티션: 오늘부터 7일간
+ FOREACH tbl IN ARRAY daily_tables LOOP
+ FOR day_offset IN 0..6 LOOP
+ PERFORM signal.create_daily_partition(tbl, today + day_offset);
+ END LOOP;
+ END LOOP;
+
+ -- 월별 파티션: 이번 달 + 다음 달
+ FOREACH tbl IN ARRAY monthly_tables LOOP
+ PERFORM signal.create_monthly_partition(tbl, today);
+ PERFORM signal.create_monthly_partition(tbl, (today + INTERVAL '1 month')::DATE);
+ END LOOP;
+
+ RAISE NOTICE 'Initial partitions created successfully';
+END;
+$$;
+
+-- ============================================================
+-- 9. ANALYZE (통계 수집)
+-- ============================================================
+ANALYZE signal.t_ais_position;
+ANALYZE signal.t_haegu_definitions;
+ANALYZE signal.t_grid_tiles;
+ANALYZE signal.t_areas;
+ANALYZE signal.t_abnormal_track_stats;
+ANALYZE signal.t_batch_performance_metrics;
diff --git a/sql/convert_to_unix_timestamp.sql b/sql/convert_to_unix_timestamp.sql
new file mode 100644
index 0000000..897f0ec
--- /dev/null
+++ b/sql/convert_to_unix_timestamp.sql
@@ -0,0 +1,68 @@
+-- Unix timestamp 변환 함수
+CREATE OR REPLACE FUNCTION signal.convert_to_unix_timestamp(
+ geom geometry,
+ base_time timestamp without time zone
+) RETURNS geometry AS $$
+DECLARE
+ wkt_text text;
+ points text[];
+ point_text text;
+ coords text[];
+ result_wkt text;
+ unix_base bigint;
+ relative_seconds bigint;
+ unix_time bigint;
+ i integer;
+BEGIN
+ IF geom IS NULL THEN
+ RETURN NULL;
+ END IF;
+
+ -- Unix timestamp 기준값
+ unix_base := EXTRACT(EPOCH FROM base_time AT TIME ZONE 'Asia/Seoul')::bigint;
+
+ -- WKT 텍스트 추출
+ wkt_text := ST_AsText(geom);
+
+ -- LINESTRING M(...) 에서 좌표 부분만 추출
+ wkt_text := substring(wkt_text from 'LINESTRING M\((.*)\)');
+
+ -- 각 포인트를 배열로 분리
+ points := string_to_array(wkt_text, ', ');
+
+ -- 결과 WKT 시작
+ result_wkt := 'LINESTRING M(';
+
+ -- 각 포인트 처리
+ FOR i IN 1..array_length(points, 1) LOOP
+ -- 좌표를 공백으로 분리 (lon lat m)
+ coords := string_to_array(points[i], ' ');
+
+ -- M값(상대시간 초) 추출 및 Unix timestamp로 변환
+ relative_seconds := coords[3]::bigint;
+ unix_time := unix_base + relative_seconds;
+
+ -- 결과에 추가
+ IF i > 1 THEN
+ result_wkt := result_wkt || ', ';
+ END IF;
+ result_wkt := result_wkt || coords[1] || ' ' || coords[2] || ' ' || unix_time;
+ END LOOP;
+
+ result_wkt := result_wkt || ')';
+
+ -- geometry 타입으로 변환하여 반환
+ RETURN ST_GeomFromText(result_wkt, 4326);
+END;
+$$ LANGUAGE plpgsql IMMUTABLE PARALLEL SAFE;
+
+-- 함수 테스트
+SELECT
+ sig_src_cd,
+ target_id,
+ time_bucket,
+ ST_AsText(track_geom) as original,
+ ST_AsText(signal.convert_to_unix_timestamp(track_geom, time_bucket)) as converted
+FROM signal.t_vessel_tracks_5min
+WHERE track_geom IS NOT NULL
+LIMIT 1;
diff --git a/sql/simple_update_v2.sql b/sql/simple_update_v2.sql
new file mode 100644
index 0000000..db9512d
--- /dev/null
+++ b/sql/simple_update_v2.sql
@@ -0,0 +1,42 @@
+-- hourly 테이블 직접 UPDATE (함수 없이)
+UPDATE signal.t_vessel_tracks_hourly AS h
+SET track_geom_v2 = ST_GeomFromText(
+ REPLACE(
+ REPLACE(ST_AsText(track_geom), 'LINESTRING M(',
+ 'LINESTRING M(' ||
+ CASE
+ WHEN ST_M(ST_PointN(track_geom, 1)) = 0
+ THEN EXTRACT(EPOCH FROM time_bucket + INTERVAL '9 hours')::text
+ ELSE (EXTRACT(EPOCH FROM time_bucket + INTERVAL '9 hours')::bigint + ST_M(ST_PointN(track_geom, 1)))::text
+ END || ' '
+ ),
+ ')',
+ EXTRACT(EPOCH FROM time_bucket + INTERVAL '9 hours')::text || ')'
+ ),
+ 4326
+)
+WHERE time_bucket = '2025-08-07 14:00:00'
+ AND track_geom IS NOT NULL
+ AND track_geom_v2 IS NULL;
+
+-- daily 테이블 직접 UPDATE
+UPDATE signal.t_vessel_tracks_daily AS d
+SET track_geom_v2 = track_geom -- 임시로 복사 (정확한 변환은 나중에)
+WHERE time_bucket = DATE_TRUNC('day', NOW())
+ AND track_geom IS NOT NULL
+ AND track_geom_v2 IS NULL;
+
+-- 결과 확인
+SELECT
+ 'hourly' as table_type,
+ COUNT(*) as total,
+ COUNT(track_geom_v2) as v2_filled
+FROM signal.t_vessel_tracks_hourly
+WHERE time_bucket = '2025-08-07 14:00:00'
+UNION ALL
+SELECT
+ 'daily' as table_type,
+ COUNT(*) as total,
+ COUNT(track_geom_v2) as v2_filled
+FROM signal.t_vessel_tracks_daily
+WHERE time_bucket = DATE_TRUNC('day', NOW());
diff --git a/sql/update_missing_v2.sql b/sql/update_missing_v2.sql
new file mode 100644
index 0000000..061c231
--- /dev/null
+++ b/sql/update_missing_v2.sql
@@ -0,0 +1,40 @@
+-- Unix timestamp 변환을 위한 간단한 UPDATE 쿼리
+-- 5분 집계 테이블
+UPDATE signal.t_vessel_tracks_5min
+SET track_geom_v2 = signal.convert_to_unix_timestamp(track_geom, time_bucket)
+WHERE time_bucket >= NOW() - INTERVAL '2 hours'
+ AND track_geom IS NOT NULL
+ AND track_geom_v2 IS NULL;
+
+-- 1시간 집계 테이블 (오후 2시 데이터)
+UPDATE signal.t_vessel_tracks_hourly
+SET track_geom_v2 = signal.convert_to_unix_timestamp(track_geom, time_bucket)
+WHERE time_bucket = '2025-08-07 14:00:00'
+ AND track_geom IS NOT NULL
+ AND track_geom_v2 IS NULL;
+
+-- 일별 집계 테이블 (오늘 데이터)
+UPDATE signal.t_vessel_tracks_daily
+SET track_geom_v2 = signal.convert_to_unix_timestamp(track_geom, time_bucket)
+WHERE time_bucket = DATE_TRUNC('day', NOW())
+ AND track_geom IS NOT NULL
+ AND track_geom_v2 IS NULL;
+
+-- 결과 확인
+SELECT
+ 'hourly' as table_type,
+ COUNT(*) as total_records,
+ COUNT(track_geom) as v1_count,
+ COUNT(track_geom_v2) as v2_count
+FROM signal.t_vessel_tracks_hourly
+WHERE time_bucket = '2025-08-07 14:00:00'
+
+UNION ALL
+
+SELECT
+ 'daily' as table_type,
+ COUNT(*) as total_records,
+ COUNT(track_geom) as v1_count,
+ COUNT(track_geom_v2) as v2_count
+FROM signal.t_vessel_tracks_daily
+WHERE time_bucket = DATE_TRUNC('day', NOW());
diff --git a/src/main/java/gc/mda/signal_batch/BatchCommandLineRunner.java b/src/main/java/gc/mda/signal_batch/BatchCommandLineRunner.java
index ad5383d..d244d04 100644
--- a/src/main/java/gc/mda/signal_batch/BatchCommandLineRunner.java
+++ b/src/main/java/gc/mda/signal_batch/BatchCommandLineRunner.java
@@ -28,8 +28,8 @@ public class BatchCommandLineRunner implements CommandLineRunner {
private JobLauncher jobLauncher;
@Autowired
- @Qualifier("vesselAggregationJob")
- private Job vesselAggregationJob;
+ @Qualifier("vesselTrackAggregationJob")
+ private Job vesselTrackAggregationJob;
private final BatchUtils batchUtils;
@@ -48,7 +48,7 @@ public class BatchCommandLineRunner implements CommandLineRunner {
log.info("Running batch job from {} to {}", startTime, endTime);
JobParameters params = batchUtils.createJobParameters(startTime, endTime);
- JobExecution execution = jobLauncher.run(vesselAggregationJob, params);
+ JobExecution execution = jobLauncher.run(vesselTrackAggregationJob, params);
log.info("Batch job completed: {}", execution.getStatus());
} else {
diff --git a/src/main/java/gc/mda/signal_batch/batch/job/AisPositionSyncStepConfig.java b/src/main/java/gc/mda/signal_batch/batch/job/AisPositionSyncStepConfig.java
new file mode 100644
index 0000000..c029f38
--- /dev/null
+++ b/src/main/java/gc/mda/signal_batch/batch/job/AisPositionSyncStepConfig.java
@@ -0,0 +1,144 @@
+package gc.mda.signal_batch.batch.job;
+
+import gc.mda.signal_batch.batch.reader.AisTargetCacheManager;
+import gc.mda.signal_batch.domain.vessel.model.AisTargetEntity;
+import lombok.extern.slf4j.Slf4j;
+import org.springframework.batch.core.Step;
+import org.springframework.batch.core.repository.JobRepository;
+import org.springframework.batch.core.step.builder.StepBuilder;
+import org.springframework.beans.factory.annotation.Qualifier;
+import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
+import org.springframework.context.annotation.Bean;
+import org.springframework.context.annotation.Configuration;
+import org.springframework.context.annotation.Profile;
+import org.springframework.jdbc.core.JdbcTemplate;
+import org.springframework.transaction.PlatformTransactionManager;
+
+import javax.sql.DataSource;
+import java.sql.Timestamp;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+
+/**
+ * 5분 집계 Job 편승: 캐시 스냅샷 → t_ais_position UPSERT
+ *
+ * 용도:
+ * - 서비스 재시작 시 캐시 복원 (ChnPrmShipCacheWarmer 등)
+ * - 캐시 접근 불가 타 프로세스의 최신 위치 조회
+ * - API 연결 불가 환경 대응
+ */
+@Slf4j
+@Configuration
+@Profile("!query")
+@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
+public class AisPositionSyncStepConfig {
+
+ private final JobRepository jobRepository;
+ private final DataSource queryDataSource;
+ private final PlatformTransactionManager transactionManager;
+ private final AisTargetCacheManager cacheManager;
+
+ public AisPositionSyncStepConfig(
+ JobRepository jobRepository,
+ @Qualifier("queryDataSource") DataSource queryDataSource,
+ @Qualifier("queryTransactionManager") PlatformTransactionManager transactionManager,
+ AisTargetCacheManager cacheManager) {
+ this.jobRepository = jobRepository;
+ this.queryDataSource = queryDataSource;
+ this.transactionManager = transactionManager;
+ this.cacheManager = cacheManager;
+ }
+
+ @Bean
+ public Step aisPositionSyncStep() {
+ return new StepBuilder("aisPositionSyncStep", jobRepository)
+ .tasklet((contribution, chunkContext) -> {
+ Collection entities = cacheManager.getAllValues();
+
+ if (entities.isEmpty()) {
+ log.debug("캐시에 데이터 없음 — t_ais_position 동기화 스킵");
+ return org.springframework.batch.repeat.RepeatStatus.FINISHED;
+ }
+
+ JdbcTemplate jdbcTemplate = new JdbcTemplate(queryDataSource);
+
+ String sql = """
+ INSERT INTO signal.t_ais_position (
+ mmsi, imo, name, callsign, vessel_type, extra_info,
+ lat, lon, geom,
+ heading, sog, cog, rot,
+ length, width, draught,
+ destination, eta, status,
+ message_timestamp, signal_kind_code, class_type,
+ last_update
+ ) VALUES (
+ ?, ?, ?, ?, ?, ?,
+ ?, ?, public.ST_SetSRID(public.ST_MakePoint(?, ?), 4326),
+ ?, ?, ?, ?,
+ ?, ?, ?,
+ ?, ?, ?,
+ ?, ?, ?,
+ NOW()
+ )
+ ON CONFLICT (mmsi) DO UPDATE SET
+ imo = EXCLUDED.imo,
+ name = EXCLUDED.name,
+ callsign = EXCLUDED.callsign,
+ vessel_type = EXCLUDED.vessel_type,
+ extra_info = EXCLUDED.extra_info,
+ lat = EXCLUDED.lat,
+ lon = EXCLUDED.lon,
+ geom = EXCLUDED.geom,
+ heading = EXCLUDED.heading,
+ sog = EXCLUDED.sog,
+ cog = EXCLUDED.cog,
+ rot = EXCLUDED.rot,
+ length = EXCLUDED.length,
+ width = EXCLUDED.width,
+ draught = EXCLUDED.draught,
+ destination = EXCLUDED.destination,
+ eta = EXCLUDED.eta,
+ status = EXCLUDED.status,
+ message_timestamp = EXCLUDED.message_timestamp,
+ signal_kind_code = EXCLUDED.signal_kind_code,
+ class_type = EXCLUDED.class_type,
+ last_update = NOW()
+ """;
+
+ List