refactor: migrate to S&P Global AIS API and clean up legacy code

- Switch from CollectDB multi-signal collection to single-source S&P Global AIS API collection
- Replace the sig_src_cd + target_id dual identifier with a single mmsi (VARCHAR) identifier
- Migrate from the t_vessel_latest_position table to t_ais_position
- Delete ~30 legacy batch/utility classes (VesselAggregationJobConfig, ShipKindCodeConverter, etc.)
- Two-tier cache structure based on AisTargetCacheManager (latest position + track buffer)
- Add CacheBasedVesselTrackDataReader and CacheBasedTrackJobListener
- VesselStaticStepConfig: static-info CDC change detection, piggybacked on the hourly job
- SignalKindCode enum: automatic ship-type classification from vesselType/extraInfo
- Migrate all WebSocket/STOMP code to mmsi (~40 call sites in StompTrackStreamingService)
- Migrate monitoring/performance-optimization code to mmsi
- Consolidate DataSource configuration (single snpdb database)
- Fix AreaBoundaryCache Polygon→Geometry cast (MULTIPOLYGON support)
- Apply ConcurrentHashMap (fixes a concurrency bug in VesselTrackStepConfig)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
This commit is contained in:
htlee 2026-02-19 09:59:49 +09:00
Parent a98fdcbdc9
Commit 2e9361ee58
175 changed files with 12,886 additions and 7,638 deletions

@@ -0,0 +1,70 @@
# /analyze-batch - Batch Job Analysis
Analyzes and diagnoses Spring Batch job code.
## Analysis Targets
### 1. Job Configuration Analysis
Check the following files:
- `src/main/java/**/config/` - batch configuration
- `src/main/java/**/job/` - Job definitions
- Job, Step, Reader, Processor, Writer composition
### 2. Scheduling Configuration
- Usage of the @Scheduled annotation
- Quartz or other scheduler configuration
- Cron expression analysis
### 3. Data Processing Patterns
- ItemReader implementations (DB, file, API, etc.)
- ItemProcessor logic
- ItemWriter implementations (bulk insert, file output, etc.)
- Chunk size settings
### 4. Error Handling
- Skip policy
- Retry policy
- Listener implementations (JobExecutionListener, StepExecutionListener)
### 5. Performance Analysis
- Chunk size appropriateness
- Parallel processing settings (partitioning, multi-threading)
- Connection pool settings
## Output Format
```markdown
## Batch Job Analysis Results
### Job List
| Job name | Steps | Schedule | Description |
|----------|-------|----------|-------------|
| xxxJob | 3 | 0 0 * * * | ... |
### Data Flow
```
[Reader] → [Processor] → [Writer]
↓ ↓ ↓
[Data source] [Transform logic] [Destination]
```
### Error Handling Settings
- Skip policy: [settings]
- Retry policy: [settings]
### Performance Settings
- Chunk size: [value]
- Parallel processing: [enabled or not]
### Improvement Suggestions
1. [suggestion 1]
2. [suggestion 2]
```
## Arguments
`$ARGUMENTS`: a specific Job name or keyword
Examples:
- `/analyze-batch` - analyze everything
- `/analyze-batch signal` - analyze only signal-related batches

@@ -0,0 +1,64 @@
# /build-check - Build and Test Check
Checks the build status and test results of the Maven project.
## Tasks
### 1. Compile Check
```bash
mvn clean compile -DskipTests
```
- Check for compile errors
- Check for dependency problems
### 2. Run Tests (optional)
```bash
mvn test
```
- Unit test results
- Analysis of failed tests
### 3. Package Build (optional)
```bash
mvn package -DskipTests
```
- Verify JAR/WAR creation
- Verify build artifacts
## Output Format
```markdown
## Build Check Results
### Compile
- Status: [success/failure]
- Errors (if any): [error details]
### Tests
- Status: [success/failure/skipped]
- Passed: [N]
- Failed: [N]
- Failed tests (if any):
- [test name]: [failure cause]
### Package
- Status: [success/failure/skipped]
- Artifact: [file path]
### Recommended Actions
1. [action 1]
2. [action 2]
```
## Arguments
`$ARGUMENTS`: options
- `compile` - compile only
- `test` - compile + tests
- `package` - full package build
- (none) - compile only (default)
Examples:
- `/build-check` - compile check
- `/build-check test` - include tests
- `/build-check package` - full build

@@ -0,0 +1,66 @@
# /clarify - Requirements Clarification
Generates questions to clarify requirements for a new feature or bug-fix request.
## When to Use
- When the user's request is ambiguous
- When multiple implementation approaches are possible
- When business requirements need confirmation
## Question Categories
### 1. Feature Scope
- What is the exact scope of this feature?
- Which services/components will use it?
- How does it relate to existing features?
### 2. API Design
- Is a REST API endpoint design needed?
- What are the request/response formats?
- Does it follow existing API patterns?
### 3. Data
- What data is needed?
- What is the data source? (DB, external API, file)
- Is data persistence required?
### 4. Error Handling
- What error cases are expected?
- How should errors be handled? (retry, logging, alerting)
### 5. Performance
- What is the expected data volume?
- Is batch processing needed?
- Are there performance requirements?
### 6. Deployment/Environment
- Should it run only in a specific environment (dev/qa/prod)?
- Are per-profile settings needed?
## Output Format
```markdown
## Requirements Clarification Questions
### Feature Scope
1. [question 1]
2. [question 2]
### API Design
1. [question 1]
### Data
1. [question 1]
...
---
I will draft an implementation plan based on your answers.
```
## Arguments
`$ARGUMENTS`: a summary of the user's request
Example: `/clarify batch saving of vessel positions`

@@ -0,0 +1,72 @@
# /perf-check - Performance Check Command
Checks performance-related issues in the Spring Boot batch application.
## Analysis Areas
### 1. Database Performance
- JPA/MyBatis query analysis
- N+1 problem detection
- Index usage
- Whether batch insert/update is applied
### 2. Memory Management
- Memory usage during bulk data processing
- Stream usage
- Whether paging is applied
### 3. Batch Processing
- Chunk size appropriateness
- Parallel processing settings
- Reader/Writer optimization
### 4. Connection Management
- Connection pool settings (HikariCP)
- Transaction scope appropriateness
- Possible connection leaks
### 5. External Communication
- HTTP client settings (timeouts, connection pool)
- Retry policy
- Circuit breaker pattern usage
## Output Format
```markdown
## Performance Check Results
### Database
- [ ] N+1 problem: [found or not]
- [ ] Batch processing: [status]
- [ ] Index usage: [status]
### Memory
- [ ] Bulk data processing: [status]
- [ ] Stream usage: [applied or not]
- [ ] Paging: [applied or not]
### Batch Processing
- [ ] Chunk size: [value and appropriateness]
- [ ] Parallel processing: [status]
### Connection Management
- [ ] Pool settings: [status]
- [ ] Transaction scope: [appropriateness]
### External Communication
- [ ] Timeout settings: [status]
- [ ] Retry policy: [applied or not]
### Priority Improvements
1. [item 1] - expected impact: [description]
2. [item 2] - expected impact: [description]
```
## Arguments
`$ARGUMENTS`: check only a specific area (db, memory, batch, connection, external)
Examples:
- `/perf-check` - full check
- `/perf-check db` - database only
- `/perf-check batch` - batch processing only

.claude/commands/wrap.md Normal file
@@ -0,0 +1,65 @@
# /wrap - Session Wrap-up Command
Performs the following tasks in parallel at the end of a session.
## Tasks to Run (parallel agents)
### 1. Documentation Update Check
Check whether the following files need updates:
- `CLAUDE.md`: whether new patterns or conventions were discovered
- Whether important technical decisions were made this session
### 2. Repeated Pattern Analysis
Analyze whether any work was performed repeatedly this session:
- Did you write similar code patterns multiple times?
- Did you run the same command repeatedly?
- Is there a workflow that could be automated?
For any pattern found, propose automation via `/commands`.
### 3. Extract Lessons Learned
Summarize what was learned this session:
- Newly discovered characteristics of the codebase
- Problems solved and how they were solved
- Points to watch out for going forward
### 4. Unfinished Work
If any work was left incomplete, organize it:
- Items remaining on the TODO list
- Work to continue next session
- Blockers or dependency issues
### 5. Code Quality Check
For the files modified this session:
- No compile errors (`mvn compile`)
- Tests pass (`mvn test`)
## Output Format
```markdown
## Session Summary
### Completed Work
- [task 1]
- [task 2]
### Documentation Updates Needed
- [ ] CLAUDE.md: [update details]
### Patterns Found (automation proposals)
- [pattern]: [automation approach]
### Lessons Learned
- [item 1]
- [item 2]
### Unfinished Work
- [ ] [task 1]
- [ ] [task 2]
### Code Quality
- Compile: [result]
- Test: [result]
```
When running this command, use the Task tool to run multiple agents **in parallel**.

@@ -44,7 +44,21 @@
- `@Builder` allowed
- Do not use `@Data` (add only the annotations that are explicitly needed)
- Do not use `@AllArgsConstructor` on its own (use it together with `@Builder`)
- Use the `@Slf4j` logger
## Logging
- Use the `@Slf4j` (Lombok) logger
- Do not use printf-style formats inside SLF4J `{}` placeholders (`{:.1f}`, `{:d}`, `{%s}`, etc.)
- When numeric formatting is needed, convert with `String.format()` first and pass the result
```java
// Wrong
log.info("Processing rate: {:.1f}%", rate);
// Correct
log.info("Processing rate: {}%", String.format("%.1f", rate));
```
- When logging an exception, pass the exception object as the last argument (no placeholder needed)
```java
log.error("Processing failed: {}", id, exception);
```
## Exception Handling
- Define custom Exception classes for business exceptions

.sdkmanrc Normal file
@@ -0,0 +1,3 @@
# Enable auto-env through SDKMAN config
# Run 'sdk env' in this directory to switch versions
java=17.0.18-amzn

@@ -0,0 +1,314 @@
# Daily Cache Performance Benchmark Report
## Vessel Track Replay Service - Quantitative Cache vs DB Comparison
| Item | Details |
|------|---------|
| Measurement date | 2026-02-07 |
| Target system | Signal Batch - ChunkedTrackStreamingService (WebSocket streaming) |
| Environment | prod profile, Query DB connection pool of 180 |
| Cache configuration | DailyTrackCacheManager - in-memory cache for D-1 to D-7, STRtree spatial index |
| Measurement method | QueryBenchmark inner class → JSON records in `cache-benchmark.log` |
| Samples | 12 (CACHE 3, DB 2, HYBRID 5, CACHE+Today 2) |
---
## 1. Measurement Path Classification
A query is handled by one of four paths, depending on its time range.
| Path | Description | Data source |
|------|-------------|-------------|
| **CACHE** | All requested dates are in the in-memory cache | Memory |
| **DB** | Cache miss - direct query of the Daily tables | DB |
| **HYBRID** | Cache-hit dates plus DB queries for dates outside the cache range | Memory + DB |
| **CACHE+Today** | Cache hits plus today's data (Hourly/5min tables) | Memory + DB |
### Structure of the Today Segment
Today's (D-0) data is not cached; it is read from two tables, split by elapsed time.
```
Today 00:00 ~ 12:00 12:00 ~ 12:35 now (12:40)
├──── Hourly table queries ─────┤── 5min queries ──┤
(~12 ranges, 1-hour units) (~7 ranges, 5-min units)
```
- **Hourly**: from midnight to about one hour ago → 1-hour ranges (~12)
- **5min**: the most recent hour or so → 5-minute ranges (~7)
- Each range incurs 1 DB connection + 1 Viewport Pass1 → today-segment connections = range count × 2
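Under the split rule above, the range counts and today-segment connections can be sketched in plain Java. This is a minimal sketch assuming the 12:35 snapshot time from the diagram; the fallback queries that raise the report's observed range count to 21 are not modeled here.

```java
public class TodaySegmentSketch {
    // Full hours since midnight are served from the Hourly table;
    // the partial current hour is served from the 5min table.
    static int todayConnections(int minutesSinceMidnight) {
        int hourlyRanges = minutesSinceMidnight / 60;                           // ~12 at 12:35
        int fiveMinRanges = (int) Math.ceil((minutesSinceMidnight % 60) / 5.0); // ~7 at 12:35
        // 1 DB query + 1 Viewport Pass1 per range.
        return (hourlyRanges + fiveMinRanges) * 2;
    }

    public static void main(String[] args) {
        int elapsed = 12 * 60 + 35; // 12:35
        System.out.println(todayConnections(elapsed)); // 38, before fallback queries
    }
}
```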
---
## 2. Full Measurement Data
### 2.1 Summary Table
| # | Path | Zoom | Days | Cache/DB | Vessels | Tracks | Response (ms) | DB conns | DB query (ms) |
|---|------|------|------|----------|---------|--------|---------------|----------|---------------|
| 1 | CACHE | 10 | 3 | 3/0 | 443 | 986 | **575** | 3 | 0 |
| 2 | DB | 10 | 2 | 0/2 | 352 | 587 | **7,221** | 8 | 3,475 |
| 3 | DB | 10 | 2 | 0/2 | 12,253 | 18,502 | **8,195** | 19 | 1,443 |
| 4 | CACHE | 10 | 2 | 2/0 | 10,690 | 16,942 | **1,439** | 2 | 0 |
| 5 | CACHE | 10 | 2 | 2/0 | 10,690 | 16,942 | **1,374** | 2 | 0 |
| 6 | HYBRID | 8 | 5 | 3/2 | 9,958 | 29,362 | **8,900** | 16 | 3,301 |
| 7 | HYBRID | 9 | 5 | 3/2 | 547 | 1,927 | **1,373** | 11 | 550 |
| 8 | HYBRID | 8 | 5 | 3/2 | 4,589 | 12,422 | **2,910** | 12 | 715 |
| 9 | HYBRID | 8 | 5 | 3/2 | 5,760 | 23,283 | **3,651** | 15 | 1,048 |
| 10 | CACHE+Today | 10 | 3+today | 3/0 | 105 | 301 | **6,091** | 56 | 0 |
| 11 | HYBRID | 8 | 5 | 3/2 | 52,151 | 162,849 | **105,212** | 45 | 93,319 |
| 12 | CACHE+Today | 12 | 3+today | 3/0 | 6,990 | 17,024 | **9,744** | 56 | 0 |
### 2.2 DB Connection Breakdown
| # | Path | Total | Viewport Pass1 | Daily Pages | Hourly/5min | TableCheck |
|---|------|-------|----------------|-------------|-------------|------------|
| 1 | CACHE | 3 | 0 | 0 | 0 | **3** |
| 2 | DB | 8 | 2 | 2 | 0 | 2 |
| 3 | DB | 19 | 2 | 2 | 0 | 2 |
| 4 | CACHE | 2 | 0 | 0 | 0 | **2** |
| 5 | CACHE | 2 | 0 | 0 | 0 | **2** |
| 6 | HYBRID | 16 | 2 | 2 | 0 | 5 |
| 7 | HYBRID | 11 | 2 | 2 | 0 | 5 |
| 8 | HYBRID | 12 | 2 | 2 | 0 | 5 |
| 9 | HYBRID | 15 | 2 | 2 | 0 | 5 |
| 10 | CACHE+Today | 56 | **21** | 0 | **21** | **14** |
| 11 | HYBRID | 45 | 2 | **6** | 0 | 5 |
| 12 | CACHE+Today | 56 | **21** | 0 | **21** | **14** |
> Sum check: for all 12 samples the breakdown counters add up to the total (a VesselInfo counter is included in the sum but omitted from the table).
**Breakdown of the 56 CACHE+Today connections (#10, #12)**:
- Hourly/5min, 21: today's 00:00-to-now segment (~12 Hourly + ~7 5min + fallback)
- Viewport Pass1, 21: viewport-intersection vessel collection over the same ranges (1 per range)
- TableCheck, 14: 3 Daily + ~11 Hourly/5min existence checks
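The 56-connection totals in rows #10 and #12 are simply the sum of the three categories above, which can be checked directly:

```java
public class ConnectionBreakdownCheck {
    public static void main(String[] args) {
        // Category counts taken from the CACHE+Today breakdown (rows #10 and #12).
        int hourly5min = 21, viewportPass1 = 21, tableCheck = 14;
        System.out.println(hourly5min + viewportPass1 + tableCheck); // 56
    }
}
```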
### 2.3 Simplification Metrics on Cache Paths
On cache paths the raw data is held in memory, so simplification can be measured before and after.
| # | Path | Zoom | Raw points | Simplified | Compression | Simplify time (ms) | Batch reduction |
|---|------|------|------------|------------|-------------|--------------------|-----------------|
| 1 | CACHE | 10 | 1,083,566 | 11,212 | 99% | 133 | 50→3 (94%) |
| 4 | CACHE | 10 | 13,502,970 | 172,066 | 99% | 1,075 | 602→10 (98%) |
| 5 | CACHE | 10 | 13,502,970 | 172,066 | 99% | 981 | 602→10 (98%) |
| 6 | HYBRID | 8 | 7,582,515 | 152,734 | 98% | 500 | 335→12 (96%) |
| 7 | HYBRID | 9 | 1,049,434 | 11,634 | 99% | 74 | 50→5 (90%) |
| 8 | HYBRID | 8 | 1,618,310 | 61,434 | 96% | 125 | 72→5 (93%) |
| 9 | HYBRID | 8 | 3,202,500 | 155,633 | 95% | 277 | 137→12 (91%) |
| 10 | CACHE+Today | 10 | 355,256 | 4,159 | 99% | 24 | 17→6 (65%) |
| 11 | HYBRID | 8 | 41,634,918 | 732,470 | 98% | 2,411 | 1,813→42 (98%) |
| 12 | CACHE+Today | 12 | 14,404,225 | 259,541 | 98% | 1,258 | 639→23 (96%) |
> The DB paths (#2, #3) receive data already reduced by `ST_Simplify` at the SQL level, so an app-level compression ratio cannot be computed (before = after).
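The compression percentages follow directly from the raw and simplified point counts; a minimal check (numbers copied from rows #1 and #4):

```java
public class CompressionRatioCheck {
    // Percentage of points removed by the simplification pipeline, rounded.
    static long compressionPercent(long rawPoints, long simplifiedPoints) {
        return Math.round(100.0 * (1.0 - (double) simplifiedPoints / rawPoints));
    }

    public static void main(String[] args) {
        System.out.println(compressionPercent(1_083_566, 11_212));   // 99, row #1
        System.out.println(compressionPercent(13_502_970, 172_066)); // 99, row #4
    }
}
```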
---
## 3. Quantitative Comparison by Path
### 3.1 CACHE vs DB - Direct Comparison at Similar Scale
#### Large scale: #4 CACHE vs #3 DB
| Metric | DB (#3) | CACHE (#4) | Improvement |
|--------|---------|------------|-------------|
| Vessels | 12,253 | 10,690 | (similar scale) |
| **Response time** | 8,195 ms | 1,439 ms | **5.7x faster** |
| **DB connections** | 19 | 2 | **89% fewer** |
| DB query time | 1,443 ms | 0 ms | **100% eliminated** |
| Batches sent | 11 | 10 | similar |
#### Small scale: #2 DB vs #1 CACHE
| Metric | DB (#2) | CACHE (#1) | Improvement |
|--------|---------|------------|-------------|
| Vessels | 352 | 443 | (similar scale) |
| **Response time** | 7,221 ms | 575 ms | **12.6x faster** |
| **DB connections** | 8 | 3 | **63% fewer** |
| DB query time | 3,475 ms | 0 ms | **100% eliminated** |
| Batches sent | 2 | 3 | similar |
### 3.2 HYBRID Path - Performance vs Scale
5-day range queries (3 days cached + 2 days from DB):
| # | Vessels | Response | DB conns | DB query time |
|---|---------|----------|----------|---------------|
| 7 | 547 | 1,373 ms | 11 | 550 ms |
| 8 | 4,589 | 2,910 ms | 12 | 715 ms |
| 9 | 5,760 | 3,651 ms | 15 | 1,048 ms |
| 6 | 9,958 | 8,900 ms | 16 | 3,301 ms |
| 11 | 52,151 | 105,212 ms | 45 | 93,319 ms |
- Small (~500 vessels): the cached dates absorb most of the work; responses around **1.4 s**.
- Medium (5K-10K vessels): DB query load grows, but the cached dates cushion it; **3-9 s**.
- Large (52K vessels): when the cache-miss dates carry heavy data, DB dependence dominates; **100 s+**.
- The more days the cache covers (currently 3/5 days = 60%), the lighter the HYBRID path's DB load.
### 3.3 CACHE+Today Path - Queries Including Today
| # | Zoom | Vessels | Response | DB conns | Today-segment conns |
|---|------|---------|----------|----------|---------------------|
| 10 | 10 | 105 | 6,091 ms | 56 | 42 (H5m 21 + VP 21) |
| 12 | 12 | 6,990 | 9,744 ms | 56 | 42 (H5m 21 + VP 21) |
**Key findings**:
- Both queries cover the same time range (3 days + today), so the connection structure is identical; only the viewport size differs.
- The today segment (00:00 to now) alone generates **42 DB connections**, a stark contrast to the pure CACHE path (2-3).
- Even #10 with only 105 vessels takes 6 seconds, driven by the per-range connection overhead of the today segment.
### 3.4 Simplification by Zoom Level
| Zoom | Sample # | Raw points | Simplified | Compression | Avg points per vessel |
|------|----------|------------|------------|-------------|-----------------------|
| 8 | #6 | 7,582,515 | 152,734 | 98% | 15.3 |
| 9 | #7 | 1,049,434 | 11,634 | 99% | 21.3 |
| 10 | #4 | 13,502,970 | 172,066 | 99% | 16.1 |
| 12 | #12 | 14,404,225 | 259,541 | 98% | 37.1 |
- Zoom 8-10: compressed to 15-21 points per vessel - well suited to sea-area-level views.
- Zoom 12: 37 points per vessel - retains more points for port-level detail views.
- 95-99% compression across all zoom levels.
---
## 4. DB Connection Composition Analysis
### 4.1 Connection Patterns by Path
```
CACHE (pure)   [==TC==]                                 2-3
               TableCheck only
DB (pure)      [VP][DA][..misc..][TC]                   8-19
               Roughly even split
HYBRID         [VP][DA][..misc..........][TC---]        11-45
               Grows with scale
CACHE+Today    [VP----------][H5m---------][TC------]   56
               Mostly today-segment Hourly/5min + Viewport
```
### 4.2 Connection Pool Impact
Against the Query DataSource pool of 180:
| Path | Per query | 10 concurrent queries | Pool pressure |
|------|-----------|-----------------------|---------------|
| CACHE | 2-3 | 30 | Very low (17%) |
| HYBRID (small) | 11-15 | 150 | Moderate (83%) |
| DB | 8-19 | 190 | Moderate to high |
| CACHE+Today | 56 | 560 | High |
> Connections are used sequentially rather than held simultaneously, so actual concurrent occupancy is lower than these figures. As the cache raises the share of CACHE-path queries, overall pool pressure drops substantially.
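The pool-pressure percentages above are plain ratios against the pool size of 180; a quick check (the 10-query concurrency level is the table's own assumption, and per-query connection counts are taken at their upper bounds):

```java
public class PoolPressureCheck {
    // Worst-case cumulative connections as a percentage of the pool.
    static long poolUsagePercent(int connectionsPerQuery, int concurrentQueries, int poolSize) {
        return Math.round(100.0 * connectionsPerQuery * concurrentQueries / poolSize);
    }

    public static void main(String[] args) {
        System.out.println(poolUsagePercent(3, 10, 180));  // 17, CACHE upper bound
        System.out.println(poolUsagePercent(15, 10, 180)); // 83, HYBRID (small) upper bound
    }
}
```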
---
## 5. Overall Performance Comparison
### 5.1 Key Improvement Metrics
| Metric | DB path | CACHE path | Improvement |
|--------|---------|------------|-------------|
| Response time (large, 10K+ vessels) | 8,195 ms | 1,439 ms | **5.7x** |
| Response time (small, hundreds of vessels) | 7,221 ms | 575 ms | **12.6x** |
| DB connections (large) | 19 | 2 | **89% fewer** |
| DB connections (small) | 8 | 3 | **63% fewer** |
| DB query time | 1,443-3,475 ms | 0 ms | **100% eliminated** |
| Point simplification | SQL ST_Simplify | App-level 95-99% | measurable on cache paths only |
### 5.2 Response Time Distribution by Path
```
Response time (ms, linear scale)
Path             0     2,000   4,000   6,000   8,000  10,000
CACHE (pure)     |█| 575-1,439
HYBRID (small)   |██| 1,373
HYBRID (medium)  |█████| 2,910-3,651
CACHE+Today      |████████████| 6,091-9,744
DB (pure)        |████████████████| 7,221-8,195
HYBRID (large)   |██████████████████| 8,900
```
> HYBRID large (#11, 52K vessels, 105 s) exceeds the scale and is omitted.
### 5.3 Predicted Performance by Usage Scenario
With the D-1 to D-7 cache in place:
| Usage pattern | Expected path | Expected response | DB connections |
|---------------|---------------|-------------------|----------------|
| Past 1-7 days only | CACHE | **0.5-1.5 s** | 2-3 |
| Several past days + today | CACHE+Today | 6-10 s | ~56 |
| Includes data older than 7 days | HYBRID / DB | 1-9 s (scale-dependent) | 8-45 |
---
## 6. Recommended Configuration for Extending the Cache Range
To extend the query period beyond the current D-1 to D-7 cache, the configurations below are recommended.
### 6.1 Current Configuration
```yaml
cache:
  daily-track:
    enabled: true
    retention-days: 7   # cache D-1 to D-7
    max-memory-gb: 6    # maximum memory usage
    warmup-async: true  # asynchronous warm-up
```
- Past queries within 7 days: CACHE path (0.5-1.5 s)
- Queries including data older than 7 days: fall back to the HYBRID/DB paths
### 6.2 Recommended Expansion
| Scenario | retention-days | max-memory-gb | Expected effect |
|----------|----------------|---------------|-----------------|
| **Current** | 7 | 6 | CACHE within 1 week, DB beyond |
| **2-week expansion** | 14 | 12 | CACHE covers 2-week replays |
| **1-month expansion** | 30 | 25 | CACHE covers monthly analysis queries |
**Considerations when expanding**:
1. **Memory sizing**: the current 7-day cache measures ≈ 4 GB; assume linear growth.
 - 14 days: ~12 GB, 30 days: ~25 GB
 - Check server memory headroom and the JVM heap setting (`-Xmx`).
2. **Warm-up time**: initial load time grows in proportion to retention-days.
 - 7 days: ~1-2 min, 14 days: ~2-4 min, 30 days: ~5-10 min (asynchronous, so service availability is unaffected)
3. **Lower HYBRID share**: extending retention-days reduces DB fallbacks, so the HYBRID path shrinks and the pure CACHE share grows - a direct relief for the DB connection pool.
4. **CACHE+Today is unaffected by retention-days**: today's (D-0) data is always read from the Hourly/5min tables in the DB; optimizing that segment's connections is a separate task.
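The recommended `max-memory-gb` values scale roughly linearly from the current 6 GB at 7 days; a small sketch of that projection (the linearity itself is the report's stated assumption, not a measurement):

```java
public class CacheMemoryProjection {
    // Linear projection of cache memory from a baseline configuration.
    static double projectedGb(int retentionDays, int baselineDays, double baselineGb) {
        return baselineGb * retentionDays / baselineDays;
    }

    public static void main(String[] args) {
        System.out.println(projectedGb(14, 7, 6.0)); // 12.0, matches the 2-week row
        System.out.println(projectedGb(30, 7, 6.0)); // ~25.7, which the report rounds to 25
    }
}
```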
### 6.3 Phased Expansion Strategy
```
Phase 1 (current)    : retention-days=7,  max-memory-gb=6  → 1-week coverage
Phase 2 (recommended): retention-days=14, max-memory-gb=12 → 2-week coverage, supports week-over-week analysis
Phase 3 (optional)   : retention-days=30, max-memory-gb=25 → monthly coverage, supports long-term track analysis
```
At each transition, monitor server memory headroom and warm-up time, and adjust the JVM heap setting along with it.
---
## 7. Conclusion
### 7.1 Confirmed Cache Effects
1. **Response time**: the pure CACHE path is **5.7-12.6x** faster than the DB path.
2. **DB connections**: the pure CACHE path uses **63-89%** fewer connections than the DB path.
3. **Simplification**: on cache paths, **95-99%** point compression by zoom level and **90-98%** fewer batch transmissions.
4. **DB query time**: **0 ms** on the CACHE path - DB load fully removed.
### 7.2 Operational Recommendations
| Item | Current state | Recommended direction |
|------|---------------|-----------------------|
| Cache retention | 7 days | Consider extending to 14-30 days based on usage patterns |
| CACHE+Today connections | Per-range DB connections for the today segment (56) | Consider merging today's ranges or adding a separate cache |

@@ -0,0 +1,102 @@
# Daily Cache Performance Improvement Summary Report
| Item | Details |
|------|---------|
| Measurement date | 2026-02-07 |
| Target | Vessel track replay service (WebSocket streaming) |
| Improvement | 7 days of daily aggregate data cached in memory |
| Samples | 12 (CACHE 3, DB 2, HYBRID 5, CACHE+Today 2) |
---
## 1. Key Performance Improvements
| Metric | DB path (before) | CACHE path (after) | Improvement |
|--------|------------------|--------------------|-------------|
| **Response time** (10K+ vessels) | 8.2 s | 1.4 s | **5.7x faster** |
| **Response time** (hundreds of vessels) | 7.2 s | 0.6 s | **12.6x faster** |
| **DB connections** (10K+ vessels) | 19 | 2 | **89% fewer** |
| **DB connections** (hundreds of vessels) | 8 | 3 | **63% fewer** |
| **DB query time** | 1.4-3.5 s | 0 s | **100% eliminated** |
| **Point compression** | SQL-side | App-level 95-99% | equivalent quality |
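The speedup factors above derive from the detailed benchmark's raw millisecond figures (8,195→1,439 ms and 7,221→575 ms); a minimal check:

```java
public class SpeedupCheck {
    // Speedup factor rounded to one decimal place.
    static double speedup(double beforeMs, double afterMs) {
        return Math.round(beforeMs / afterMs * 10.0) / 10.0;
    }

    public static void main(String[] args) {
        System.out.println(speedup(8195, 1439)); // 5.7, large-scale row
        System.out.println(speedup(7221, 575));  // 12.6, small-scale row
    }
}
```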
---
## 2. Response Time by Path
```
Path               Response time
CACHE (pure)       ██ 0.6-1.4 s
HYBRID (small)     ██ 1.4 s
HYBRID (medium)    █████ 2.9-3.7 s
CACHE+Today        ████████████ 6.1-9.7 s
DB (pure)          ████████████████ 7.2-8.2 s
```
- **CACHE**: fastest responses when only cached past data is queried
- **HYBRID**: cache + DB merge - the higher the cache share, the lighter the DB load
- **CACHE+Today**: including today triggers per-range Hourly/5min queries, producing many connections
---
## 3. Change in DB Connection Pool Pressure
Against the Query DataSource pool of 180:
| Path | Connections per query | 10 concurrent queries | Pool usage |
|------|-----------------------|-----------------------|------------|
| CACHE | 2-3 | ~30 | **17%** (ample headroom) |
| HYBRID (small) | 11-15 | ~150 | 83% |
| DB | 8-19 | ~190 | 100%+ |
> As the cache raises the share of CACHE-path queries, overall connection pool pressure drops substantially.
---
## 4. Simplification Pipeline Effect
On cache paths, raw data passes through a three-stage simplification (Douglas-Peucker + distance/time sampling + zoom-level sampling):
| Zoom level | Raw points | Simplified | Compression | Avg per vessel |
|------------|------------|------------|-------------|----------------|
| 8 | 7.6M | 153K | 98% | 15 points |
| 9 | 1.0M | 12K | 99% | 21 points |
| 10 | 13.5M | 172K | 99% | 16 points |
| 12 | 14.4M | 260K | 98% | 37 points |
- Simplification CPU time: 24 ms to 1,258 ms (pure CPU work, no DB waits)
- 95-99% data compression across all zoom levels
---
## 5. Expected Performance by Usage Scenario
| Usage pattern | Expected path | Expected response | DB connections |
|---------------|---------------|-------------------|----------------|
| Past 1-7 days only | CACHE | **0.6-1.4 s** | 2-3 |
| Several past days + today | CACHE+Today | 6-10 s | ~56 |
| Includes data older than 7 days | HYBRID / DB | 1-9 s (scale-dependent) | 8-45 |
---
## 6. Recommended Future Expansion
| Scenario | Cache retention | Memory | Effect |
|----------|-----------------|--------|--------|
| Current | 7 days | 6 GB | CACHE path within 1 week |
| 2-week expansion | 14 days | 12 GB | Supports week-over-week analysis |
| 1-month expansion | 30 days | 25 GB | Supports monthly track analysis |
> Extending cache retention shrinks the HYBRID share in favor of pure CACHE → further DB relief
---
## 7. Conclusion
| Item | Effect |
|------|--------|
| Response speed | **5.7-12.6x** faster than DB |
| DB load | **63-89%** fewer connections, **100%** less query time |
| Data quality | 95-99% compression by zoom level, on par with the DB path |
| Concurrent-user capacity | Reduced DB connection contention raises concurrent capacity |
| Scalability | Further gains available by extending cache retention |

@@ -77,6 +77,12 @@
<artifactId>spring-boot-starter-aop</artifactId>
</dependency>
<!-- WebFlux (WebClient for S&P AIS API) -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-webflux</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-cache</artifactId>

scripts/deploy-only.bat Normal file
@@ -0,0 +1,219 @@
@echo off
chcp 65001 >nul
REM ===============================================
REM Signal Batch Deploy Only Script
REM (Build with IntelliJ UI first)
REM ===============================================
setlocal enabledelayedexpansion
REM Configuration
set "SERVER_IP=10.26.252.51"
set "SERVER_USER=root"
set "SERVER_PATH=/devdata/apps/bridge-db-monitoring"
set "JAR_NAME=vessel-batch-aggregation.jar"
set "BACKUP_DIR=!SERVER_PATH!/backups"
echo ===============================================
echo Signal Batch Deploy System (Deploy Only)
echo ===============================================
echo [INFO] Deploy Start: !date! !time!
echo [INFO] Target Server: !SERVER_IP!
echo.
REM 1. Set correct working directory and check JAR file
echo =============== Working Directory Setup ===============
echo [INFO] Current directory: !CD!
echo [INFO] Script directory: %~dp0
REM Change to project root directory (parent of scripts)
cd /d "%~dp0.."
echo [INFO] Project root directory: !CD!
echo.
echo =============== JAR File Check ===============
set "JAR_PATH=target\!JAR_NAME!"
if not exist "!JAR_PATH!" (
echo [ERROR] JAR file not found: !JAR_PATH!
echo [INFO] Current directory: !CD!
echo.
echo Please build the project first using IntelliJ IDEA:
echo 1. Open Maven tool window: View ^> Tool Windows ^> Maven
echo 2. Double-click: Lifecycle ^> clean
echo 3. Double-click: Lifecycle ^> package
echo 4. Verify target/!JAR_NAME! exists
echo.
echo Checking for any JAR files in target directory:
if exist "target\" (
dir target\*.jar 2>nul
if !ERRORLEVEL! neq 0 (
echo [INFO] Target directory exists but no JAR files found
)
) else (
echo [INFO] Target directory does not exist - project not built yet
)
pause
exit /b 1
)
for %%I in ("!JAR_PATH!") do (
echo [INFO] JAR File: %%~nxI
echo [INFO] File Size: %%~zI bytes
echo [INFO] Modified: %%~tI
)
echo [SUCCESS] JAR file ready for deployment
REM 2. SSH Connection Test
echo.
echo =============== SSH Connection Test ===============
ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "echo 'SSH connection OK'" 2>nul
set CONNECTION_RESULT=!ERRORLEVEL!
if !CONNECTION_RESULT! neq 0 (
echo [ERROR] SSH connection failed
echo [INFO] Please check:
echo - SSH key authentication setup
echo - Network connectivity to !SERVER_IP!
echo - Server is accessible
echo.
echo Run setup-ssh-key.bat to configure SSH keys
pause
exit /b 1
)
echo [SUCCESS] SSH connection successful
REM 3. Check current server status
echo.
echo =============== Current Server Status ===============
ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh status" 2>nul
set SERVER_RUNNING=!ERRORLEVEL!
REM 4. Create backup
echo.
echo =============== Create Backup ===============
ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "mkdir -p !BACKUP_DIR!"
REM Generate backup timestamp
for /f "tokens=2 delims==" %%I in ('wmic os get localdatetime /value') do if not "%%I"=="" set DATETIME=%%I
set BACKUP_TIMESTAMP=!DATETIME:~0,8!_!DATETIME:~8,6!
ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "if [ -f !SERVER_PATH!/!JAR_NAME! ]; then echo '[INFO] Creating backup...'; cp !SERVER_PATH!/!JAR_NAME! !BACKUP_DIR!/!JAR_NAME!.backup.!BACKUP_TIMESTAMP!; echo '[INFO] Backup created: !BACKUP_DIR!/!JAR_NAME!.backup.!BACKUP_TIMESTAMP!'; ls -la !BACKUP_DIR!/!JAR_NAME!.backup.!BACKUP_TIMESTAMP!; else echo '[INFO] No existing JAR file to backup (first deployment)'; fi"
REM 5. Stop application
if !SERVER_RUNNING! equ 0 (
echo.
echo =============== Stop Application ===============
echo [INFO] Stopping running application...
ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh stop"
if !ERRORLEVEL! neq 0 (
echo [ERROR] Failed to stop application
exit /b 1
)
echo [SUCCESS] Application stopped
) else (
echo.
echo [INFO] Application not running, proceeding with deployment
)
REM 6. Deploy new JAR
echo.
echo =============== Deploy New JAR ===============
echo [INFO] Transferring JAR file...
scp "!JAR_PATH!" !SERVER_USER!@!SERVER_IP!:!SERVER_PATH!/
if !ERRORLEVEL! neq 0 (
echo [ERROR] File transfer failed
goto :rollback_option
)
echo [INFO] Setting permissions...
ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "chmod 644 !SERVER_PATH!/!JAR_NAME!"
echo [SUCCESS] JAR file deployed
REM 7. Transfer version info (if exists)
echo.
echo =============== Version Information ===============
if exist "target\version.txt" (
echo [INFO] Transferring version information...
scp "target\version.txt" !SERVER_USER!@!SERVER_IP!:!SERVER_PATH!/
) else (
echo [INFO] No version file found, creating basic version info...
ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "echo 'DEPLOY_TIME=!date! !time!' > !SERVER_PATH!/version.txt"
)
REM 8. Start application
echo.
echo =============== Start Application ===============
echo [INFO] Starting application...
ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh start"
if !ERRORLEVEL! neq 0 (
echo [ERROR] Failed to start application
goto :rollback_option
)
REM 9. Wait and verify
echo.
echo =============== Deployment Verification ===============
echo [INFO] Waiting for application startup (30 seconds)...
timeout /t 30 /nobreak > nul
echo [INFO] Checking application status...
ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh status"
if !ERRORLEVEL! neq 0 (
echo [ERROR] Application not running properly
goto :rollback_option
)
echo [INFO] Performing health check...
ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "curl -f http://localhost:8090/actuator/health --max-time 10" 2>nul
if !ERRORLEVEL! neq 0 (
echo [WARN] Health check failed, but application appears to be running
echo [INFO] Give it a few more minutes to fully start up
)
REM 10. Cleanup old backups
echo.
echo =============== Cleanup ===============
echo [INFO] Cleaning up old backups (keeping recent 7)...
ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "cd !BACKUP_DIR!; ls -t !JAR_NAME!.backup.* 2>/dev/null | tail -n +8 | xargs rm -f 2>/dev/null || true; echo '[INFO] Backup cleanup completed'"
REM 11. Success
echo.
echo =============== Deployment Successful ===============
echo [SUCCESS] Deployment completed successfully!
echo [INFO] Deployment time: !date! !time!
echo [INFO] Backup created: !JAR_NAME!.backup.!BACKUP_TIMESTAMP!
echo [INFO] Server dashboard: http://!SERVER_IP!:8090/static/admin/batch-admin.html
echo [INFO] Server logs: ssh !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh logs"
echo.
echo Quick commands:
echo server-status.bat - Check server status
echo server-logs.bat tail - Monitor logs
echo rollback.bat !BACKUP_TIMESTAMP! - Rollback if needed
goto :end
:rollback_option
echo.
echo =============== Deployment Failed ===============
echo [ERROR] Deployment failed!
echo.
set /p ROLLBACK="Attempt rollback to previous version? (y/N): "
if /i "!ROLLBACK!"=="y" (
echo [INFO] Attempting rollback...
if defined BACKUP_TIMESTAMP (
call rollback.bat !BACKUP_TIMESTAMP!
) else (
echo [ERROR] No backup timestamp available for rollback
echo [INFO] Manual recovery may be required
)
) else (
echo [INFO] Manual recovery required
echo [INFO] SSH to server: ssh !SERVER_USER!@!SERVER_IP!
echo [INFO] Check status: cd !SERVER_PATH! && ./vessel-batch-control.sh status
)
exit /b 1
:end
endlocal

@@ -0,0 +1,47 @@
@echo off
REM ====================================
REM Query-only server deployment script (10.29.17.90)
REM ====================================
echo ======================================
echo Query-Only Server Deployment Script
echo Target: 10.29.17.90
echo Profile: query
echo ======================================
REM Move to the project root directory
cd /d %~dp0\..
REM Build
echo.
echo [1/3] Building project...
call mvn clean package -DskipTests
if %ERRORLEVEL% NEQ 0 (
echo Build failed!
pause
exit /b 1
)
echo.
echo [2/3] Stopping existing application...
REM Kill the existing process on the remote server via SSH
ssh mpc@10.29.17.90 "pkill -f 'signal_batch.*query' || true"
echo.
echo [3/3] Deploying and starting application...
REM Copy the JAR file
scp target\signal_batch-0.0.1-SNAPSHOT.jar mpc@10.29.17.90:/home/mpc/app/
REM Start the application on the remote server (query profile)
ssh mpc@10.29.17.90 "cd /home/mpc/app && nohup java -jar signal_batch-0.0.1-SNAPSHOT.jar --spring.profiles.active=query > query-server.log 2>&1 &"
echo.
echo ======================================
echo Deployment completed!
echo Server: 10.29.17.90
echo Profile: query
echo Log: /home/mpc/app/query-server.log
echo ======================================
pause

scripts/deploy-safe.bat Normal file
@@ -0,0 +1,237 @@
@echo off
chcp 65001 >nul
REM ===============================================
REM Signal Batch Safe Deploy Script
REM (with running application check)
REM ===============================================
setlocal enabledelayedexpansion
REM Configuration
set "SERVER_IP=10.26.252.48"
set "SERVER_USER=root"
set "SERVER_PATH=/devdata/apps/bridge-db-monitoring"
set "JAR_NAME=vessel-batch-aggregation.jar"
set "BACKUP_DIR=!SERVER_PATH!/backups"
echo ===============================================
echo Signal Batch Safe Deploy System
echo ===============================================
echo [INFO] Deploy Start: !date! !time!
echo [INFO] Target Server: !SERVER_IP!
echo.
REM Set working directory
cd /d "%~dp0.."
echo [INFO] Project directory: !CD!
REM 1. Check JAR file
echo.
echo =============== JAR File Check ===============
set "JAR_PATH=target\!JAR_NAME!"
if not exist "!JAR_PATH!" (
echo [ERROR] JAR file not found: !JAR_PATH!
echo [INFO] Please build the project first using IntelliJ Maven
pause
exit /b 1
)
for %%I in ("!JAR_PATH!") do (
echo [INFO] JAR File: %%~nxI
echo [INFO] File Size: %%~zI bytes
echo [INFO] Modified: %%~tI
)
REM 2. SSH Connection Test
echo.
echo =============== SSH Connection Test ===============
ssh !SERVER_USER!@!SERVER_IP! "echo 'SSH connection OK'" 2>nul
if !ERRORLEVEL! neq 0 (
echo [ERROR] SSH connection failed
pause
exit /b 1
)
echo [SUCCESS] SSH connection successful
REM 3. Check current application status
echo.
echo =============== Current Application Status ===============
echo [INFO] Checking if application is currently running...
ssh !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh status" 2>nul
set APP_STATUS=!ERRORLEVEL!
if !APP_STATUS! equ 0 (
echo.
echo [WARNING] Application is currently RUNNING on the server!
echo.
echo =============== Deployment Options ===============
echo 1. Continue with deployment (stop → deploy → start)
echo 2. Cancel deployment (keep current version running)
echo 3. Check application details first
echo.
set /p DEPLOY_CHOICE="Choose option (1-3): "
if "!DEPLOY_CHOICE!"=="2" (
echo [INFO] Deployment cancelled by user
echo [INFO] Current application continues running
pause
exit /b 0
)
if "!DEPLOY_CHOICE!"=="3" (
echo.
echo =============== Application Details ===============
ssh !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh status"
echo.
ssh !SERVER_USER!@!SERVER_IP! "curl -s http://localhost:8090/actuator/health --max-time 5 2>/dev/null | python -m json.tool 2>/dev/null || echo 'Health endpoint not available'"
echo.
set /p FINAL_CHOICE="Proceed with deployment? (y/N): "
if /i not "!FINAL_CHOICE!"=="y" (
echo [INFO] Deployment cancelled
pause
exit /b 0
)
)
if not "!DEPLOY_CHOICE!"=="1" if not "!DEPLOY_CHOICE!"=="3" (
echo [ERROR] Invalid choice. Deployment cancelled.
pause
exit /b 1
)
echo.
echo [INFO] Proceeding with deployment...
echo [INFO] Current application will be stopped during deployment
) else (
echo [INFO] Application is not currently running
echo [INFO] Proceeding with fresh deployment
)
REM 4. Create backup timestamp
for /f "tokens=2 delims==" %%I in ('wmic os get localdatetime /value') do if not "%%I"=="" set DATETIME=%%I
set BACKUP_TIMESTAMP=!DATETIME:~0,8!_!DATETIME:~8,6!
REM 5. Create backup (if existing JAR exists)
echo.
echo =============== Create Backup ===============
ssh !SERVER_USER!@!SERVER_IP! "mkdir -p !BACKUP_DIR!"
REM Note: cmd.exe cannot span a quoted ssh command across multiple lines, so the remote shell script is passed on one line.
ssh !SERVER_USER!@!SERVER_IP! "if [ -f !SERVER_PATH!/!JAR_NAME! ]; then echo '[INFO] Creating backup of current version...'; cp !SERVER_PATH!/!JAR_NAME! !BACKUP_DIR!/!JAR_NAME!.backup.!BACKUP_TIMESTAMP!; echo '[SUCCESS] Backup created: !BACKUP_DIR!/!JAR_NAME!.backup.!BACKUP_TIMESTAMP!'; ls -la !BACKUP_DIR!/!JAR_NAME!.backup.!BACKUP_TIMESTAMP!; else echo '[INFO] No existing JAR file to backup (first deployment)'; fi"
REM 6. Stop application (if running)
if !APP_STATUS! equ 0 (
echo.
echo =============== Stop Current Application ===============
echo [INFO] Gracefully stopping current application...
ssh !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh stop"
if !ERRORLEVEL! neq 0 (
echo [ERROR] Failed to stop application gracefully
set /p FORCE_STOP="Force stop and continue? (y/N): "
if /i not "!FORCE_STOP!"=="y" (
echo [INFO] Deployment cancelled
exit /b 1
)
echo [INFO] Attempting force stop...
ssh !SERVER_USER!@!SERVER_IP! "pkill -f !JAR_NAME! || true"
)
echo [SUCCESS] Application stopped
)
REM 7. Deploy new JAR
echo.
echo =============== Deploy New Version ===============
echo [INFO] Transferring new JAR file...
scp "!JAR_PATH!" !SERVER_USER!@!SERVER_IP!:!SERVER_PATH!/
if !ERRORLEVEL! neq 0 (
echo [ERROR] File transfer failed
goto :deployment_failed
)
ssh !SERVER_USER!@!SERVER_IP! "chmod +x !SERVER_PATH!/!JAR_NAME!"
echo [SUCCESS] New version deployed
REM 8. Transfer version info
if exist "target\version.txt" (
scp "target\version.txt" !SERVER_USER!@!SERVER_IP!:!SERVER_PATH!/
)
REM 9. Start new application
echo.
echo =============== Start New Application ===============
echo [INFO] Starting new version...
ssh !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh start"
if !ERRORLEVEL! neq 0 (
echo [ERROR] Failed to start new application
goto :deployment_failed
)
REM 10. Verify deployment
echo.
echo =============== Verify Deployment ===============
echo [INFO] Waiting for application startup (30 seconds)...
timeout /t 30 /nobreak > nul
ssh !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh status"
if !ERRORLEVEL! neq 0 (
echo [ERROR] New application is not running properly
goto :deployment_failed
)
echo [INFO] Performing health check...
ssh !SERVER_USER!@!SERVER_IP! "curl -f http://localhost:8090/actuator/health --max-time 10" 2>nul
if !ERRORLEVEL! neq 0 (
echo [WARN] Health check failed, but application is running
echo [INFO] Manual verification recommended
)
REM 11. Success
echo.
echo =============== Deployment Successful ===============
echo [SUCCESS] Safe deployment completed successfully!
echo [INFO] Deployment time: !date! !time!
echo [INFO] Backup: !JAR_NAME!.backup.!BACKUP_TIMESTAMP!
echo [INFO] Dashboard: http://!SERVER_IP!:8090/static/admin/batch-admin.html
echo.
echo Quick commands:
echo server-status.bat - Check status
echo server-logs.bat tail - Monitor logs
echo rollback.bat !BACKUP_TIMESTAMP! - Rollback if needed
goto :end
:deployment_failed
echo.
echo =============== Deployment Failed ===============
echo [ERROR] Deployment failed!
echo.
set /p AUTO_ROLLBACK="Attempt automatic rollback? (y/N): "
if /i "!AUTO_ROLLBACK!"=="y" (
if defined BACKUP_TIMESTAMP (
echo [INFO] Attempting rollback to: !BACKUP_TIMESTAMP!
call rollback.bat !BACKUP_TIMESTAMP!
) else (
echo [ERROR] No backup available for automatic rollback
)
) else (
echo [INFO] Manual recovery required
echo [INFO] Available backups:
ssh !SERVER_USER!@!SERVER_IP! "ls -la !BACKUP_DIR!/!JAR_NAME!.backup.* 2>/dev/null || echo 'No backups found'"
)
exit /b 1
:end
endlocal
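The backup suffix created in step 4 slices wmic's `localdatetime` output (`yyyyMMddHHmmss…`) into `yyyyMMdd_HHmmss` via `!DATETIME:~0,8!_!DATETIME:~8,6!`. A minimal Python sketch of the same slicing (not part of this commit; handy when other tooling needs to generate or parse matching backup names):

```python
from datetime import datetime

def make_backup_timestamp(now: datetime) -> str:
    # Mirrors the batch expression !DATETIME:~0,8!_!DATETIME:~8,6!
    raw = now.strftime("%Y%m%d%H%M%S")  # prefix of wmic's localdatetime value
    return raw[:8] + "_" + raw[8:14]

def parse_backup_timestamp(suffix: str) -> datetime:
    # Inverse: recover the datetime from a backup file suffix
    return datetime.strptime(suffix, "%Y%m%d_%H%M%S")

print(make_backup_timestamp(datetime(2025, 11, 7, 14, 30, 22)))  # 20251107_143022
```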


@ -0,0 +1,139 @@
-- SQL to diagnose DataSource issues
-- Run on both 10.26.252.51 and 10.29.17.90 and compare the results
-- ============================================
-- 1. Check currently active connections
-- ============================================
SELECT
pid,
usename,
application_name,
client_addr,
backend_start,
state,
query_start,
LEFT(query, 100) as current_query
FROM pg_stat_activity
WHERE datname IN ('mdadb', 'mpcdb2')
AND application_name LIKE '%vessel%'
ORDER BY backend_start DESC;
-- ============================================
-- 2. Check recent INSERT/UPDATE statistics
-- ============================================
SELECT
schemaname,
tablename,
n_tup_ins as total_inserts,
n_tup_upd as total_updates,
n_tup_del as total_deletes,
n_live_tup as live_rows,
last_autoanalyze,
last_autovacuum
FROM pg_stat_user_tables
WHERE schemaname = 'signal'
AND tablename IN (
't_vessel_tracks_5min',
't_vessel_tracks_hourly',
't_vessel_tracks_daily',
't_abnormal_tracks',
't_vessel_latest_position'
)
ORDER BY n_tup_ins DESC;
-- ============================================
-- 3. Check recent data (time of last INSERT)
-- ============================================
-- 5-minute aggregation
SELECT
'tracks_5min' as table_name,
COUNT(*) as total_rows,
MAX(time_bucket) as last_time_bucket,
NOW() - MAX(time_bucket) as data_delay
FROM signal.t_vessel_tracks_5min;
-- Hourly aggregation
SELECT
'tracks_hourly' as table_name,
COUNT(*) as total_rows,
MAX(time_bucket) as last_time_bucket,
NOW() - MAX(time_bucket) as data_delay
FROM signal.t_vessel_tracks_hourly;
-- Daily aggregation
SELECT
'tracks_daily' as table_name,
COUNT(*) as total_rows,
MAX(time_bucket) as last_time_bucket,
NOW() - MAX(time_bucket) as data_delay
FROM signal.t_vessel_tracks_daily;
-- Abnormal tracks
SELECT
'abnormal_tracks' as table_name,
COUNT(*) as total_rows,
MAX(time_bucket) as last_time_bucket,
NOW() - MAX(time_bucket) as data_delay
FROM signal.t_abnormal_tracks;
-- Latest position
SELECT
'latest_position' as table_name,
COUNT(*) as total_rows,
MAX(last_update) as last_update,
NOW() - MAX(last_update) as data_delay
FROM signal.t_vessel_latest_position;
-- ============================================
-- 4. Check data for a specific window (last 1 hour)
-- ============================================
SELECT
'5min_last_hour' as category,
COUNT(*) as count,
COUNT(DISTINCT sig_src_cd) as source_count,
COUNT(DISTINCT target_id) as vessel_count
FROM signal.t_vessel_tracks_5min
WHERE time_bucket >= NOW() - INTERVAL '1 hour';
SELECT
'hourly_last_day' as category,
COUNT(*) as count,
COUNT(DISTINCT sig_src_cd) as source_count,
COUNT(DISTINCT target_id) as vessel_count
FROM signal.t_vessel_tracks_hourly
WHERE time_bucket >= NOW() - INTERVAL '1 day';
-- ============================================
-- 5. Check table sizes
-- ============================================
SELECT
schemaname,
tablename,
pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS total_size,
pg_size_pretty(pg_relation_size(schemaname||'.'||tablename)) AS table_size,
pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename) - pg_relation_size(schemaname||'.'||tablename)) AS indexes_size
FROM pg_tables
WHERE schemaname = 'signal'
AND tablename IN (
't_vessel_tracks_5min',
't_vessel_tracks_hourly',
't_vessel_tracks_daily',
't_abnormal_tracks',
't_vessel_latest_position'
)
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;
-- ============================================
-- 6. Check sample data (latest 10 rows)
-- ============================================
SELECT
sig_src_cd,
target_id,
time_bucket,
point_count,
avg_speed,
max_speed
FROM signal.t_vessel_tracks_5min
ORDER BY time_bucket DESC
LIMIT 10;


@ -0,0 +1,24 @@
# Add to application.yml or application-prod.yml
# Logging settings for surfacing the actual SQL errors
logging:
level:
# PostgreSQL JDBC driver logs
org.postgresql: DEBUG
org.postgresql.Driver: DEBUG
# Spring JDBC logs
org.springframework.jdbc: DEBUG
org.springframework.jdbc.core.JdbcTemplate: DEBUG
org.springframework.jdbc.core.StatementCreatorUtils: TRACE
# Spring Batch logs
org.springframework.batch: DEBUG
# Batch processor logs
gc.mda.signal_batch.batch.processor: DEBUG
gc.mda.signal_batch.batch.processor.HourlyTrackProcessor: TRACE
gc.mda.signal_batch.batch.processor.DailyTrackProcessor: TRACE
# SQL query parameter logging
org.springframework.jdbc.core.namedparam: TRACE


@ -0,0 +1,122 @@
-- Script to fix invalid geometries
-- Resolves the "Too few points" error by repeating a single point twice
-- ========================================
-- 1. Backup (optional)
-- ========================================
-- CREATE TABLE signal.t_vessel_tracks_5min_backup_20251107 AS
-- SELECT * FROM signal.t_vessel_tracks_5min
-- WHERE track_geom IS NOT NULL AND NOT public.ST_IsValid(track_geom);
-- ========================================
-- 2. Fix invalid geometries (DRY RUN - check first)
-- ========================================
SELECT
'DRY RUN - Will fix these records' as action,
sig_src_cd,
target_id,
time_bucket,
public.ST_NPoints(track_geom) as current_points,
public.ST_AsText(track_geom) as current_wkt,
-- Preview of the WKT after the fix
CASE
WHEN public.ST_NPoints(track_geom) = 1 THEN
'LINESTRING M(' ||
public.ST_X(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_Y(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_M(public.ST_PointN(track_geom, 1)) || ',' ||
public.ST_X(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_Y(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_M(public.ST_PointN(track_geom, 1)) || ')'
ELSE 'NO FIX NEEDED'
END as new_wkt
FROM signal.t_vessel_tracks_5min
WHERE track_geom IS NOT NULL
AND public.ST_IsValidReason(track_geom) LIKE '%Too few points%'
LIMIT 10;
-- ========================================
-- 3. Apply the fix (run after verifying)
-- ========================================
-- Caution: this query modifies live data!
-- Review the DRY RUN results, then uncomment and run.
/*
UPDATE signal.t_vessel_tracks_5min
SET track_geom = public.ST_GeomFromText(
'LINESTRING M(' ||
public.ST_X(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_Y(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_M(public.ST_PointN(track_geom, 1)) || ',' ||
public.ST_X(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_Y(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_M(public.ST_PointN(track_geom, 1)) || ')',
4326
)
WHERE track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) = 1
AND public.ST_IsValidReason(track_geom) LIKE '%Too few points%';
*/
-- ========================================
-- 4. Verify the fix
-- ========================================
SELECT
'AFTER FIX' as status,
COUNT(*) as total_records,
COUNT(CASE WHEN public.ST_IsValid(track_geom) THEN 1 END) as valid_count,
COUNT(CASE WHEN NOT public.ST_IsValid(track_geom) THEN 1 END) as invalid_count
FROM signal.t_vessel_tracks_5min
WHERE track_geom IS NOT NULL;
-- ========================================
-- 5. Check geometries that are still invalid
-- ========================================
SELECT
'REMAINING INVALID' as status,
public.ST_IsValidReason(track_geom) as reason,
COUNT(*) as count
FROM signal.t_vessel_tracks_5min
WHERE track_geom IS NOT NULL
AND NOT public.ST_IsValid(track_geom)
GROUP BY public.ST_IsValidReason(track_geom);
-- ========================================
-- 6. Apply the same fix to the hourly table (if needed)
-- ========================================
/*
UPDATE signal.t_vessel_tracks_hourly
SET track_geom = public.ST_GeomFromText(
'LINESTRING M(' ||
public.ST_X(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_Y(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_M(public.ST_PointN(track_geom, 1)) || ',' ||
public.ST_X(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_Y(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_M(public.ST_PointN(track_geom, 1)) || ')',
4326
)
WHERE track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) = 1
AND public.ST_IsValidReason(track_geom) LIKE '%Too few points%';
*/
-- ========================================
-- 7. Apply the same fix to the daily table (if needed)
-- ========================================
/*
UPDATE signal.t_vessel_tracks_daily
SET track_geom = public.ST_GeomFromText(
'LINESTRING M(' ||
public.ST_X(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_Y(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_M(public.ST_PointN(track_geom, 1)) || ',' ||
public.ST_X(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_Y(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_M(public.ST_PointN(track_geom, 1)) || ')',
4326
)
WHERE track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) = 1
AND public.ST_IsValidReason(track_geom) LIKE '%Too few points%';
*/
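The repair rule in the UPDATE statements above is purely textual: a degenerate one-point track becomes a valid two-point `LINESTRING M` by repeating the point. A small Python sketch of that WKT construction (illustration only, not part of the commit; the function name is made up here):

```python
def fix_single_point_linestring(x: float, y: float, m: float) -> str:
    # Repeat the single point so the LINESTRING M has the minimum two points
    pt = f"{x} {y} {m}"
    return f"LINESTRING M({pt},{pt})"

print(fix_single_point_linestring(126.0, 37.0, 5.0))
# LINESTRING M(126.0 37.0 5.0,126.0 37.0 5.0)
```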


@ -0,0 +1,24 @@
# Script to schema-qualify PostGIS function calls
# Rewrites ST_GeomFromText -> public.ST_GeomFromText
$javaDir = "C:\Users\lht87\IdeaProjects\signal_batch\src\main\java"
$files = Get-ChildItem -Path $javaDir -Filter "*.java" -Recurse
$count = 0
foreach ($file in $files) {
$content = Get-Content $file.FullName -Raw -Encoding UTF8
# Prefix ST_GeomFromText with public. (only where not already qualified)
$newContent = $content -replace '(?<!public\.)ST_GeomFromText\(', 'public.ST_GeomFromText('
# Also rewrite ST_Length
$newContent = $newContent -replace '(?<!public\.)ST_Length\(', 'public.ST_Length('
if ($content -ne $newContent) {
Set-Content -Path $file.FullName -Value $newContent -Encoding UTF8 -NoNewline
Write-Host "Updated: $($file.FullName)"
$count++
}
}
Write-Host "`nTotal files updated: $count"
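The core of the script is the negative lookbehind `(?<!public\.)`, which skips calls that are already schema-qualified. The same rule in Python, for illustration (the helper name is an assumption, not from the commit):

```python
import re

def qualify_postgis_calls(source: str, names=("ST_GeomFromText", "ST_Length")) -> str:
    # Prefix each call with public. unless it is already qualified
    for name in names:
        source = re.sub(rf"(?<!public\.){name}\(", f"public.{name}(", source)
    return source

sql = "SELECT ST_Length(g), public.ST_GeomFromText(w, 4326) FROM t"
print(qualify_postgis_calls(sql))
# SELECT public.ST_Length(g), public.ST_GeomFromText(w, 4326) FROM t
```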


@ -0,0 +1,223 @@
#!/bin/bash
# Spring Batch metadata force-reset script
# Forces the reset regardless of running job state
echo "================================================"
echo "Spring Batch Metadata FORCE Reset"
echo "WARNING: This will FORCE delete ALL batch job history!"
echo " Including running jobs!"
echo "Time: $(date)"
echo "================================================"
# Color codes
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
# Database connection info
DB_HOST="localhost"
DB_PORT="5432"
DB_NAME="mdadb"
DB_USER="mda"
DB_SCHEMA="public"
echo -e "${RED}⚠️ CRITICAL WARNING: This is a FORCE reset operation!${NC}"
echo "This will:"
echo "- Delete ALL batch job history"
echo "- Clear ALL running job states"
echo "- Reset ALL sequences"
echo "- Cannot be undone (except from backup)"
echo ""
echo -e "${YELLOW}This should only be used when normal reset fails!${NC}"
echo ""
read -p "Type 'FORCE RESET' to confirm: " CONFIRM
if [ "$CONFIRM" != "FORCE RESET" ]; then
echo "Operation cancelled."
exit 0
fi
echo ""
echo "1. Creating full backup before force reset..."
# Create the backup directory
BACKUP_DIR="/devdata/apps/bridge-db-monitoring/backup"
mkdir -p $BACKUP_DIR
# Backup file name
BACKUP_FILE="$BACKUP_DIR/batch_metadata_FORCE_backup_$(date +%Y%m%d_%H%M%S).sql"
# Back up all batch metadata (including schema)
pg_dump -h $DB_HOST -p $DB_PORT -U $DB_USER -d $DB_NAME \
--schema=$DB_SCHEMA \
--table="batch_*" \
--file=$BACKUP_FILE 2>/dev/null
if [ $? -eq 0 ]; then
echo -e "${GREEN}✓ Full backup created: $BACKUP_FILE${NC}"
else
echo -e "${YELLOW}⚠ Backup may have failed, but continuing...${NC}"
fi
echo ""
echo "2. Stopping application if running..."
# Check the PID
if [ -f "/devdata/apps/bridge-db-monitoring/vessel-batch.pid" ]; then
PID=$(cat /devdata/apps/bridge-db-monitoring/vessel-batch.pid)
if kill -0 $PID 2>/dev/null; then
echo " Stopping application (PID: $PID)..."
kill -15 $PID
sleep 5
if kill -0 $PID 2>/dev/null; then
echo " Force killing application..."
kill -9 $PID
fi
fi
fi
echo ""
echo "3. FORCE resetting batch metadata tables..."
# Force reset using CASCADE
psql -h $DB_HOST -p $DB_PORT -U $DB_USER -d $DB_NAME << EOF
-- Begin transaction
BEGIN;
-- Temporarily disable foreign key constraints
SET session_replication_role = 'replica';
-- Force-truncate all batch tables
TRUNCATE TABLE $DB_SCHEMA.batch_step_execution_context CASCADE;
TRUNCATE TABLE $DB_SCHEMA.batch_step_execution CASCADE;
TRUNCATE TABLE $DB_SCHEMA.batch_job_execution_context CASCADE;
TRUNCATE TABLE $DB_SCHEMA.batch_job_execution_params CASCADE;
TRUNCATE TABLE $DB_SCHEMA.batch_job_execution CASCADE;
TRUNCATE TABLE $DB_SCHEMA.batch_job_instance CASCADE;
-- Force-reset the sequences
ALTER SEQUENCE IF EXISTS $DB_SCHEMA.batch_job_execution_seq RESTART WITH 1;
ALTER SEQUENCE IF EXISTS $DB_SCHEMA.batch_job_seq RESTART WITH 1;
ALTER SEQUENCE IF EXISTS $DB_SCHEMA.batch_step_execution_seq RESTART WITH 1;
-- Re-enable foreign key constraints
SET session_replication_role = 'origin';
-- Commit
COMMIT;
-- Update statistics
ANALYZE $DB_SCHEMA.batch_job_instance;
ANALYZE $DB_SCHEMA.batch_job_execution;
ANALYZE $DB_SCHEMA.batch_job_execution_params;
ANALYZE $DB_SCHEMA.batch_job_execution_context;
ANALYZE $DB_SCHEMA.batch_step_execution;
ANALYZE $DB_SCHEMA.batch_step_execution_context;
EOF
if [ $? -eq 0 ]; then
echo -e "${GREEN}✓ Batch metadata tables FORCE reset successfully${NC}"
else
echo -e "${RED}✗ Force reset encountered errors, but may have partially succeeded${NC}"
fi
echo ""
echo "4. Verifying force reset..."
# Check each table individually
for table in batch_job_instance batch_job_execution batch_job_execution_params batch_job_execution_context batch_step_execution batch_step_execution_context; do
COUNT=$(psql -h $DB_HOST -p $DB_PORT -U $DB_USER -d $DB_NAME -t -c "
SELECT COUNT(*) FROM $DB_SCHEMA.$table;" 2>/dev/null | xargs)
if [ -z "$COUNT" ]; then
COUNT="ERROR"
fi
if [ "$COUNT" = "0" ]; then
echo -e " ${GREEN}${NC} $table: $COUNT records"
elif [ "$COUNT" = "ERROR" ]; then
echo -e " ${RED}${NC} $table: Could not query"
else
echo -e " ${YELLOW}${NC} $table: $COUNT records remaining"
fi
done
echo ""
echo "5. Optional: Clear ALL aggregation data (complete fresh start)"
read -p "Do you want to clear ALL aggregation data too? (yes/no): " CLEAR_ALL
if [ "$CLEAR_ALL" = "yes" ]; then
echo ""
echo "Clearing ALL aggregation data..."
psql -h $DB_HOST -p $DB_PORT -U $DB_USER -d $DB_NAME << EOF
BEGIN;
-- Force-clear all aggregation data
SET session_replication_role = 'replica';
-- Latest position data
TRUNCATE TABLE signal.t_vessel_latest_position CASCADE;
-- Truncate all partition tables
DO \$\$
DECLARE
r RECORD;
BEGIN
FOR r IN
SELECT tablename
FROM pg_tables
WHERE schemaname = 'signal'
AND (tablename LIKE 't_tile_summary_%'
OR tablename LIKE 't_area_statistics_%'
OR tablename LIKE 't_vessel_daily_tracks_%')
LOOP
EXECUTE 'TRUNCATE TABLE signal.' || r.tablename || ' CASCADE';
RAISE NOTICE 'Truncated table: signal.%', r.tablename;
END LOOP;
END\$\$;
-- Batch performance metrics
TRUNCATE TABLE signal.t_batch_performance_metrics CASCADE;
SET session_replication_role = 'origin';
COMMIT;
EOF
echo -e "${GREEN}✓ All aggregation data cleared${NC}"
fi
echo ""
echo "================================================"
echo "FORCE Reset Complete!"
echo ""
echo -e "${YELLOW}IMPORTANT: The application needs to be restarted!${NC}"
echo ""
echo "Next steps:"
echo "1. Start the application:"
echo " cd /devdata/apps/bridge-db-monitoring"
echo " ./run-on-query-server.sh"
echo ""
echo "2. Verify health:"
echo " curl http://localhost:8090/actuator/health"
echo ""
echo "3. Start fresh batch job:"
echo " curl -X POST http://localhost:8090/admin/batch/job/run \\"
echo " -H 'Content-Type: application/json' \\"
echo " -d '{\"jobName\": \"vesselAggregationJob\", \"parameters\": {\"tileLevel\": 1}}'"
echo ""
echo "Full backup saved to: $BACKUP_FILE"
echo "================================================"
# Auto-start option
echo ""
read -p "Do you want to start the application now? (yes/no): " START_NOW
if [ "$START_NOW" = "yes" ]; then
echo "Starting application..."
cd /devdata/apps/bridge-db-monitoring
./run-on-query-server.sh
fi


@ -0,0 +1,59 @@
-- Script to install PostGIS into the signal schema
-- Run on the mpcdb2 database on server 10.29.17.90
-- Method 1: create the PostGIS extension in the signal schema (recommended)
-- If it is already installed in public, copy the functions into the signal schema instead
-- Check current PostGIS status
SELECT extname, extversion, nspname
FROM pg_extension e
JOIN pg_namespace n ON e.extnamespace = n.oid
WHERE extname LIKE 'post%';
-- Option 1: create PostGIS wrapper functions in the signal schema
-- (wrappers that call the functions in the public schema)
CREATE OR REPLACE FUNCTION signal.ST_GeomFromText(text)
RETURNS geometry
AS $$
SELECT public.ST_GeomFromText($1);
$$ LANGUAGE SQL IMMUTABLE STRICT PARALLEL SAFE;
CREATE OR REPLACE FUNCTION signal.ST_GeomFromText(text, integer)
RETURNS geometry
AS $$
SELECT public.ST_GeomFromText($1, $2);
$$ LANGUAGE SQL IMMUTABLE STRICT PARALLEL SAFE;
CREATE OR REPLACE FUNCTION signal.ST_Length(geometry)
RETURNS double precision
AS $$
SELECT public.ST_Length($1);
$$ LANGUAGE SQL IMMUTABLE STRICT PARALLEL SAFE;
CREATE OR REPLACE FUNCTION signal.ST_MakeLine(geometry[])
RETURNS geometry
AS $$
SELECT public.ST_MakeLine($1);
$$ LANGUAGE SQL IMMUTABLE STRICT PARALLEL SAFE;
-- Add other frequently used functions as well
CREATE OR REPLACE FUNCTION signal.ST_X(geometry)
RETURNS double precision
AS $$
SELECT public.ST_X($1);
$$ LANGUAGE SQL IMMUTABLE STRICT PARALLEL SAFE;
CREATE OR REPLACE FUNCTION signal.ST_Y(geometry)
RETURNS double precision
AS $$
SELECT public.ST_Y($1);
$$ LANGUAGE SQL IMMUTABLE STRICT PARALLEL SAFE;
CREATE OR REPLACE FUNCTION signal.ST_M(geometry)
RETURNS double precision
AS $$
SELECT public.ST_M($1);
$$ LANGUAGE SQL IMMUTABLE STRICT PARALLEL SAFE;
-- Verification
SELECT signal.ST_GeomFromText('POINT(126.0 37.0)', 4326);


@ -0,0 +1,85 @@
-- Query and analyze failed batch jobs
-- 1. List of failed jobs (latest 50)
SELECT
'=== FAILED JOBS (Recent 50) ===' as category,
bje.JOB_EXECUTION_ID,
bji.JOB_NAME,
bje.START_TIME,
bje.END_TIME,
bje.STATUS,
bje.EXIT_CODE,
LEFT(bje.EXIT_MESSAGE, 100) as EXIT_MESSAGE_SHORT,
-- Show job parameters
(SELECT string_agg(PARAMETER_NAME || '=' || PARAMETER_VALUE, ', ')
FROM BATCH_JOB_EXECUTION_PARAMS
WHERE JOB_EXECUTION_ID = bje.JOB_EXECUTION_ID
AND IDENTIFYING = 'Y') as PARAMETERS
FROM BATCH_JOB_EXECUTION bje
JOIN BATCH_JOB_INSTANCE bji ON bje.JOB_INSTANCE_ID = bji.JOB_INSTANCE_ID
WHERE bje.STATUS = 'FAILED'
ORDER BY bje.JOB_EXECUTION_ID DESC
LIMIT 50;
-- 2. Failed step details
SELECT
'=== FAILED STEPS ===' as category,
bse.STEP_EXECUTION_ID,
bse.JOB_EXECUTION_ID,
bji.JOB_NAME,
bse.STEP_NAME,
bse.STATUS,
bse.READ_COUNT,
bse.WRITE_COUNT,
bse.COMMIT_COUNT,
bse.ROLLBACK_COUNT,
bse.READ_SKIP_COUNT,
bse.PROCESS_SKIP_COUNT,
bse.WRITE_SKIP_COUNT,
LEFT(bse.EXIT_MESSAGE, 100) as EXIT_MESSAGE_SHORT
FROM BATCH_STEP_EXECUTION bse
JOIN BATCH_JOB_EXECUTION bje ON bse.JOB_EXECUTION_ID = bje.JOB_EXECUTION_ID
JOIN BATCH_JOB_INSTANCE bji ON bje.JOB_INSTANCE_ID = bji.JOB_INSTANCE_ID
WHERE bse.STATUS = 'FAILED'
ORDER BY bse.STEP_EXECUTION_ID DESC
LIMIT 50;
-- 3. Failure statistics by job
SELECT
'=== FAILURE STATISTICS BY JOB ===' as category,
bji.JOB_NAME,
COUNT(*) as FAILED_COUNT,
MAX(bje.END_TIME) as LAST_FAILURE_TIME
FROM BATCH_JOB_EXECUTION bje
JOIN BATCH_JOB_INSTANCE bji ON bje.JOB_INSTANCE_ID = bji.JOB_INSTANCE_ID
WHERE bje.STATUS = 'FAILED'
GROUP BY bji.JOB_NAME
ORDER BY FAILED_COUNT DESC;
-- 4. Failure statistics by step
SELECT
'=== FAILURE STATISTICS BY STEP ===' as category,
STEP_NAME,
COUNT(*) as FAILED_COUNT,
MAX(END_TIME) as LAST_FAILURE_TIME
FROM BATCH_STEP_EXECUTION
WHERE STATUS = 'FAILED'
GROUP BY STEP_NAME
ORDER BY FAILED_COUNT DESC;
-- 5. Failures in the last 24 hours
SELECT
'=== LAST 24 HOURS ===' as category,
COUNT(*) as FAILED_JOBS_24H
FROM BATCH_JOB_EXECUTION
WHERE STATUS = 'FAILED'
AND START_TIME >= CURRENT_TIMESTAMP - INTERVAL '24 hours';
-- 6. Overall status summary
SELECT
'=== STATUS SUMMARY ===' as category,
STATUS,
COUNT(*) as COUNT
FROM BATCH_JOB_EXECUTION
GROUP BY STATUS
ORDER BY COUNT DESC;


@ -0,0 +1,75 @@
-- Mark failed batch jobs and steps as ABANDONED
-- Caution: this script forcibly finalizes failed jobs.
-- Do not run it if the failed jobs need to be retried.
-- 1. Check the current failure status
SELECT
'=== BEFORE UPDATE ===' as status,
COUNT(*) as failed_jobs
FROM BATCH_JOB_EXECUTION
WHERE STATUS = 'FAILED';
SELECT
'=== BEFORE UPDATE ===' as status,
COUNT(*) as failed_steps
FROM BATCH_STEP_EXECUTION
WHERE STATUS = 'FAILED';
-- 2. Mark failed steps as ABANDONED
UPDATE BATCH_STEP_EXECUTION
SET
STATUS = 'ABANDONED',
EXIT_CODE = 'ABANDONED',
EXIT_MESSAGE = 'Manually marked as ABANDONED - Original status: FAILED',
END_TIME = COALESCE(END_TIME, CURRENT_TIMESTAMP),
LAST_UPDATED = CURRENT_TIMESTAMP
WHERE STATUS = 'FAILED';
-- 3. Mark failed jobs as ABANDONED
UPDATE BATCH_JOB_EXECUTION
SET
STATUS = 'ABANDONED',
EXIT_CODE = 'ABANDONED',
EXIT_MESSAGE = 'Manually marked as ABANDONED - Original status: FAILED',
END_TIME = COALESCE(END_TIME, CURRENT_TIMESTAMP),
LAST_UPDATED = CURRENT_TIMESTAMP
WHERE STATUS = 'FAILED';
-- 4. Check the status after the update
SELECT
'=== AFTER UPDATE ===' as status,
COUNT(*) as failed_jobs
FROM BATCH_JOB_EXECUTION
WHERE STATUS = 'FAILED';
SELECT
'=== AFTER UPDATE ===' as status,
COUNT(*) as failed_steps
FROM BATCH_STEP_EXECUTION
WHERE STATUS = 'FAILED';
SELECT
'=== ABANDONED COUNT ===' as status,
COUNT(*) as abandoned_jobs
FROM BATCH_JOB_EXECUTION
WHERE STATUS = 'ABANDONED';
SELECT
'=== ABANDONED COUNT ===' as status,
COUNT(*) as abandoned_steps
FROM BATCH_STEP_EXECUTION
WHERE STATUS = 'ABANDONED';
-- 5. List recently ABANDONED jobs
SELECT
JOB_EXECUTION_ID,
JOB_INSTANCE_ID,
START_TIME,
END_TIME,
STATUS,
EXIT_CODE,
EXIT_MESSAGE
FROM BATCH_JOB_EXECUTION
WHERE STATUS = 'ABANDONED'
ORDER BY JOB_EXECUTION_ID DESC
LIMIT 10;


@ -0,0 +1,75 @@
-- Mark a specific JOB_EXECUTION_ID as ABANDONED
-- Usage: replace :job_execution_id with the actual ID, then run
-- Variable setup (uses psql variables in PostgreSQL)
-- psql -v job_execution_id=12345 -f mark-specific-job-as-abandoned.sql
-- Or replace :job_execution_id below with a literal number
-- 1. Check the job's status
SELECT
'=== BEFORE UPDATE ===' as status,
JOB_EXECUTION_ID,
JOB_INSTANCE_ID,
START_TIME,
END_TIME,
STATUS,
EXIT_CODE,
EXIT_MESSAGE
FROM BATCH_JOB_EXECUTION
WHERE JOB_EXECUTION_ID = :job_execution_id;
-- 2. Check the statuses of the job's steps
SELECT
'=== STEPS BEFORE UPDATE ===' as status,
STEP_EXECUTION_ID,
STEP_NAME,
STATUS,
EXIT_CODE
FROM BATCH_STEP_EXECUTION
WHERE JOB_EXECUTION_ID = :job_execution_id
ORDER BY STEP_EXECUTION_ID;
-- 3. Mark the steps as ABANDONED
UPDATE BATCH_STEP_EXECUTION
SET
STATUS = 'ABANDONED',
EXIT_CODE = 'ABANDONED',
EXIT_MESSAGE = 'Manually marked as ABANDONED - Original status: ' || STATUS,
END_TIME = COALESCE(END_TIME, CURRENT_TIMESTAMP),
LAST_UPDATED = CURRENT_TIMESTAMP
WHERE JOB_EXECUTION_ID = :job_execution_id
AND STATUS IN ('FAILED', 'STARTED', 'STOPPING');
-- 4. Mark the job as ABANDONED
UPDATE BATCH_JOB_EXECUTION
SET
STATUS = 'ABANDONED',
EXIT_CODE = 'ABANDONED',
EXIT_MESSAGE = 'Manually marked as ABANDONED - Original status: ' || STATUS,
END_TIME = COALESCE(END_TIME, CURRENT_TIMESTAMP),
LAST_UPDATED = CURRENT_TIMESTAMP
WHERE JOB_EXECUTION_ID = :job_execution_id
AND STATUS IN ('FAILED', 'STARTED', 'STOPPING');
-- 5. Verify the update
SELECT
'=== AFTER UPDATE ===' as status,
JOB_EXECUTION_ID,
JOB_INSTANCE_ID,
START_TIME,
END_TIME,
STATUS,
EXIT_CODE,
EXIT_MESSAGE
FROM BATCH_JOB_EXECUTION
WHERE JOB_EXECUTION_ID = :job_execution_id;
SELECT
'=== STEPS AFTER UPDATE ===' as status,
STEP_EXECUTION_ID,
STEP_NAME,
STATUS,
EXIT_CODE
FROM BATCH_STEP_EXECUTION
WHERE JOB_EXECUTION_ID = :job_execution_id
ORDER BY STEP_EXECUTION_ID;


@ -0,0 +1,212 @@
#!/bin/bash
# Query DB server resource monitoring script
# Monitors resource contention between PostgreSQL and the batch application
# Application paths
APP_HOME="/devdata/apps/bridge-db-monitoring"
LOG_DIR="$APP_HOME/logs"
mkdir -p $LOG_DIR
# Java path (for the jstat command)
JAVA_HOME="/devdata/apps/jdk-17.0.8"
JSTAT="$JAVA_HOME/bin/jstat"
# Color codes
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Create the CSV header (on first run)
if [ ! -f "$LOG_DIR/resource-monitor.csv" ]; then
echo "timestamp,pg_cpu,java_cpu,delay_minutes,throughput,collect_connections" > $LOG_DIR/resource-monitor.csv
fi
while true; do
clear
echo "========================================="
echo "Vessel Batch Resource Monitor"
echo "Time: $(date)"
echo "App Home: $APP_HOME"
echo "========================================="
# Read the process ID from the PID file
if [ -f "$APP_HOME/vessel-batch.pid" ]; then
JAVA_PID=$(cat $APP_HOME/vessel-batch.pid)
else
JAVA_PID=$(pgrep -f "vessel-batch-aggregation.jar")
fi
# 1. CPU usage
echo -e "\n${GREEN}[CPU Usage]${NC}"
# PostgreSQL CPU usage
PG_CPU=$(ps aux | grep postgres | grep -v grep | awk '{sum+=$3} END {printf "%.1f", sum}' || echo "0")
if [ -z "$PG_CPU" ]; then PG_CPU="0"; fi
echo "PostgreSQL Total: ${PG_CPU}%"
# Java batch application CPU usage
if [ ! -z "$JAVA_PID" ] && kill -0 $JAVA_PID 2>/dev/null; then
JAVA_CPU=$(ps aux | grep $JAVA_PID | grep -v grep | awk '{printf "%.1f", $3}' || echo "0")
if [ -z "$JAVA_CPU" ]; then JAVA_CPU="0"; fi
echo "Batch Application: ${JAVA_CPU}% (PID: $JAVA_PID)"
else
JAVA_CPU="0.0"
echo "Batch Application: Not Running"
fi
# Top 5 PostgreSQL processes
echo -e "\nTop PostgreSQL Processes:"
ps aux | grep postgres | grep -v grep | sort -k3 -nr | head -5 | awk '{printf " %-8s %5s%% %s\n", $2, $3, $11}'
# 2. Memory usage
echo -e "\n${GREEN}[Memory Usage]${NC}"
free -h | grep -E "Mem|Swap"
# PostgreSQL shared memory
PG_SHARED=$(ipcs -m 2>/dev/null | grep postgres | awk '{sum+=$5} END {printf "%.1f", sum/1024/1024/1024}')
if [ ! -z "$PG_SHARED" ]; then
echo "PostgreSQL Shared Memory: ${PG_SHARED}GB"
fi
# Java heap usage
if [ ! -z "$JAVA_PID" ] && kill -0 $JAVA_PID 2>/dev/null; then
if [ -x "$JSTAT" ]; then
JAVA_HEAP=$($JSTAT -gc $JAVA_PID 2>/dev/null | tail -1 | awk '{printf "%.1f", ($3+$4+$6+$8)/1024}')
if [ ! -z "$JAVA_HEAP" ]; then
echo "Java Heap Used: ${JAVA_HEAP}MB"
fi
fi
fi
# 3. Disk I/O
echo -e "\n${GREEN}[Disk I/O]${NC}"
iostat -x 1 2 2>/dev/null | grep -A5 "Device" | tail -n +7 | head -5
# 4. PostgreSQL connection status
echo -e "\n${GREEN}[Database Connections]${NC}"
# psql may not be on PATH, so try common full paths
if command -v psql >/dev/null 2>&1; then
PSQL_CMD="psql"
else
# Common PostgreSQL install paths
for path in /usr/pgsql-*/bin/psql /usr/bin/psql /usr/local/bin/psql; do
if [ -x "$path" ]; then
PSQL_CMD="$path"
break
fi
done
fi
if [ ! -z "$PSQL_CMD" ]; then
$PSQL_CMD -h localhost -U mda -d mdadb -c "
SELECT
application_name,
client_addr,
COUNT(*) as connections,
string_agg(DISTINCT state, ', ') as states
FROM pg_stat_activity
WHERE datname = 'mdadb'
GROUP BY application_name, client_addr
ORDER BY connections DESC
LIMIT 10;" 2>/dev/null || echo "Unable to query database connections"
else
echo "psql command not found"
fi
# 5. Batch processing status
echo -e "\n${GREEN}[Batch Processing Status]${NC}"
if [ ! -z "$PSQL_CMD" ]; then
# Check processing delay
DELAY=$($PSQL_CMD -h localhost -U mda -d mdadb -t -c "
SELECT COALESCE(EXTRACT(EPOCH FROM (NOW() - MAX(last_update))) / 60, 0)::numeric(10,1)
FROM signal.t_vessel_latest_position;" 2>/dev/null | xargs)
if [ ! -z "$DELAY" ] && [ "$DELAY" != "" ]; then
if [ $(echo "$DELAY > 120" | bc 2>/dev/null || echo 0) -eq 1 ]; then
echo -e "${RED}Processing Delay: ${DELAY} minutes ⚠️${NC}"
elif [ $(echo "$DELAY > 60" | bc 2>/dev/null || echo 0) -eq 1 ]; then
echo -e "${YELLOW}Processing Delay: ${DELAY} minutes ⚠️${NC}"
else
echo -e "${GREEN}Processing Delay: ${DELAY} minutes ✓${NC}"
fi
else
DELAY="0"
echo "Processing Delay: Unable to determine"
fi
# Recent throughput
THROUGHPUT=$($PSQL_CMD -h localhost -U mda -d mdadb -t -c "
SELECT COALESCE(COUNT(*), 0)
FROM signal.t_vessel_latest_position
WHERE last_update > NOW() - INTERVAL '1 minute';" 2>/dev/null | xargs)
if [ ! -z "$THROUGHPUT" ]; then
echo "Throughput: ${THROUGHPUT} vessels/minute"
else
THROUGHPUT="0"
echo "Throughput: Unable to determine"
fi
else
DELAY="0"
THROUGHPUT="0"
echo "Database metrics unavailable (psql not found)"
fi
# 6. Network connections (collect DB)
echo -e "\n${GREEN}[Network to Collect DB]${NC}"
COLLECT_CONN=$(ss -tunp 2>/dev/null | grep :5432 | grep 10.26.252.39 | wc -l)
echo "Active connections to collect DB: ${COLLECT_CONN}"
# Network statistics
if [ "$COLLECT_CONN" -gt 0 ]; then
ss -i dst 10.26.252.39:5432 2>/dev/null | grep -E "rtt|cwnd" | head -3
fi
# 7. Recent errors in the application log
echo -e "\n${GREEN}[Recent Application Errors]${NC}"
if [ -f "$LOG_DIR/app.log" ]; then
ERROR_COUNT=$(grep -c "ERROR" $LOG_DIR/app.log 2>/dev/null || echo 0)
echo "Total Errors in Log: $ERROR_COUNT"
# Show the 5 most recent errors
if [ "$ERROR_COUNT" -gt 0 ]; then
echo "Recent Errors:"
grep "ERROR" $LOG_DIR/app.log | tail -5 | cut -c1-120
fi
else
echo "Log file not found at $LOG_DIR/app.log"
fi
# 8. Warnings
echo -e "\n${YELLOW}[Warnings]${NC}"
# CPU warning
TOTAL_CPU=$(echo "$PG_CPU + $JAVA_CPU" | bc 2>/dev/null || echo "0")
if [ ! -z "$TOTAL_CPU" ] && [ "$TOTAL_CPU" != "0" ]; then
if [ $(echo "$TOTAL_CPU > 80" | bc 2>/dev/null || echo 0) -eq 1 ]; then
echo -e "${RED}⚠ High CPU usage: ${TOTAL_CPU}%${NC}"
fi
fi
# Memory warning
MEM_AVAILABLE=$(free -g | grep Mem | awk '{print $7}')
if [ ! -z "$MEM_AVAILABLE" ] && [ "$MEM_AVAILABLE" -lt 10 ]; then
echo -e "${RED}⚠ Low available memory: ${MEM_AVAILABLE}GB${NC}"
fi
# Processing delay warning
if [ ! -z "$DELAY" ] && [ "$DELAY" != "0" ]; then
if [ $(echo "$DELAY > 120" | bc 2>/dev/null || echo 0) -eq 1 ]; then
echo -e "${RED}⚠ Processing delay exceeds 2 hours!${NC}"
fi
fi
# Append to the CSV log
echo "$(date '+%Y-%m-%d %H:%M:%S'),${PG_CPU},${JAVA_CPU},${DELAY},${THROUGHPUT},${COLLECT_CONN}" >> $LOG_DIR/resource-monitor.csv
# Wait until the next update
echo -e "\n${GREEN}Next update in 30 seconds... (Ctrl+C to exit)${NC}"
sleep 30
done
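The delay warnings in the script shell out to `bc` to compare against the 60- and 120-minute thresholds. The same classification in Python, as a reference sketch (function and label names are illustrative, not from the script):

```python
def classify_delay(delay_minutes: float) -> str:
    # Same thresholds as the monitor script: >120 min critical, >60 min warning
    if delay_minutes > 120:
        return "RED"
    if delay_minutes > 60:
        return "YELLOW"
    return "GREEN"

print([classify_delay(m) for m in (15, 90, 180)])  # ['GREEN', 'YELLOW', 'RED']
```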

scripts/monitor-realtime.sh (new file, 154 lines added)

@ -0,0 +1,154 @@
#!/bin/bash
# Real-time system monitoring script
# Monitors system state in real time during load testing
# Color definitions
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Application info
APP_HOST="10.26.252.48"
APP_PORT="8090"
DB_HOST_COLLECT="10.26.252.39"
DB_HOST_QUERY="10.26.252.48"
DB_PORT="5432"
DB_NAME="mdadb"
DB_USER="mdauser"
# Clear the screen
clear_screen() {
clear
}
# Print the header
print_header() {
echo -e "${BLUE}========================================${NC}"
echo -e "${BLUE}  Vessel Track System Real-time Monitor  ${NC}"
echo -e "${BLUE}========================================${NC}"
echo -e "Time: $(date '+%Y-%m-%d %H:%M:%S')"
echo ""
}
# Check application status
check_app_status() {
echo -e "${GREEN}[Application Status]${NC}"
# Health check
health=$(curl -s "http://$APP_HOST:$APP_PORT/actuator/health" | jq -r '.status' 2>/dev/null || echo "UNKNOWN")
if [ "$health" == "UP" ]; then
echo -e "상태: ${GREEN}$health${NC}"
else
echo -e "상태: ${RED}$health${NC}"
fi
# Running jobs
running_jobs=$(curl -s "http://$APP_HOST:$APP_PORT/admin/batch/job/running" | jq -r '.[]' 2>/dev/null || echo "N/A")
echo -e "Running jobs: $running_jobs"
# Metrics summary
metrics=$(curl -s "http://$APP_HOST:$APP_PORT/admin/metrics/summary" 2>/dev/null)
if [ ! -z "$metrics" ]; then
echo -e "처리된 레코드: $(echo $metrics | jq -r '.processedRecords // "N/A"')"
echo -e "평균 처리 시간: $(echo $metrics | jq -r '.avgProcessingTime // "N/A"')ms"
fi
echo ""
}
# 시스템 리소스 모니터링
check_system_resources() {
echo -e "${GREEN}[시스템 리소스]${NC}"
# CPU 사용률
cpu_usage=$(top -bn1 | grep "Cpu(s)" | awk '{print $2}' | cut -d'%' -f1)
echo -e "CPU 사용률: ${cpu_usage}%"
# 메모리 사용률
mem_info=$(free -g | grep "Mem:")
mem_total=$(echo $mem_info | awk '{print $2}')
mem_used=$(echo $mem_info | awk '{print $3}')
mem_percent=$(awk "BEGIN {printf \"%.1f\", ($mem_used/$mem_total)*100}")
echo -e "메모리: ${mem_used}GB / ${mem_total}GB (${mem_percent}%)"
# 디스크 사용률
disk_usage=$(df -h / | tail -1 | awk '{print $5}')
echo -e "디스크 사용률: $disk_usage"
echo ""
}
# 데이터베이스 연결 모니터링
check_db_connections() {
echo -e "${GREEN}[데이터베이스 연결]${NC}"
# CollectDB 연결
collect_conn=$(PGPASSWORD=$DB_PASS psql -h $DB_HOST_COLLECT -U $DB_USER -d $DB_NAME -t -c "SELECT count(*) FROM pg_stat_activity WHERE datname='$DB_NAME';" 2>/dev/null || echo "N/A")
echo -e "CollectDB 연결: $collect_conn"
# QueryDB 연결
query_conn=$(PGPASSWORD=$DB_PASS psql -h $DB_HOST_QUERY -U $DB_USER -d $DB_NAME -t -c "SELECT count(*) FROM pg_stat_activity WHERE datname='$DB_NAME';" 2>/dev/null || echo "N/A")
echo -e "QueryDB 연결: $query_conn"
echo ""
}
# WebSocket 연결 모니터링
check_websocket_status() {
echo -e "${GREEN}[WebSocket 상태]${NC}"
ws_status=$(curl -s "http://$APP_HOST:$APP_PORT/api/websocket/status" 2>/dev/null)
if [ ! -z "$ws_status" ]; then
echo -e "활성 세션: $(echo $ws_status | jq -r '.activeSessions // "N/A"')"
echo -e "활성 쿼리: $(echo $ws_status | jq -r '.activeQueries // "N/A"')"
echo -e "처리된 메시지: $(echo $ws_status | jq -r '.totalMessagesProcessed // "N/A"')"
else
echo -e "WebSocket 상태를 가져올 수 없습니다."
fi
echo ""
}
# 성능 최적화 상태
check_performance_status() {
echo -e "${GREEN}[성능 최적화 상태]${NC}"
perf_status=$(curl -s "http://$APP_HOST:$APP_PORT/api/v1/performance/status" 2>/dev/null)
if [ ! -z "$perf_status" ]; then
echo -e "동적 청크 크기: $(echo $perf_status | jq -r '.currentChunkSize // "N/A"')"
echo -e "캐시 히트율: $(echo $perf_status | jq -r '.cacheHitRate // "N/A"')%"
echo -e "메모리 사용률: $(echo $perf_status | jq -r '.memoryUsage.usedPercentage // "N/A"')%"
else
echo -e "성능 상태를 가져올 수 없습니다."
fi
echo ""
}
# 실시간 로그 tail (별도 터미널에서 실행)
tail_logs() {
echo -e "${GREEN}[최근 로그]${NC}"
echo "애플리케이션 로그는 별도 터미널에서 확인하세요:"
echo "tail -f /path/to/application.log"
echo ""
}
# 메인 루프
main() {
while true; do
clear_screen
print_header
check_app_status
check_system_resources
check_db_connections
check_websocket_status
check_performance_status
echo -e "${YELLOW}5초 후 갱신... (Ctrl+C로 종료)${NC}"
sleep 5
done
}
# 트랩 설정
trap 'echo -e "\n${RED}모니터링 종료${NC}"; exit 0' INT TERM
# 실행
main
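The jq `'.field // "N/A"'` fallback used throughout the status checks can be mirrored when post-processing the same actuator payloads in Python. A minimal sketch; the `metric` helper is hypothetical:

```python
import json

def metric(payload, *path, default="N/A"):
    # Walk nested keys, returning the default for missing keys or null
    # values instead of raising -- same behavior as jq's `// "N/A"`.
    node = json.loads(payload) if isinstance(payload, str) else payload
    for key in path:
        if not isinstance(node, dict) or node.get(key) is None:
            return default
        node = node[key]
    return node

status = '{"memoryUsage": {"usedPercentage": 61.5}, "cacheHitRate": null}'
print(metric(status, "memoryUsage", "usedPercentage"))  # → 61.5
print(metric(status, "cacheHitRate"))                   # → N/A
print(metric(status, "currentChunkSize"))               # → N/A
```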


@ -0,0 +1,50 @@
-- 빠른 Invalid Geometry 확인
-- 1. t_vessel_tracks_5min에 실제로 invalid geometry가 있는가?
SELECT
'5min table - invalid count' as check_type,
COUNT(*) as invalid_count
FROM signal.t_vessel_tracks_5min
WHERE track_geom IS NOT NULL
AND NOT public.ST_IsValid(track_geom);
-- 2. 어떤 invalid 이유인가?
SELECT
'5min table - invalid reasons' as check_type,
public.ST_IsValidReason(track_geom) as reason,
COUNT(*) as count
FROM signal.t_vessel_tracks_5min
WHERE track_geom IS NOT NULL
AND NOT public.ST_IsValid(track_geom)
GROUP BY public.ST_IsValidReason(track_geom);
-- 3. 실제 invalid 샘플 확인
SELECT
'5min table - invalid samples' as check_type,
sig_src_cd,
target_id,
time_bucket,
public.ST_NPoints(track_geom) as point_count,
public.ST_AsText(track_geom) as wkt,
public.ST_IsValidReason(track_geom) as reason
FROM signal.t_vessel_tracks_5min
WHERE track_geom IS NOT NULL
AND NOT public.ST_IsValid(track_geom)
LIMIT 5;
-- 4. 에러 발생한 선박 확인 (vessel 000001_###0000072)
SELECT
'Problem vessel check' as check_type,
sig_src_cd,
target_id,
time_bucket,
public.ST_NPoints(track_geom) as point_count,
public.ST_IsValid(track_geom) as is_valid,
public.ST_IsValidReason(track_geom) as reason,
public.ST_AsText(track_geom) as wkt
FROM signal.t_vessel_tracks_5min
WHERE sig_src_cd = '000001'
AND target_id LIKE '%0000072'
AND time_bucket >= CURRENT_TIMESTAMP - INTERVAL '1 day'
ORDER BY time_bucket DESC
LIMIT 10;
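The hourly merge (and check #6's `regex_fail_count` below in the companion script) hinges on extracting the coordinate list from the WKT with the pattern `M \((.+)\)`. A Python sketch of the same extraction shows when it comes back NULL; the exact spacing of PostGIS `ST_AsText` output is an assumption here:

```python
import re

# Same extraction the SQL substring() performs: pull the coordinate body
# out of a measured-linestring WKT. Returns None when there is no
# "M (...)" section, which is what gets counted as a regex failure.
COORDS = re.compile(r"M \((.+)\)")

def extract_coords(wkt):
    m = COORDS.search(wkt)
    return m.group(1) if m else None

print(extract_coords("LINESTRING M (126.1 35.2 1700000000,126.2 35.3 1700000300)"))
# → 126.1 35.2 1700000000,126.2 35.3 1700000300
print(extract_coords("POINT (126.1 35.2)"))  # → None
```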


@ -0,0 +1,269 @@
-- ========================================
-- 실제 데이터로 즉시 테스트 (변수 없음)
-- 최근 데이터 자동 선택
-- ========================================
-- 1. 최근 1시간 내 데이터가 있는 선박 자동 선택
WITH recent_vessel AS (
SELECT
sig_src_cd,
target_id,
DATE_TRUNC('hour', MIN(time_bucket)) as hour_bucket
FROM signal.t_vessel_tracks_5min
WHERE time_bucket >= CURRENT_TIMESTAMP - INTERVAL '24 hours'
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
GROUP BY sig_src_cd, target_id, DATE_TRUNC('hour', time_bucket)
HAVING COUNT(*) >= 2
ORDER BY DATE_TRUNC('hour', MIN(time_bucket)) DESC
LIMIT 1
)
SELECT
'=== AUTO SELECTED VESSEL ===' as section,
sig_src_cd,
target_id,
hour_bucket,
hour_bucket + INTERVAL '1 hour' as hour_end
FROM recent_vessel;
-- 2. 선택된 선박의 5분 데이터 확인
WITH recent_vessel AS (
SELECT
sig_src_cd,
target_id,
DATE_TRUNC('hour', MIN(time_bucket)) as hour_bucket
FROM signal.t_vessel_tracks_5min
WHERE time_bucket >= CURRENT_TIMESTAMP - INTERVAL '24 hours'
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
GROUP BY sig_src_cd, target_id, DATE_TRUNC('hour', time_bucket)
HAVING COUNT(*) >= 2
ORDER BY DATE_TRUNC('hour', MIN(time_bucket)) DESC
LIMIT 1
)
SELECT
'=== 5MIN DATA ===' as section,
t.sig_src_cd,
t.target_id,
t.time_bucket,
public.ST_NPoints(t.track_geom) as points,
public.ST_IsValid(t.track_geom) as is_valid,
LENGTH(public.ST_AsText(t.track_geom)) as wkt_length,
substring(public.ST_AsText(t.track_geom) from 'M \\((.+)\\)') as extracted_coords
FROM signal.t_vessel_tracks_5min t
INNER JOIN recent_vessel rv ON t.sig_src_cd = rv.sig_src_cd AND t.target_id = rv.target_id
WHERE t.time_bucket >= rv.hour_bucket
AND t.time_bucket < rv.hour_bucket + INTERVAL '1 hour'
AND t.track_geom IS NOT NULL
AND public.ST_NPoints(t.track_geom) > 0
ORDER BY t.time_bucket;
-- 3. string_agg 테스트
WITH recent_vessel AS (
SELECT
sig_src_cd,
target_id,
DATE_TRUNC('hour', MIN(time_bucket)) as hour_bucket
FROM signal.t_vessel_tracks_5min
WHERE time_bucket >= CURRENT_TIMESTAMP - INTERVAL '24 hours'
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
GROUP BY sig_src_cd, target_id, DATE_TRUNC('hour', time_bucket)
HAVING COUNT(*) >= 2
ORDER BY DATE_TRUNC('hour', MIN(time_bucket)) DESC
LIMIT 1
)
SELECT
'=== STRING_AGG RESULT ===' as section,
t.sig_src_cd,
t.target_id,
string_agg(
substring(public.ST_AsText(t.track_geom) from 'M \\((.+)\\)'),
','
ORDER BY t.time_bucket
) FILTER (WHERE t.track_geom IS NOT NULL) as all_coords,
COUNT(*) as track_count,
LENGTH(string_agg(
substring(public.ST_AsText(t.track_geom) from 'M \\((.+)\\)'),
','
ORDER BY t.time_bucket
) FILTER (WHERE t.track_geom IS NOT NULL)) as coords_total_length
FROM signal.t_vessel_tracks_5min t
INNER JOIN recent_vessel rv ON t.sig_src_cd = rv.sig_src_cd AND t.target_id = rv.target_id
WHERE t.time_bucket >= rv.hour_bucket
AND t.time_bucket < rv.hour_bucket + INTERVAL '1 hour'
AND t.track_geom IS NOT NULL
AND public.ST_NPoints(t.track_geom) > 0
GROUP BY t.sig_src_cd, t.target_id;
-- 4. Geometry 생성 테스트
WITH recent_vessel AS (
SELECT
sig_src_cd,
target_id,
DATE_TRUNC('hour', MIN(time_bucket)) as hour_bucket
FROM signal.t_vessel_tracks_5min
WHERE time_bucket >= CURRENT_TIMESTAMP - INTERVAL '24 hours'
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
GROUP BY sig_src_cd, target_id, DATE_TRUNC('hour', time_bucket)
HAVING COUNT(*) >= 2
ORDER BY DATE_TRUNC('hour', MIN(time_bucket)) DESC
LIMIT 1
),
merged_coords AS (
SELECT
t.sig_src_cd,
t.target_id,
string_agg(
substring(public.ST_AsText(t.track_geom) from 'M \\((.+)\\)'),
','
ORDER BY t.time_bucket
) FILTER (WHERE t.track_geom IS NOT NULL) as all_coords
FROM signal.t_vessel_tracks_5min t
INNER JOIN recent_vessel rv ON t.sig_src_cd = rv.sig_src_cd AND t.target_id = rv.target_id
WHERE t.time_bucket >= rv.hour_bucket
AND t.time_bucket < rv.hour_bucket + INTERVAL '1 hour'
AND t.track_geom IS NOT NULL
AND public.ST_NPoints(t.track_geom) > 0
GROUP BY t.sig_src_cd, t.target_id
)
SELECT
'=== GEOMETRY CREATION TEST ===' as section,
sig_src_cd,
target_id,
all_coords IS NOT NULL as has_coords,
LENGTH(all_coords) as coords_length,
public.ST_GeomFromText('LINESTRING M(' || all_coords || ')', 4326) as merged_geom,
public.ST_NPoints(public.ST_GeomFromText('LINESTRING M(' || all_coords || ')', 4326)) as merged_points,
public.ST_IsValid(public.ST_GeomFromText('LINESTRING M(' || all_coords || ')', 4326)) as is_valid
FROM merged_coords;
-- 5. 전체 집계 쿼리 실행 (실제 HourlyTrackProcessor와 동일)
WITH recent_vessel AS (
SELECT
sig_src_cd,
target_id,
DATE_TRUNC('hour', MIN(time_bucket)) as hour_bucket
FROM signal.t_vessel_tracks_5min
WHERE time_bucket >= CURRENT_TIMESTAMP - INTERVAL '24 hours'
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
GROUP BY sig_src_cd, target_id, DATE_TRUNC('hour', time_bucket)
HAVING COUNT(*) >= 2
ORDER BY DATE_TRUNC('hour', MIN(time_bucket)) DESC
LIMIT 1
),
ordered_tracks AS (
SELECT t.*
FROM signal.t_vessel_tracks_5min t
INNER JOIN recent_vessel rv ON t.sig_src_cd = rv.sig_src_cd AND t.target_id = rv.target_id
WHERE t.time_bucket >= rv.hour_bucket
AND t.time_bucket < rv.hour_bucket + INTERVAL '1 hour'
AND t.track_geom IS NOT NULL
AND public.ST_NPoints(t.track_geom) > 0
ORDER BY t.time_bucket
),
merged_coords AS (
SELECT
sig_src_cd,
target_id,
string_agg(
substring(public.ST_AsText(track_geom) from 'M \\((.+)\\)'),
','
ORDER BY time_bucket
) FILTER (WHERE track_geom IS NOT NULL) as all_coords
FROM ordered_tracks
GROUP BY sig_src_cd, target_id
),
merged_tracks AS (
SELECT
mc.sig_src_cd,
mc.target_id,
rv.hour_bucket as time_bucket,
public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom,
(SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
(SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
(SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
(SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
(SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
(SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
FROM merged_coords mc
CROSS JOIN recent_vessel rv
),
calculated_tracks AS (
SELECT
*,
public.ST_Length(merged_geom::geography) / 1852.0 as total_distance,
CASE
WHEN public.ST_NPoints(merged_geom) > 0 THEN
public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) -
public.ST_M(public.ST_PointN(merged_geom, 1))
ELSE
EXTRACT(EPOCH FROM
CAST(end_pos->>'time' AS timestamp) - CAST(start_pos->>'time' AS timestamp)
)
END as time_diff_seconds
FROM merged_tracks
)
SELECT
'=== FULL AGGREGATION RESULT ===' as section,
sig_src_cd,
target_id,
time_bucket,
public.ST_NPoints(merged_geom) as merged_points,
public.ST_IsValid(merged_geom) as is_valid,
total_distance,
CASE
WHEN time_diff_seconds > 0 THEN
CAST(LEAST((total_distance / (time_diff_seconds / 3600.0)), 9999.99) AS numeric(6,2))
ELSE 0
END as avg_speed,
max_speed,
total_points,
start_time,
end_time,
time_diff_seconds
FROM calculated_tracks;
-- 6. 에러 발생 가능성 체크
WITH recent_vessel AS (
SELECT
sig_src_cd,
target_id,
DATE_TRUNC('hour', MIN(time_bucket)) as hour_bucket
FROM signal.t_vessel_tracks_5min
WHERE time_bucket >= CURRENT_TIMESTAMP - INTERVAL '24 hours'
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
GROUP BY sig_src_cd, target_id, DATE_TRUNC('hour', time_bucket)
HAVING COUNT(*) >= 2
ORDER BY DATE_TRUNC('hour', MIN(time_bucket)) DESC
LIMIT 1
)
SELECT
'=== ERROR CHECK ===' as section,
COUNT(*) as total_tracks,
COUNT(CASE WHEN track_geom IS NULL THEN 1 END) as null_geom_count,
COUNT(CASE WHEN NOT public.ST_IsValid(track_geom) THEN 1 END) as invalid_geom_count,
COUNT(CASE WHEN public.ST_NPoints(track_geom) = 0 THEN 1 END) as zero_points_count,
COUNT(CASE WHEN public.ST_NPoints(track_geom) = 1 THEN 1 END) as single_point_count,
COUNT(CASE WHEN
substring(public.ST_AsText(track_geom) from 'M \\((.+)\\)') IS NULL
THEN 1 END) as regex_fail_count
FROM signal.t_vessel_tracks_5min t
INNER JOIN recent_vessel rv ON t.sig_src_cd = rv.sig_src_cd AND t.target_id = rv.target_id
WHERE t.time_bucket >= rv.hour_bucket
AND t.time_bucket < rv.hour_bucket + INTERVAL '1 hour';
-- ========================================
-- 사용 방법:
-- 1. 그냥 전체 스크립트 실행
-- 2. 자동으로 최근 선박 선택됨
-- 3. 각 섹션별 결과 확인
--
-- 에러 발생시 확인 사항:
-- - "ERROR CHECK" 섹션에서 이상값 확인
-- - "STRING_AGG RESULT"에서 all_coords 확인
-- - "GEOMETRY CREATION TEST"에서 is_valid 확인
-- ========================================
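Section 5's `avg_speed` expression (nautical miles over hours, `LEAST`-capped at 9999.99, zero for a zero time span) is easy to spot-check outside the database. A minimal sketch of the same arithmetic:

```python
def avg_speed(distance_nm, time_diff_seconds, cap=9999.99):
    # knots = NM / hours, capped like LEAST(..., 9999.99) in the SQL,
    # and 0 when the time span is zero or negative.
    if time_diff_seconds <= 0:
        return 0.0
    return round(min(distance_nm / (time_diff_seconds / 3600.0), cap), 2)

print(avg_speed(10.0, 1800))   # → 20.0 (10 NM over half an hour)
print(avg_speed(10.0, 0))      # → 0.0
print(avg_speed(100000.0, 1))  # → 9999.99 (capped)
```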

scripts/run-load-test.sh Normal file

@ -0,0 +1,288 @@
#!/bin/bash
# 선박 궤적 집계 시스템 부하 테스트 실행 스크립트
# 실행 전 JMeter가 설치되어 있어야 합니다.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"  # 스크립트가 scripts/ 아래에 있으므로 한 단계 위가 프로젝트 루트
JMETER_HOME="${JMETER_HOME:-/opt/jmeter}"
RESULTS_DIR="$PROJECT_ROOT/load-test-results"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
# 색상 정의
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# 함수: 메시지 출력
log_info() {
echo -e "${GREEN}[INFO]${NC} $1"
}
log_warn() {
echo -e "${YELLOW}[WARN]${NC} $1"
}
log_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# JMeter 설치 확인
check_jmeter() {
if [ ! -d "$JMETER_HOME" ]; then
log_error "JMeter가 설치되어 있지 않습니다. JMETER_HOME을 설정하세요."
exit 1
fi
if [ ! -f "$JMETER_HOME/bin/jmeter" ]; then
log_error "JMeter 실행 파일을 찾을 수 없습니다: $JMETER_HOME/bin/jmeter"
exit 1
fi
log_info "JMeter 경로: $JMETER_HOME"
}
# 결과 디렉토리 생성
create_results_dir() {
mkdir -p "$RESULTS_DIR/$TIMESTAMP"
log_info "결과 디렉토리 생성: $RESULTS_DIR/$TIMESTAMP"
}
# 시스템 상태 모니터링 시작
start_monitoring() {
log_info "시스템 모니터링 시작..."
# CPU, 메모리, 네트워크 사용률 모니터링
nohup vmstat 5 > "$RESULTS_DIR/$TIMESTAMP/vmstat.log" 2>&1 &
VMSTAT_PID=$!
nohup iostat -x 5 > "$RESULTS_DIR/$TIMESTAMP/iostat.log" 2>&1 &
IOSTAT_PID=$!
# 데이터베이스 연결 모니터링
# watch는 TTY가 필요해 리다이렉트 시 실패하므로 루프로 폴링
nohup bash -c 'while true; do psql -h 10.26.252.48 -U mdauser -d mdadb -t -c "SELECT count(*) FROM pg_stat_activity;"; sleep 5; done' > "$RESULTS_DIR/$TIMESTAMP/db_connections.log" 2>&1 &
DB_MON_PID=$!
echo "$VMSTAT_PID $IOSTAT_PID $DB_MON_PID" > "$RESULTS_DIR/$TIMESTAMP/monitoring.pids"
}
# 시스템 모니터링 중지
stop_monitoring() {
log_info "시스템 모니터링 중지..."
if [ -f "$RESULTS_DIR/$TIMESTAMP/monitoring.pids" ]; then
while read pid; do
kill $pid 2>/dev/null
done < "$RESULTS_DIR/$TIMESTAMP/monitoring.pids"
rm "$RESULTS_DIR/$TIMESTAMP/monitoring.pids"
fi
}
# JMeter 테스트 실행
run_jmeter_test() {
local test_file=$1
local test_name=$(basename "$test_file" .jmx)
log_info "JMeter 테스트 실행: $test_name"
# JMeter 실행
"$JMETER_HOME/bin/jmeter" \
-n \
-t "$test_file" \
-l "$RESULTS_DIR/$TIMESTAMP/${test_name}-results.jtl" \
-e \
-o "$RESULTS_DIR/$TIMESTAMP/${test_name}-report" \
-Jjmeter.save.saveservice.output_format=csv \
-Jjmeter.save.saveservice.assertion_results_failure_message=true \
-Jjmeter.save.saveservice.data_type=true \
-Jjmeter.save.saveservice.label=true \
-Jjmeter.save.saveservice.response_code=true \
-Jjmeter.save.saveservice.response_data.on_error=true \
-Jjmeter.save.saveservice.response_message=true \
-Jjmeter.save.saveservice.successful=true \
-Jjmeter.save.saveservice.thread_name=true \
-Jjmeter.save.saveservice.time=true \
-Jjmeter.save.saveservice.connect_time=true \
-Jjmeter.save.saveservice.latency=true \
-Jjmeter.save.saveservice.bytes=true \
-Jjmeter.save.saveservice.sent_bytes=true \
-Jjmeter.save.saveservice.url=true
if [ $? -eq 0 ]; then
log_info "테스트 완료: $test_name"
log_info "결과 파일: $RESULTS_DIR/$TIMESTAMP/${test_name}-results.jtl"
log_info "HTML 리포트: $RESULTS_DIR/$TIMESTAMP/${test_name}-report/index.html"
else
log_error "테스트 실패: $test_name"
return 1
fi
}
# WebSocket 부하 테스트
run_websocket_test() {
log_info "WebSocket 부하 테스트 준비..."
# Python 스크립트로 WebSocket 테스트 실행
cat > "$RESULTS_DIR/$TIMESTAMP/websocket_load_test.py" << 'EOF'
import asyncio
import websockets
import json
import time
from datetime import datetime, timedelta
import statistics
class WebSocketLoadTester:
def __init__(self, base_url, num_clients, queries_per_client):
self.base_url = base_url
self.num_clients = num_clients
self.queries_per_client = queries_per_client
self.metrics = {
'total_queries': 0,
'successful_queries': 0,
'failed_queries': 0,
'latencies': [],
'throughput': []
}
async def client_session(self, client_id):
async with websockets.connect(f"{self.base_url}/ws-tracks") as websocket:
for query_id in range(self.queries_per_client):
try:
# 쿼리 요청 생성
query = {
"startTime": (datetime.now() - timedelta(days=7)).isoformat(),
"endTime": datetime.now().isoformat(),
"viewport": {
"minLon": 124.0,
"maxLon": 132.0,
"minLat": 33.0,
"maxLat": 38.0
},
"chunkSize": 1000
}
start_time = time.time()
await websocket.send(json.dumps(query))
# 응답 수신
chunks_received = 0
while True:
response = await websocket.recv()
data = json.loads(response)
chunks_received += 1
if data.get('isLastChunk', False):
break
end_time = time.time()
latency = (end_time - start_time) * 1000 # ms
self.metrics['latencies'].append(latency)
self.metrics['successful_queries'] += 1
print(f"Client {client_id} - Query {query_id}: {latency:.2f}ms, {chunks_received} chunks")
except Exception as e:
print(f"Client {client_id} - Query {query_id} failed: {str(e)}")
self.metrics['failed_queries'] += 1
self.metrics['total_queries'] += 1
await asyncio.sleep(1) # 쿼리 간 딜레이
async def run_test(self):
print(f"Starting WebSocket load test with {self.num_clients} clients...")
start_time = time.time()
# 모든 클라이언트 동시 실행
tasks = []
for i in range(self.num_clients):
task = asyncio.create_task(self.client_session(i))
tasks.append(task)
await asyncio.gather(*tasks)
end_time = time.time()
total_duration = end_time - start_time
# 결과 분석
print("\n=== 부하 테스트 결과 ===")
print(f"총 실행 시간: {total_duration:.2f}초")
print(f"총 쿼리 수: {self.metrics['total_queries']}")
print(f"성공: {self.metrics['successful_queries']}")
print(f"실패: {self.metrics['failed_queries']}")
if self.metrics['latencies']:
print(f"평균 레이턴시: {statistics.mean(self.metrics['latencies']):.2f}ms")
print(f"최소 레이턴시: {min(self.metrics['latencies']):.2f}ms")
print(f"최대 레이턴시: {max(self.metrics['latencies']):.2f}ms")
print(f"중앙값 레이턴시: {statistics.median(self.metrics['latencies']):.2f}ms")
print(f"처리량: {self.metrics['total_queries'] / total_duration:.2f} queries/sec")
if __name__ == "__main__":
tester = WebSocketLoadTester(
base_url="ws://10.26.252.48:8090",
num_clients=10,
queries_per_client=5
)
asyncio.run(tester.run_test())
EOF
# Python WebSocket 테스트 실행
if command -v python3 &> /dev/null; then
python3 "$RESULTS_DIR/$TIMESTAMP/websocket_load_test.py" > "$RESULTS_DIR/$TIMESTAMP/websocket_test_results.log" 2>&1
else
log_warn "Python3가 설치되어 있지 않아 WebSocket 테스트를 건너뜁니다."
fi
}
# 메인 실행 함수
main() {
log_info "선박 궤적 집계 시스템 부하 테스트 시작"
log_info "타임스탬프: $TIMESTAMP"
# JMeter 확인
check_jmeter
# 결과 디렉토리 생성
create_results_dir
# 시스템 모니터링 시작
start_monitoring
# 애플리케이션 상태 확인
log_info "애플리케이션 상태 확인..."
curl -s "http://10.26.252.48:8090/actuator/health" > "$RESULTS_DIR/$TIMESTAMP/app_health_before.json"
# JMeter 테스트 실행
if [ -f "$PROJECT_ROOT/src/main/resources/jmeter/comprehensive-load-test.jmx" ]; then
run_jmeter_test "$PROJECT_ROOT/src/main/resources/jmeter/comprehensive-load-test.jmx"
fi
# WebSocket 테스트 실행
run_websocket_test
# 10분간 부하 테스트 실행
log_info "부하 테스트 진행 중... (10분)"
sleep 600
# 시스템 모니터링 중지
stop_monitoring
# 최종 애플리케이션 상태 확인
curl -s "http://10.26.252.48:8090/actuator/health" > "$RESULTS_DIR/$TIMESTAMP/app_health_after.json"
# 결과 요약
log_info "부하 테스트 완료!"
log_info "결과 디렉토리: $RESULTS_DIR/$TIMESTAMP"
# 간단한 결과 분석
if [ -f "$RESULTS_DIR/$TIMESTAMP/comprehensive-load-test-results.jtl" ]; then
log_info "JMeter 결과 요약:"
awk -F',' 'NR>1 {sum+=$2; count++} END {print "평균 응답 시간: " sum/count " ms"}' "$RESULTS_DIR/$TIMESTAMP/comprehensive-load-test-results.jtl"
fi
}
# 스크립트 실행
main "$@"
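The awk one-liner in `main` only averages column 2 of the JTL. When a fuller summary is needed, the same CSV can be post-processed in a few lines. A hedged sketch; it assumes the default JMeter CSV columns where `elapsed` is milliseconds and `success` is the string `"true"`/`"false"`, and the `summarize_jtl` helper is hypothetical:

```python
import csv
import io
import math

def summarize_jtl(text):
    # Average, p95 (nearest-rank), and error rate from a JMeter JTL CSV.
    rows = list(csv.DictReader(io.StringIO(text)))
    elapsed = sorted(int(r["elapsed"]) for r in rows)
    failures = sum(1 for r in rows if r["success"] != "true")
    p95 = elapsed[max(0, math.ceil(len(elapsed) * 0.95) - 1)]
    return {
        "avg_ms": sum(elapsed) / len(elapsed),
        "p95_ms": p95,
        "error_rate": failures / len(rows),
    }

sample = (
    "timeStamp,elapsed,label,responseCode,responseMessage,threadName,dataType,success\n"
    "1,100,q,200,OK,t1,text,true\n"
    "2,200,q,200,OK,t1,text,true\n"
    "3,700,q,500,ERR,t1,text,false\n"
)
print(summarize_jtl(sample))
```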


@ -0,0 +1,190 @@
#!/bin/bash
# Query DB 서버에서 최적화된 실행 스크립트
# Rocky Linux 환경에 맞춰 조정됨
# Java 17 경로 명시적 지정
# 애플리케이션 경로
APP_HOME="/devdata/apps/bridge-db-monitoring"
JAR_FILE="$APP_HOME/vessel-batch-aggregation.jar"
# Java 17 경로
JAVA_HOME="/devdata/apps/jdk-17.0.8"
JAVA_BIN="$JAVA_HOME/bin/java"
# 로그 디렉토리
LOG_DIR="$APP_HOME/logs"
mkdir -p $LOG_DIR
echo "================================================"
echo "Vessel Batch Aggregation - Query Server Edition"
echo "Start Time: $(date)"
echo "================================================"
# 경로 확인
echo "Environment Check:"
echo "- App Home: $APP_HOME"
echo "- JAR File: $JAR_FILE"
echo "- Java Path: $JAVA_BIN"
echo "- Java Version: $($JAVA_BIN -version 2>&1 | head -1)"
# JAR 파일 존재 확인
if [ ! -f "$JAR_FILE" ]; then
echo "ERROR: JAR file not found at $JAR_FILE"
exit 1
fi
# Java 실행 파일 확인
if [ ! -x "$JAVA_BIN" ]; then
echo "ERROR: Java not found or not executable at $JAVA_BIN"
exit 1
fi
# 서버 정보 확인
echo ""
echo "Server Info:"
echo "- Hostname: $(hostname)"
echo "- CPU Cores: $(nproc)"
echo "- Total Memory: $(free -h | grep Mem | awk '{print $2}')"
echo "- PostgreSQL Version: $(psql --version 2>/dev/null | head -1 || echo 'PostgreSQL client not in PATH')"
# 환경 변수 설정 (localhost 최적화)
export SPRING_PROFILES_ACTIVE=prod
# Query DB는 10.29.17.90(mpcdb2)로, Batch Meta DB만 localhost로 오버라이드
export SPRING_DATASOURCE_QUERY_JDBC_URL="jdbc:postgresql://10.29.17.90:5432/mpcdb2?options=-csearch_path=signal,public&assumeMinServerVersion=12&reWriteBatchedInserts=true"
export SPRING_DATASOURCE_BATCH_JDBC_URL="jdbc:postgresql://localhost:5432/mdadb?currentSchema=public&assumeMinServerVersion=12&reWriteBatchedInserts=true"
# 서버 CPU 코어 수에 따른 병렬 처리 조정
CPU_CORES=$(nproc)
export VESSEL_BATCH_PARTITION_SIZE=$((CPU_CORES * 2))
export VESSEL_BATCH_BULK_INSERT_PARALLEL_THREADS=$((CPU_CORES / 2))
echo ""
echo "Optimized Settings:"
echo "- Partition Size: $VESSEL_BATCH_PARTITION_SIZE"
echo "- Parallel Threads: $VESSEL_BATCH_BULK_INSERT_PARALLEL_THREADS"
echo "- Query DB: 10.29.17.90 (mpcdb2)"
echo "- Batch Meta DB: localhost (optimized)"
# JVM 옵션 (서버 메모리에 맞게 조정)
TOTAL_MEM=$(free -g | grep Mem | awk '{print $2}')
JVM_HEAP=$((TOTAL_MEM / 4)) # 전체 메모리의 25% 사용
# 최소 16GB, 최대 64GB로 제한
if [ $JVM_HEAP -lt 16 ]; then
JVM_HEAP=16
elif [ $JVM_HEAP -gt 64 ]; then
JVM_HEAP=64
fi
JAVA_OPTS="-Xms${JVM_HEAP}g -Xmx${JVM_HEAP}g \
-XX:+UseG1GC \
-XX:G1HeapRegionSize=32m \
-XX:MaxGCPauseMillis=200 \
-XX:InitiatingHeapOccupancyPercent=35 \
-XX:G1ReservePercent=15 \
-XX:+UseStringDeduplication \
-XX:+ParallelRefProcEnabled \
-XX:+ExplicitGCInvokesConcurrent \
-XX:ParallelGCThreads=$((CPU_CORES / 2)) \
-XX:ConcGCThreads=$((CPU_CORES / 4)) \
-XX:MaxMetaspaceSize=512m \
-XX:+HeapDumpOnOutOfMemoryError \
-XX:HeapDumpPath=$LOG_DIR/heapdump.hprof \
-Xlog:gc*:file=$LOG_DIR/gc.log:time,uptime,level,tags:filecount=5,filesize=100M \
-Dfile.encoding=UTF-8 \
-Duser.timezone=Asia/Seoul \
-Djava.security.egd=file:/dev/./urandom \
-Dspring.profiles.active=prod"
echo "- JVM Heap Size: ${JVM_HEAP}GB"
# 기존 프로세스 확인 및 종료
echo ""
echo "Checking for existing process..."
PID=$(pgrep -f "$JAR_FILE")
if [ ! -z "$PID" ]; then
echo "Stopping existing process (PID: $PID)..."
kill -15 $PID
# 프로세스 종료 대기 (최대 30초)
for i in {1..30}; do
if ! kill -0 $PID 2>/dev/null; then
echo "Process stopped successfully."
break
fi
if [ $i -eq 30 ]; then
echo "Force killing process..."
kill -9 $PID
fi
sleep 1
done
fi
# 작업 디렉토리로 이동
cd $APP_HOME
# 애플리케이션 실행 (nice로 우선순위 조정)
echo ""
echo "Starting application with reduced priority..."
echo "Command: nice -n 10 $JAVA_BIN $JAVA_OPTS -jar $JAR_FILE"
echo ""
# nohup으로 백그라운드 실행
nohup nice -n 10 $JAVA_BIN $JAVA_OPTS -jar $JAR_FILE \
> $LOG_DIR/app.log 2>&1 &
NEW_PID=$!
echo "Application started with PID: $NEW_PID"
# PID 파일 생성
echo $NEW_PID > $APP_HOME/vessel-batch.pid
# 시작 확인 (30초 대기)
echo "Waiting for application startup..."
STARTUP_SUCCESS=false
for i in {1..30}; do
if grep -q "Started SignalBatchApplication" $LOG_DIR/app.log 2>/dev/null; then
echo "✅ Application started successfully!"
STARTUP_SUCCESS=true
break
fi
echo -n "."
sleep 1
done
if [ "$STARTUP_SUCCESS" = false ]; then
echo ""
echo "⚠️ Application startup timeout. Check logs for errors."
echo "Log file: $LOG_DIR/app.log"
tail -20 $LOG_DIR/app.log
fi
echo ""
echo "================================================"
echo "Deployment Complete!"
echo "- PID: $NEW_PID"
echo "- PID File: $APP_HOME/vessel-batch.pid"
echo "- Log: $LOG_DIR/app.log"
echo "- Monitor: tail -f $LOG_DIR/app.log"
echo "================================================"
# 초기 상태 확인
sleep 5
echo ""
echo "Initial Status Check:"
curl -s http://localhost:8090/actuator/health 2>/dev/null | python3 -m json.tool || echo "Health endpoint not yet available"
# 리소스 사용량 표시
echo ""
echo "Resource Usage:"
ps -p $NEW_PID -o pid,pcpu,pmem,rss,etime,cmd
# 빠른 명령어 안내
echo ""
echo "Useful Commands:"
echo "- Stop: kill -15 \$(cat $APP_HOME/vessel-batch.pid)"
echo "- Logs: tail -f $LOG_DIR/app.log"
echo "- Status: curl http://localhost:8090/actuator/health"
echo "- Monitor: $APP_HOME/monitor-query-server.sh"


@ -0,0 +1,184 @@
#!/bin/bash
# Query 전용 서버 실행 스크립트 (10.29.17.90)
# 배치 Job 없이 조회 API만 제공
# Java 17 경로 명시적 지정
# 애플리케이션 경로
APP_HOME="/devdata/apps/bridge-db-monitoring"
JAR_FILE="$APP_HOME/vessel-batch-aggregation.jar"
# Java 17 경로
JAVA_HOME="/devdata/apps/jdk-17.0.8"
JAVA_BIN="$JAVA_HOME/bin/java"
# 로그 디렉토리
LOG_DIR="$APP_HOME/logs"
mkdir -p $LOG_DIR
echo "================================================"
echo "Vessel Query API Server - Query Only Mode"
echo "Start Time: $(date)"
echo "================================================"
# 경로 확인
echo "Environment Check:"
echo "- App Home: $APP_HOME"
echo "- JAR File: $JAR_FILE"
echo "- Java Path: $JAVA_BIN"
echo "- Java Version: $($JAVA_BIN -version 2>&1 | head -1)"
# JAR 파일 존재 확인
if [ ! -f "$JAR_FILE" ]; then
echo "ERROR: JAR file not found at $JAR_FILE"
exit 1
fi
# Java 실행 파일 확인
if [ ! -x "$JAVA_BIN" ]; then
echo "ERROR: Java not found or not executable at $JAVA_BIN"
exit 1
fi
# 서버 정보 확인
echo ""
echo "Server Info:"
echo "- Hostname: $(hostname)"
echo "- CPU Cores: $(nproc)"
echo "- Total Memory: $(free -h | grep Mem | awk '{print $2}')"
echo "- PostgreSQL Version: $(psql --version 2>/dev/null | head -1 || echo 'PostgreSQL client not in PATH')"
# 환경 변수 설정 (query 프로파일 - 배치 비활성화!)
export SPRING_PROFILES_ACTIVE=query
echo ""
echo "Profile Settings:"
echo "- Active Profile: QUERY (Batch Jobs Disabled)"
echo "- Query DB: 10.29.17.90:5432/mpcdb2 (Local DB)"
echo "- Batch Jobs: DISABLED"
echo "- Scheduler: DISABLED"
# JVM 옵션 (서버 메모리에 맞게 조정)
TOTAL_MEM=$(free -g | grep Mem | awk '{print $2}')
JVM_HEAP=$((TOTAL_MEM / 8)) # 전체 메모리의 12.5% 사용 (배치 없으므로 적게)
# 최소 4GB, 최대 16GB로 제한
if [ $JVM_HEAP -lt 4 ]; then
JVM_HEAP=4
elif [ $JVM_HEAP -gt 16 ]; then
JVM_HEAP=16
fi
CPU_CORES=$(nproc)
JAVA_OPTS="-Xms${JVM_HEAP}g -Xmx${JVM_HEAP}g \
-XX:+UseG1GC \
-XX:G1HeapRegionSize=32m \
-XX:MaxGCPauseMillis=200 \
-XX:InitiatingHeapOccupancyPercent=35 \
-XX:G1ReservePercent=15 \
-XX:+UseStringDeduplication \
-XX:+ParallelRefProcEnabled \
-XX:+ExplicitGCInvokesConcurrent \
-XX:ParallelGCThreads=$((CPU_CORES / 2)) \
-XX:ConcGCThreads=$((CPU_CORES / 4)) \
-XX:MaxMetaspaceSize=512m \
-XX:+HeapDumpOnOutOfMemoryError \
-XX:HeapDumpPath=$LOG_DIR/heapdump.hprof \
-Xlog:gc*:file=$LOG_DIR/gc.log:time,uptime,level,tags:filecount=5,filesize=100M \
-Dfile.encoding=UTF-8 \
-Duser.timezone=Asia/Seoul \
-Djava.security.egd=file:/dev/./urandom \
-Dspring.profiles.active=query"
echo "- JVM Heap Size: ${JVM_HEAP}GB"
# 기존 프로세스 확인 및 종료
echo ""
echo "Checking for existing process..."
PID=$(pgrep -f "$JAR_FILE")
if [ ! -z "$PID" ]; then
echo "Stopping existing process (PID: $PID)..."
kill -15 $PID
# 프로세스 종료 대기 (최대 30초)
for i in {1..30}; do
if ! kill -0 $PID 2>/dev/null; then
echo "Process stopped successfully."
break
fi
if [ $i -eq 30 ]; then
echo "Force killing process..."
kill -9 $PID
fi
sleep 1
done
fi
# 작업 디렉토리로 이동
cd $APP_HOME
# 애플리케이션 실행
echo ""
echo "Starting application in QUERY-ONLY mode..."
echo "Command: $JAVA_BIN $JAVA_OPTS -jar $JAR_FILE"
echo ""
# nohup으로 백그라운드 실행
nohup $JAVA_BIN $JAVA_OPTS -jar $JAR_FILE \
> $LOG_DIR/app.log 2>&1 &
NEW_PID=$!
echo "Application started with PID: $NEW_PID"
# PID 파일 생성
echo $NEW_PID > $APP_HOME/vessel-query.pid
# 시작 확인 (30초 대기)
echo "Waiting for application startup..."
STARTUP_SUCCESS=false
for i in {1..30}; do
if grep -q "Started SignalBatchApplication" $LOG_DIR/app.log 2>/dev/null; then
echo "✅ Application started successfully!"
STARTUP_SUCCESS=true
break
fi
echo -n "."
sleep 1
done
if [ "$STARTUP_SUCCESS" = false ]; then
echo ""
echo "⚠️ Application startup timeout. Check logs for errors."
echo "Log file: $LOG_DIR/app.log"
tail -20 $LOG_DIR/app.log
fi
echo ""
echo "================================================"
echo "Deployment Complete!"
echo "- Mode: QUERY ONLY (No Batch Jobs)"
echo "- PID: $NEW_PID"
echo "- PID File: $APP_HOME/vessel-query.pid"
echo "- Log: $LOG_DIR/app.log"
echo "- Monitor: tail -f $LOG_DIR/app.log"
echo "================================================"
# 초기 상태 확인
sleep 5
echo ""
echo "Initial Status Check:"
curl -s http://localhost:8090/actuator/health 2>/dev/null | python3 -m json.tool || echo "Health endpoint not yet available"
# 리소스 사용량 표시
echo ""
echo "Resource Usage:"
ps -p $NEW_PID -o pid,pcpu,pmem,rss,etime,cmd
# 빠른 명령어 안내
echo ""
echo "Useful Commands:"
echo "- Stop: kill -15 \$(cat $APP_HOME/vessel-query.pid)"
echo "- Logs: tail -f $LOG_DIR/app.log"
echo "- Status: curl http://localhost:8090/actuator/health"
echo "- API Test: curl http://localhost:8090/api/gis/areas"

scripts/server-logs.bat Normal file

@ -0,0 +1,40 @@
@echo off
chcp 65001 >nul
REM ===============================================
REM Signal Batch Server Log Viewer
REM ===============================================
setlocal
set SERVER_IP=10.26.252.48
set SERVER_USER=root
set SERVER_PATH=/devdata/apps/bridge-db-monitoring
echo ===============================================
echo Signal Batch Server Log Viewer
echo ===============================================
echo Server: %SERVER_IP%
echo Time: %date% %time%
echo.
if "%1"=="tail" (
echo Starting real-time log monitoring... (Ctrl+C to exit)
ssh %SERVER_USER%@%SERVER_IP% "cd %SERVER_PATH% && ./vessel-batch-control.sh logs"
) else if "%1"=="errors" (
echo Retrieving recent error logs...
ssh %SERVER_USER%@%SERVER_IP% "cd %SERVER_PATH% && ./vessel-batch-control.sh errors"
) else if "%1"=="stats" (
echo Retrieving performance statistics...
ssh %SERVER_USER%@%SERVER_IP% "cd %SERVER_PATH% && ./vessel-batch-control.sh stats"
) else (
echo Usage:
echo server-logs.bat - Show recent 50 lines
echo server-logs.bat tail - Real-time log monitoring
echo server-logs.bat errors - Show error logs only
echo server-logs.bat stats - Show performance statistics
echo.
echo Recent 50 lines of log:
ssh %SERVER_USER%@%SERVER_IP% "tail -50 %SERVER_PATH%/logs/app.log 2>/dev/null || echo 'Log file not available'"
)
endlocal

scripts/server-status.bat Normal file

@ -0,0 +1,64 @@
@echo off
chcp 65001 >nul
REM ===============================================
REM Signal Batch Server Status Checker
REM ===============================================
setlocal enabledelayedexpansion
REM Configuration
set "SERVER_IP=10.26.252.48"
set "SERVER_USER=root"
set "SERVER_PATH=/devdata/apps/bridge-db-monitoring"
echo ===============================================
echo Signal Batch Server Status
echo ===============================================
echo [INFO] Query Time: !date! !time!
echo [INFO] Target Server: !SERVER_IP!
REM 1. Server Connection Test
echo.
echo =============== Server Connection Test ===============
ssh !SERVER_USER!@!SERVER_IP! "echo 'Server connection OK'" 2>nul
set CONNECTION_RESULT=!ERRORLEVEL!
if !CONNECTION_RESULT! neq 0 (
echo [ERROR] Server connection failed
exit /b 1
)
echo [INFO] Server connection successful
REM 2. Application Status
echo.
echo =============== Application Status ===============
ssh !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh status"
REM 3. Additional Status Information
echo.
echo =============== Additional Status Information ===============
REM Health Check
echo [INFO] Health Check:
ssh !SERVER_USER!@!SERVER_IP! "curl -s http://localhost:8090/actuator/health --max-time 5 2>/dev/null | python -m json.tool 2>/dev/null || echo 'Health endpoint not available'"
echo.
REM Metrics Information
echo [INFO] Metrics Information:
ssh !SERVER_USER!@!SERVER_IP! "curl -s http://localhost:8090/actuator/metrics --max-time 5 2>/dev/null | head -20 || echo 'Metrics endpoint not available'"
echo.
REM Disk Usage
echo [INFO] Disk Usage:
ssh !SERVER_USER!@!SERVER_IP! "df -h !SERVER_PATH!"
echo.
REM Memory Usage
echo [INFO] Memory Usage:
ssh !SERVER_USER!@!SERVER_IP! "free -h"
echo.
REM Recent Log Check
echo [INFO] Recent Logs (last 10 lines):
ssh !SERVER_USER!@!SERVER_IP! "tail -10 !SERVER_PATH!/logs/app.log 2>/dev/null || echo 'Log file not available'"
endlocal

59
scripts/setup-ssh-key.bat Normal file

@ -0,0 +1,59 @@
@echo off
chcp 65001 >nul
setlocal enabledelayedexpansion
echo ===============================================
echo SSH Key Setup for Server Deployment
echo ===============================================
set "SERVER_IP=10.26.252.51"
set "SERVER_USER=root"
echo [INFO] Setting up SSH key authentication for %SERVER_USER%@%SERVER_IP%
echo.
REM Check if SSH key exists
if not exist "%USERPROFILE%\.ssh\id_rsa.pub" (
echo [INFO] SSH key not found. Generating new SSH key...
ssh-keygen -t rsa -b 4096 -f "%USERPROFILE%\.ssh\id_rsa" -N ""
if !ERRORLEVEL! neq 0 (
echo [ERROR] Failed to generate SSH key
pause
exit /b 1
)
echo [SUCCESS] SSH key generated
)
echo.
echo [INFO] Copying SSH key to server...
echo [INFO] You will be prompted for the server password
echo.
type "%USERPROFILE%\.ssh\id_rsa.pub" | ssh %SERVER_USER%@%SERVER_IP% "mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys && echo '[SUCCESS] SSH key installed'"
if !ERRORLEVEL! neq 0 (
echo [ERROR] Failed to copy SSH key
echo.
echo Please ensure:
echo - Server is accessible at %SERVER_IP%
echo - You have the correct password for %SERVER_USER%
echo - SSH service is running on the server
pause
exit /b 1
)
echo.
echo ===============================================
echo [SUCCESS] SSH Key Setup Complete!
echo ===============================================
echo.
echo Testing connection...
ssh -o BatchMode=yes -o ConnectTimeout=10 %SERVER_USER%@%SERVER_IP% "echo '[SUCCESS] SSH key authentication working!'"
if !ERRORLEVEL! equ 0 (
echo.
echo You can now run deploy-only.bat without password
) else (
echo [WARN] Key authentication test failed
echo Please try running this script again
)
pause


@ -0,0 +1,67 @@
-- Force-stop running (STARTED) batch Jobs and Steps
-- CAUTION: this does not terminate any actually running process.
-- It only updates DB state, so stop the application before running it.
-- 1. Check currently running Jobs
SELECT
'=== RUNNING JOBS ===' as status,
JOB_EXECUTION_ID,
JOB_INSTANCE_ID,
START_TIME,
STATUS,
(SELECT JOB_NAME FROM BATCH_JOB_INSTANCE WHERE JOB_INSTANCE_ID = bje.JOB_INSTANCE_ID) as JOB_NAME
FROM BATCH_JOB_EXECUTION bje
WHERE STATUS IN ('STARTED', 'STARTING', 'STOPPING')
ORDER BY START_TIME DESC;
-- 2. Check currently running Steps
SELECT
'=== RUNNING STEPS ===' as status,
bse.STEP_EXECUTION_ID,
bse.JOB_EXECUTION_ID,
bse.STEP_NAME,
bse.STATUS,
bse.START_TIME
FROM BATCH_STEP_EXECUTION bse
WHERE STATUS IN ('STARTED', 'STARTING', 'STOPPING')
ORDER BY START_TIME DESC;
-- 3. Mark running Steps as STOPPED
UPDATE BATCH_STEP_EXECUTION
SET
STATUS = 'STOPPED',
EXIT_CODE = 'STOPPED',
EXIT_MESSAGE = 'Manually stopped - Original status: ' || STATUS,
END_TIME = CURRENT_TIMESTAMP,
LAST_UPDATED = CURRENT_TIMESTAMP
WHERE STATUS IN ('STARTED', 'STARTING', 'STOPPING');
-- 4. Mark running Jobs as STOPPED
UPDATE BATCH_JOB_EXECUTION
SET
STATUS = 'STOPPED',
EXIT_CODE = 'STOPPED',
EXIT_MESSAGE = 'Manually stopped - Original status: ' || STATUS,
END_TIME = CURRENT_TIMESTAMP,
LAST_UPDATED = CURRENT_TIMESTAMP
WHERE STATUS IN ('STARTED', 'STARTING', 'STOPPING');
-- 5. Verify the result
SELECT
'=== AFTER STOP ===' as status,
COUNT(*) as running_jobs
FROM BATCH_JOB_EXECUTION
WHERE STATUS IN ('STARTED', 'STARTING', 'STOPPING');
SELECT
'=== STOPPED JOBS ===' as status,
JOB_EXECUTION_ID,
JOB_INSTANCE_ID,
START_TIME,
END_TIME,
STATUS,
EXIT_CODE
FROM BATCH_JOB_EXECUTION
WHERE STATUS = 'STOPPED'
ORDER BY JOB_EXECUTION_ID DESC
LIMIT 10;
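A hedged wrapper for applying this cleanup safely. The file name `stop-running-jobs.sql`, the connection URL, and the helper `build_cleanup_cmd` are all illustrative assumptions, not part of the repository; as the header warns, stop the application before touching the metadata:

```shell
#!/bin/bash
# Sketch: build the psql command used to apply the metadata cleanup.
# DB URL and SQL file name are assumed values; adjust for your environment.
set -euo pipefail

build_cleanup_cmd() {
    local db_url=$1 sql_file=$2
    # ON_ERROR_STOP aborts on the first failing statement.
    echo "psql ${db_url} -v ON_ERROR_STOP=1 -f ${sql_file}"
}

# Intended order of operations:
#   ./vessel-batch-control.sh stop                      # 1) stop the app
#   $(build_cleanup_cmd "$DB_URL" stop-running-jobs.sql)  # 2) clean metadata
build_cleanup_cmd "postgresql://localhost/batch" "stop-running-jobs.sql"
# -> psql postgresql://localhost/batch -v ON_ERROR_STOP=1 -f stop-running-jobs.sql
```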

170
scripts/sync-nexus.sh Normal file

@ -0,0 +1,170 @@
#!/bin/bash
# =============================================================================
# sync-nexus.sh - Sync local Maven dependencies to Nexus
#
# Usage:
#   ./scripts/sync-nexus.sh            # actual upload
#   ./scripts/sync-nexus.sh --dry-run  # list upload candidates only
# =============================================================================
set -eo pipefail
# --- Initialize SDKMAN, if installed ---
if [ -f "$HOME/.sdkman/bin/sdkman-init.sh" ]; then
source "$HOME/.sdkman/bin/sdkman-init.sh" 2>/dev/null || true
fi
# --- Configuration ---
NEXUS_URL="http://10.26.252.39:8081"
REPO_ID="mda-backend-repository"
NEXUS_USER="admin"
NEXUS_PASS="8932"
LOCAL_REPO="$HOME/.m2/repository"
# --- Option parsing ---
DRY_RUN=false
if [[ "${1:-}" == "--dry-run" ]]; then
DRY_RUN=true
echo "=== DRY RUN mode (nothing will be uploaded) ==="
fi
# --- Counters ---
TOTAL=0
SKIPPED=0
UPLOADED=0
FAILED=0
# Check whether an artifact already exists in Nexus (GET the .pom and check the HTTP status)
check_exists() {
local group_path=$1
local artifact_id=$2
local version=$3
local pom_url="${NEXUS_URL}/repository/${REPO_ID}/${group_path}/${artifact_id}/${version}/${artifact_id}-${version}.pom"
local http_code
http_code=$(curl -s -o /dev/null -w "%{http_code}" -u "${NEXUS_USER}:${NEXUS_PASS}" --connect-timeout 5 "$pom_url" < /dev/null)
[[ "$http_code" == "200" ]]
}
# Upload a single file (HTTP PUT)
upload_file() {
local file_path=$1
local remote_path=$2
local url="${NEXUS_URL}/repository/${REPO_ID}/${remote_path}"
if [ ! -f "$file_path" ]; then
return 1
fi
local http_code
http_code=$(curl -s -o /dev/null -w "%{http_code}" -u "${NEXUS_USER}:${NEXUS_PASS}" --upload-file "$file_path" --connect-timeout 10 --max-time 120 "$url" < /dev/null)
[[ "$http_code" == "201" || "$http_code" == "200" ]]
}
# Upload an artifact (POM plus primary artifact)
upload_artifact() {
local group_id=$1
local artifact_id=$2
local version=$3
local packaging=$4
local group_path
group_path=$(echo "$group_id" | tr '.' '/')
local base_dir="${LOCAL_REPO}/${group_path}/${artifact_id}/${version}"
local base_name="${artifact_id}-${version}"
local remote_base="${group_path}/${artifact_id}/${version}"
local success=true
# Upload the POM (required)
local pom_file="${base_dir}/${base_name}.pom"
if [ -f "$pom_file" ]; then
if upload_file "$pom_file" "${remote_base}/${base_name}.pom"; then
:
else
echo " [FAIL] POM 업로드 실패"
success=false
fi
fi
# Upload the JAR (unless packaging is pom)
if [[ "$packaging" != "pom" ]]; then
local jar_file="${base_dir}/${base_name}.${packaging}"
if [ -f "$jar_file" ]; then
if upload_file "$jar_file" "${remote_base}/${base_name}.${packaging}"; then
:
else
echo " [FAIL] ${packaging} 업로드 실패"
success=false
fi
fi
fi
$success
}
echo ""
echo "=== Nexus 동기화 시작 ==="
echo " Nexus: ${NEXUS_URL}/repository/${REPO_ID}"
echo " 로컬: ${LOCAL_REPO}"
echo ""
# Verify Nexus connectivity
if ! curl -s -o /dev/null -w "" -u "${NEXUS_USER}:${NEXUS_PASS}" --connect-timeout 5 "${NEXUS_URL}/service/rest/v1/repositories" 2>/dev/null; then
echo "[ERROR] Nexus(${NEXUS_URL})에 연결할 수 없습니다."
exit 1
fi
echo "[OK] Nexus 연결 확인"
echo ""
# Extract the GAV list via mvn dependency:list
echo "Extracting dependency list..."
DEP_LIST=$(mvn dependency:list -DoutputAbsoluteArtifactFilename=true 2>/dev/null | grep "^\[INFO\] " | sed 's/\[INFO\] //' | sed 's/ -- .*//')
echo ""
echo "--- 동기화 진행 ---"
while IFS= read -r line; do
# Format: groupId:artifactId:packaging:version:scope:/path/to/file
IFS=':' read -r group_id artifact_id packaging version scope rest <<< "$line"
if [[ -z "$group_id" || -z "$artifact_id" || -z "$version" ]]; then
continue
fi
TOTAL=$((TOTAL + 1))
local_group_path=$(echo "$group_id" | tr '.' '/')
# Check whether it already exists in Nexus
if check_exists "$local_group_path" "$artifact_id" "$version"; then
SKIPPED=$((SKIPPED + 1))
continue
fi
# New artifact found
echo "[NEW] ${group_id}:${artifact_id}:${version} (${packaging})"
if $DRY_RUN; then
UPLOADED=$((UPLOADED + 1))
else
if upload_artifact "$group_id" "$artifact_id" "$version" "$packaging"; then
echo " -> 업로드 완료"
UPLOADED=$((UPLOADED + 1))
else
echo " -> 업로드 실패"
FAILED=$((FAILED + 1))
fi
fi
done <<< "$DEP_LIST"
echo ""
echo "=== 동기화 완료 ==="
echo " 전체: ${TOTAL}"
echo " 스킵 (이미 존재): ${SKIPPED}"
if $DRY_RUN; then
echo " 업로드 대상: ${UPLOADED}"
else
echo " 업로드 성공: ${UPLOADED}"
echo " 업로드 실패: ${FAILED}"
fi
echo ""


@ -0,0 +1,135 @@
-- Test INSERT queries for t_abnormal_tracks
-- Exercises the PostGIS ST_GeomFromText function
-- 1. Basic test (uses the track_geom column)
INSERT INTO signal.t_abnormal_tracks (
sig_src_cd,
target_id,
time_bucket,
track_geom,
abnormal_type,
abnormal_reason,
distance_nm,
avg_speed,
max_speed,
point_count,
source_table
) VALUES (
'AIS', -- sig_src_cd
'TEST_VESSEL_001', -- target_id
'2025-10-10 12:00:00'::timestamp, -- time_bucket
ST_GeomFromText('LINESTRING M(126.0 37.0 1728547200, 126.1 37.1 1728547260)', 4326), -- track_geom (LineString M type)
'EXCESSIVE_SPEED', -- abnormal_type
'{"reason": "Speed exceeds 200 knots", "detected_speed": 250.5}'::jsonb, -- abnormal_reason
15.5, -- distance_nm
180.3, -- avg_speed
250.5, -- max_speed
10, -- point_count
'hourly' -- source_table
)
ON CONFLICT (sig_src_cd, target_id, time_bucket, source_table)
DO UPDATE SET
track_geom = EXCLUDED.track_geom,
abnormal_type = EXCLUDED.abnormal_type,
abnormal_reason = EXCLUDED.abnormal_reason,
distance_nm = EXCLUDED.distance_nm,
avg_speed = EXCLUDED.avg_speed,
max_speed = EXCLUDED.max_speed,
point_count = EXCLUDED.point_count,
detected_at = NOW();
-- 2. Variant using the track_geom_v2 column
INSERT INTO signal.t_abnormal_tracks (
sig_src_cd,
target_id,
time_bucket,
track_geom_v2,
abnormal_type,
abnormal_reason,
distance_nm,
avg_speed,
max_speed,
point_count,
source_table
) VALUES (
'LRIT', -- sig_src_cd
'TEST_VESSEL_002', -- target_id
'2025-10-10 13:00:00'::timestamp, -- time_bucket
ST_GeomFromText('LINESTRING M(127.0 38.0 1728550800, 127.2 38.2 1728550860, 127.4 38.4 1728550920)', 4326), -- track_geom_v2
'UNREALISTIC_DISTANCE', -- abnormal_type
'{"reason": "Distance too large for time interval", "distance_nm": 120.0, "time_interval_minutes": 5}'::jsonb, -- abnormal_reason
120.0, -- distance_nm
1440.0, -- avg_speed (120nm / 5min = 1440 knots)
1500.0, -- max_speed
3, -- point_count
'5min' -- source_table
)
ON CONFLICT (sig_src_cd, target_id, time_bucket, source_table)
DO UPDATE SET
track_geom_v2 = EXCLUDED.track_geom_v2,
abnormal_type = EXCLUDED.abnormal_type,
abnormal_reason = EXCLUDED.abnormal_reason,
distance_nm = EXCLUDED.distance_nm,
avg_speed = EXCLUDED.avg_speed,
max_speed = EXCLUDED.max_speed,
point_count = EXCLUDED.point_count,
detected_at = NOW();
-- 3. Version with the public schema explicitly qualified
INSERT INTO signal.t_abnormal_tracks (
sig_src_cd,
target_id,
time_bucket,
track_geom,
abnormal_type,
abnormal_reason,
distance_nm,
avg_speed,
max_speed,
point_count,
source_table
) VALUES (
'VPASS', -- sig_src_cd
'TEST_VESSEL_003', -- target_id
'2025-10-10 14:00:00'::timestamp, -- time_bucket
public.ST_GeomFromText('LINESTRING M(128.0 36.0 1728554400, 128.1 36.1 1728554460)', 4326), -- public schema qualified
'SUDDEN_DIRECTION_CHANGE', -- abnormal_type
'{"reason": "Unrealistic turn angle", "angle_degrees": 175}'::jsonb, -- abnormal_reason
8.5, -- distance_nm
102.0, -- avg_speed
120.0, -- max_speed
2, -- point_count
'hourly' -- source_table
)
ON CONFLICT (sig_src_cd, target_id, time_bucket, source_table)
DO UPDATE SET
track_geom = EXCLUDED.track_geom,
abnormal_type = EXCLUDED.abnormal_type,
abnormal_reason = EXCLUDED.abnormal_reason,
distance_nm = EXCLUDED.distance_nm,
avg_speed = EXCLUDED.avg_speed,
max_speed = EXCLUDED.max_speed,
point_count = EXCLUDED.point_count,
detected_at = NOW();
-- 4. Verification query
SELECT
sig_src_cd,
target_id,
time_bucket,
abnormal_type,
abnormal_reason,
distance_nm,
avg_speed,
max_speed,
point_count,
source_table,
ST_AsText(track_geom) as track_geom_wkt,
ST_AsText(track_geom_v2) as track_geom_v2_wkt,
detected_at
FROM signal.t_abnormal_tracks
WHERE target_id LIKE 'TEST_VESSEL_%'
ORDER BY time_bucket DESC;
-- 5. Cleanup (delete test data)
-- DELETE FROM signal.t_abnormal_tracks WHERE target_id LIKE 'TEST_VESSEL_%';


@ -0,0 +1,496 @@
-- ========================================
-- Daily aggregation query validation script
-- Tests CAST usage and type compatibility
-- ========================================
-- 1. Create temporary test tables
DROP TABLE IF EXISTS test_vessel_tracks_hourly_for_daily CASCADE;
DROP TABLE IF EXISTS test_vessel_tracks_daily CASCADE;
CREATE TABLE test_vessel_tracks_hourly_for_daily (
sig_src_cd VARCHAR(10),
target_id VARCHAR(20),
time_bucket TIMESTAMP,
track_geom geometry(LineStringM, 4326),
distance_nm NUMERIC(10,2),
avg_speed NUMERIC(6,2),
max_speed NUMERIC(6,2),
point_count INTEGER,
start_position JSONB,
end_position JSONB,
PRIMARY KEY (sig_src_cd, target_id, time_bucket)
);
CREATE TABLE test_vessel_tracks_daily (
sig_src_cd VARCHAR(10),
target_id VARCHAR(20),
time_bucket TIMESTAMP,
track_geom geometry(LineStringM, 4326),
distance_nm NUMERIC(10,2),
avg_speed NUMERIC(6,2),
max_speed NUMERIC(6,2),
point_count INTEGER,
start_position JSONB,
end_position JSONB,
PRIMARY KEY (sig_src_cd, target_id, time_bucket)
);
-- 2. Insert sample data (one day of hourly rows)
-- Scenario 1: normally moving vessel (part of the 24 hours)
INSERT INTO test_vessel_tracks_hourly_for_daily VALUES
(
'000001',
'TEST001',
'2025-01-07 00:00:00',
public.ST_GeomFromText('LINESTRING M(126.5 37.5 1736179200, 126.52 37.52 1736182800)', 4326),
5.5,
10.5,
12.0,
12,
'{"lat": 37.5, "lon": 126.5, "time": "2025-01-07 00:00:00", "sog": 10.5}'::jsonb,
'{"lat": 37.52, "lon": 126.52, "time": "2025-01-07 01:00:00", "sog": 11.0}'::jsonb
),
(
'000001',
'TEST001',
'2025-01-07 01:00:00',
public.ST_GeomFromText('LINESTRING M(126.52 37.52 1736182800, 126.54 37.54 1736186400)', 4326),
6.0,
11.0,
13.0,
12,
'{"lat": 37.52, "lon": 126.52, "time": "2025-01-07 01:00:00", "sog": 11.0}'::jsonb,
'{"lat": 37.54, "lon": 126.54, "time": "2025-01-07 02:00:00", "sog": 12.0}'::jsonb
),
(
'000001',
'TEST001',
'2025-01-07 02:00:00',
public.ST_GeomFromText('LINESTRING M(126.54 37.54 1736186400, 126.56 37.56 1736190000)', 4326),
5.8,
10.8,
12.5,
12,
'{"lat": 37.54, "lon": 126.54, "time": "2025-01-07 02:00:00", "sog": 10.8}'::jsonb,
'{"lat": 37.56, "lon": 126.56, "time": "2025-01-07 03:00:00", "sog": 11.5}'::jsonb
),
(
'000001',
'TEST001',
'2025-01-07 03:00:00',
public.ST_GeomFromText('LINESTRING M(126.56 37.56 1736190000, 126.58 37.58 1736193600)', 4326),
6.2,
11.2,
13.5,
12,
'{"lat": 37.56, "lon": 126.56, "time": "2025-01-07 03:00:00", "sog": 11.2}'::jsonb,
'{"lat": 37.58, "lon": 126.58, "time": "2025-01-07 04:00:00", "sog": 12.5}'::jsonb
);
-- Scenario 2: anchored vessel
INSERT INTO test_vessel_tracks_hourly_for_daily VALUES
(
'000002',
'TEST002',
'2025-01-07 00:00:00',
public.ST_GeomFromText('LINESTRING M(129.0 35.0 1736179200, 129.0 35.0 1736182800)', 4326),
0.0,
0.0,
0.5,
24,
'{"lat": 35.0, "lon": 129.0, "time": "2025-01-07 00:00:00", "sog": 0.0}'::jsonb,
'{"lat": 35.0, "lon": 129.0, "time": "2025-01-07 01:00:00", "sog": 0.0}'::jsonb
),
(
'000002',
'TEST002',
'2025-01-07 01:00:00',
public.ST_GeomFromText('LINESTRING M(129.0 35.0 1736182800, 129.0 35.0 1736186400)', 4326),
0.0,
0.0,
0.3,
24,
'{"lat": 35.0, "lon": 129.0, "time": "2025-01-07 01:00:00", "sog": 0.0}'::jsonb,
'{"lat": 35.0, "lon": 129.0, "time": "2025-01-07 02:00:00", "sog": 0.0}'::jsonb
);
-- Scenario 3: single-hour data
INSERT INTO test_vessel_tracks_hourly_for_daily VALUES
(
'000003',
'TEST003',
'2025-01-07 00:00:00',
public.ST_GeomFromText('LINESTRING M(130.0 36.0 1736179200, 130.0 36.0 1736179200)', 4326),
0.0,
0.0,
0.0,
2,
'{"lat": 36.0, "lon": 130.0, "time": "2025-01-07 00:00:00", "sog": 0.0}'::jsonb,
'{"lat": 36.0, "lon": 130.0, "time": "2025-01-07 00:00:00", "sog": 0.0}'::jsonb
);
-- 3. Validate the input data
SELECT
'=== INPUT DATA VALIDATION ===' as section,
sig_src_cd,
target_id,
time_bucket,
public.ST_NPoints(track_geom) as points,
public.ST_IsValid(track_geom) as is_valid,
public.ST_AsText(track_geom) as wkt
FROM test_vessel_tracks_hourly_for_daily
ORDER BY sig_src_cd, target_id, time_bucket;
-- 4. Run the actual DailyTrackProcessor SQL (with CAST)
-- Vessel: 000001_TEST001, Day: 2025-01-07
WITH ordered_tracks AS (
SELECT *
FROM test_vessel_tracks_hourly_for_daily
WHERE sig_src_cd = '000001'
AND target_id = 'TEST001'
AND time_bucket >= CAST('2025-01-07 00:00:00' AS timestamp)
AND time_bucket < CAST('2025-01-08 00:00:00' AS timestamp)
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
ORDER BY time_bucket
),
merged_coords AS (
SELECT
sig_src_cd,
target_id,
string_agg(
COALESCE(
substring(public.ST_AsText(track_geom) from 'LINESTRING\\s*M\\s*\\((.+)\\)'),
substring(public.ST_AsText(track_geom) from '\\((.+)\\)')
),
','
ORDER BY time_bucket
) FILTER (WHERE track_geom IS NOT NULL) as all_coords
FROM ordered_tracks
GROUP BY sig_src_cd, target_id
),
merged_tracks AS (
SELECT
mc.sig_src_cd,
mc.target_id,
CAST('2025-01-07 00:00:00' AS timestamp) as time_bucket,
public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom,
(SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
(SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
(SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
(SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
(SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
(SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
FROM merged_coords mc
),
calculated_tracks AS (
SELECT
*,
public.ST_Length(merged_geom::geography) / 1852.0 as total_distance,
CASE
WHEN public.ST_NPoints(merged_geom) > 0 THEN
public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) -
public.ST_M(public.ST_PointN(merged_geom, 1))
ELSE
EXTRACT(EPOCH FROM
CAST(end_pos->>'time' AS timestamp) - CAST(start_pos->>'time' AS timestamp)
)
END as time_diff_seconds
FROM merged_tracks
)
SELECT
'=== DAILY AGGREGATION RESULT (VESSEL 000001_TEST001) ===' as section,
sig_src_cd,
target_id,
time_bucket,
public.ST_NPoints(merged_geom) as merged_points,
public.ST_IsValid(merged_geom) as is_valid,
total_distance,
CASE
WHEN time_diff_seconds > 0 THEN
CAST(LEAST((total_distance / (time_diff_seconds / 3600.0)), 9999.99) AS numeric(6,2))
ELSE 0
END as avg_speed,
max_speed,
total_points,
start_time,
end_time,
start_pos,
end_pos,
public.ST_AsText(merged_geom) as geom_text
FROM calculated_tracks;
-- 5. INSERT test (verifies CAST compatibility)
INSERT INTO test_vessel_tracks_daily
WITH ordered_tracks AS (
SELECT *
FROM test_vessel_tracks_hourly_for_daily
WHERE sig_src_cd = '000001'
AND target_id = 'TEST001'
AND time_bucket >= CAST('2025-01-07 00:00:00' AS timestamp)
AND time_bucket < CAST('2025-01-08 00:00:00' AS timestamp)
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
ORDER BY time_bucket
),
merged_coords AS (
SELECT
sig_src_cd,
target_id,
string_agg(
COALESCE(
substring(public.ST_AsText(track_geom) from 'LINESTRING\\s*M\\s*\\((.+)\\)'),
substring(public.ST_AsText(track_geom) from '\\((.+)\\)')
),
','
ORDER BY time_bucket
) FILTER (WHERE track_geom IS NOT NULL) as all_coords
FROM ordered_tracks
GROUP BY sig_src_cd, target_id
),
merged_tracks AS (
SELECT
mc.sig_src_cd,
mc.target_id,
CAST('2025-01-07 00:00:00' AS timestamp) as time_bucket,
public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom,
(SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
(SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
(SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
(SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
(SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
(SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
FROM merged_coords mc
),
calculated_tracks AS (
SELECT
*,
public.ST_Length(merged_geom::geography) / 1852.0 as total_distance,
CASE
WHEN public.ST_NPoints(merged_geom) > 0 THEN
public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) -
public.ST_M(public.ST_PointN(merged_geom, 1))
ELSE
EXTRACT(EPOCH FROM
CAST(end_pos->>'time' AS timestamp) - CAST(start_pos->>'time' AS timestamp)
)
END as time_diff_seconds
FROM merged_tracks
)
SELECT
sig_src_cd,
target_id,
time_bucket,
merged_geom as track_geom,
total_distance as distance_nm,
CASE
WHEN time_diff_seconds > 0 THEN
CAST(LEAST((total_distance / (time_diff_seconds / 3600.0)), 9999.99) AS numeric(6,2))
ELSE 0
END as avg_speed,
max_speed,
total_points as point_count,
start_pos as start_position,
end_pos as end_position
FROM calculated_tracks;
-- 6. INSERT test for the anchored vessel
INSERT INTO test_vessel_tracks_daily
WITH ordered_tracks AS (
SELECT *
FROM test_vessel_tracks_hourly_for_daily
WHERE sig_src_cd = '000002'
AND target_id = 'TEST002'
AND time_bucket >= CAST('2025-01-07 00:00:00' AS timestamp)
AND time_bucket < CAST('2025-01-08 00:00:00' AS timestamp)
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
ORDER BY time_bucket
),
merged_coords AS (
SELECT
sig_src_cd,
target_id,
string_agg(
COALESCE(
substring(public.ST_AsText(track_geom) from 'LINESTRING\\s*M\\s*\\((.+)\\)'),
substring(public.ST_AsText(track_geom) from '\\((.+)\\)')
),
','
ORDER BY time_bucket
) FILTER (WHERE track_geom IS NOT NULL) as all_coords
FROM ordered_tracks
GROUP BY sig_src_cd, target_id
),
merged_tracks AS (
SELECT
mc.sig_src_cd,
mc.target_id,
CAST('2025-01-07 00:00:00' AS timestamp) as time_bucket,
public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom,
(SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
(SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
(SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
(SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
(SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
(SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
FROM merged_coords mc
),
calculated_tracks AS (
SELECT
*,
public.ST_Length(merged_geom::geography) / 1852.0 as total_distance,
CASE
WHEN public.ST_NPoints(merged_geom) > 0 THEN
public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) -
public.ST_M(public.ST_PointN(merged_geom, 1))
ELSE
EXTRACT(EPOCH FROM
CAST(end_pos->>'time' AS timestamp) - CAST(start_pos->>'time' AS timestamp)
)
END as time_diff_seconds
FROM merged_tracks
)
SELECT
sig_src_cd,
target_id,
time_bucket,
merged_geom as track_geom,
total_distance as distance_nm,
CASE
WHEN time_diff_seconds > 0 THEN
CAST(LEAST((total_distance / (time_diff_seconds / 3600.0)), 9999.99) AS numeric(6,2))
ELSE 0
END as avg_speed,
max_speed,
total_points as point_count,
start_pos as start_position,
end_pos as end_position
FROM calculated_tracks;
-- 7. INSERT test for the single-hour vessel
INSERT INTO test_vessel_tracks_daily
WITH ordered_tracks AS (
SELECT *
FROM test_vessel_tracks_hourly_for_daily
WHERE sig_src_cd = '000003'
AND target_id = 'TEST003'
AND time_bucket >= CAST('2025-01-07 00:00:00' AS timestamp)
AND time_bucket < CAST('2025-01-08 00:00:00' AS timestamp)
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
ORDER BY time_bucket
),
merged_coords AS (
SELECT
sig_src_cd,
target_id,
string_agg(
COALESCE(
substring(public.ST_AsText(track_geom) from 'LINESTRING\\s*M\\s*\\((.+)\\)'),
substring(public.ST_AsText(track_geom) from '\\((.+)\\)')
),
','
ORDER BY time_bucket
) FILTER (WHERE track_geom IS NOT NULL) as all_coords
FROM ordered_tracks
GROUP BY sig_src_cd, target_id
),
merged_tracks AS (
SELECT
mc.sig_src_cd,
mc.target_id,
CAST('2025-01-07 00:00:00' AS timestamp) as time_bucket,
public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom,
(SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
(SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
(SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
(SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
(SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
(SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
FROM merged_coords mc
),
calculated_tracks AS (
SELECT
*,
public.ST_Length(merged_geom::geography) / 1852.0 as total_distance,
CASE
WHEN public.ST_NPoints(merged_geom) > 0 THEN
public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) -
public.ST_M(public.ST_PointN(merged_geom, 1))
ELSE
EXTRACT(EPOCH FROM
CAST(end_pos->>'time' AS timestamp) - CAST(start_pos->>'time' AS timestamp)
)
END as time_diff_seconds
FROM merged_tracks
)
SELECT
sig_src_cd,
target_id,
time_bucket,
merged_geom as track_geom,
total_distance as distance_nm,
CASE
WHEN time_diff_seconds > 0 THEN
CAST(LEAST((total_distance / (time_diff_seconds / 3600.0)), 9999.99) AS numeric(6,2))
ELSE 0
END as avg_speed,
max_speed,
total_points as point_count,
start_pos as start_position,
end_pos as end_position
FROM calculated_tracks;
-- 8. Validate the final results
SELECT
'=== FINAL DAILY AGGREGATION RESULTS ===' as section,
sig_src_cd,
target_id,
time_bucket,
public.ST_NPoints(track_geom) as points,
public.ST_IsValid(track_geom) as is_valid,
distance_nm,
avg_speed,
max_speed,
point_count,
public.ST_AsText(track_geom) as wkt
FROM test_vessel_tracks_daily
ORDER BY sig_src_cd, target_id;
-- 9. Validate data types
SELECT
'=== DATA TYPE VALIDATION ===' as section,
pg_typeof(time_bucket) as time_bucket_type,
pg_typeof(track_geom) as track_geom_type,
pg_typeof(distance_nm) as distance_type,
pg_typeof(avg_speed) as avg_speed_type,
pg_typeof(max_speed) as max_speed_type,
pg_typeof(point_count) as point_count_type,
pg_typeof(start_position) as start_position_type
FROM test_vessel_tracks_daily
LIMIT 1;
-- 10. Validate time ordering (M values must be non-decreasing)
SELECT
'=== TIME ORDERING VALIDATION ===' as section,
sig_src_cd,
target_id,
public.ST_M(public.ST_PointN(track_geom, 1)) as first_m_value,
public.ST_M(public.ST_PointN(track_geom, public.ST_NPoints(track_geom))) as last_m_value,
CASE
WHEN public.ST_M(public.ST_PointN(track_geom, public.ST_NPoints(track_geom))) >=
public.ST_M(public.ST_PointN(track_geom, 1))
THEN 'PASS'
ELSE 'FAIL'
END as time_order_check
FROM test_vessel_tracks_daily;
-- 11. Cleanup
DROP TABLE IF EXISTS test_vessel_tracks_hourly_for_daily CASCADE;
DROP TABLE IF EXISTS test_vessel_tracks_daily CASCADE;
-- ========================================
-- Test complete
-- CAST usage is correct if every INSERT succeeds with no type errors
-- ========================================


@ -0,0 +1,484 @@
-- ========================================
-- Hourly aggregation query validation script
-- Tests CAST usage and type compatibility
-- ========================================
-- 1. Create temporary test tables
DROP TABLE IF EXISTS test_vessel_tracks_5min CASCADE;
DROP TABLE IF EXISTS test_vessel_tracks_hourly CASCADE;
CREATE TABLE test_vessel_tracks_5min (
sig_src_cd VARCHAR(10),
target_id VARCHAR(20),
time_bucket TIMESTAMP,
track_geom geometry(LineStringM, 4326),
distance_nm NUMERIC(10,2),
avg_speed NUMERIC(6,2),
max_speed NUMERIC(6,2),
point_count INTEGER,
start_position JSONB,
end_position JSONB,
PRIMARY KEY (sig_src_cd, target_id, time_bucket)
);
CREATE TABLE test_vessel_tracks_hourly (
sig_src_cd VARCHAR(10),
target_id VARCHAR(20),
time_bucket TIMESTAMP,
track_geom geometry(LineStringM, 4326),
distance_nm NUMERIC(10,2),
avg_speed NUMERIC(6,2),
max_speed NUMERIC(6,2),
point_count INTEGER,
start_position JSONB,
end_position JSONB,
PRIMARY KEY (sig_src_cd, target_id, time_bucket)
);
-- 2. Insert sample data (one hour of 5-minute rows)
-- Scenario 1: normally moving vessel
INSERT INTO test_vessel_tracks_5min VALUES
(
'000001',
'TEST001',
'2025-01-07 10:00:00',
public.ST_GeomFromText('LINESTRING M(126.5 37.5 1736215200, 126.51 37.51 1736215260, 126.52 37.52 1736215320)', 4326),
0.5,
10.5,
12.0,
3,
'{"lat": 37.5, "lon": 126.5, "time": "2025-01-07 10:00:00", "sog": 10.5}'::jsonb,
'{"lat": 37.52, "lon": 126.52, "time": "2025-01-07 10:02:00", "sog": 11.0}'::jsonb
),
(
'000001',
'TEST001',
'2025-01-07 10:05:00',
public.ST_GeomFromText('LINESTRING M(126.52 37.52 1736215500, 126.53 37.53 1736215560, 126.54 37.54 1736215620)', 4326),
0.6,
11.0,
13.0,
3,
'{"lat": 37.52, "lon": 126.52, "time": "2025-01-07 10:05:00", "sog": 11.0}'::jsonb,
'{"lat": 37.54, "lon": 126.54, "time": "2025-01-07 10:07:00", "sog": 12.0}'::jsonb
),
(
'000001',
'TEST001',
'2025-01-07 10:10:00',
public.ST_GeomFromText('LINESTRING M(126.54 37.54 1736215800, 126.55 37.55 1736215860)', 4326),
0.4,
9.5,
11.0,
2,
'{"lat": 37.54, "lon": 126.54, "time": "2025-01-07 10:10:00", "sog": 9.5}'::jsonb,
'{"lat": 37.55, "lon": 126.55, "time": "2025-01-07 10:11:00", "sog": 10.0}'::jsonb
);
-- Scenario 2: anchored vessel (same coordinates repeated)
INSERT INTO test_vessel_tracks_5min VALUES
(
'000002',
'TEST002',
'2025-01-07 10:00:00',
public.ST_GeomFromText('LINESTRING M(129.0 35.0 1736215200, 129.0 35.0 1736215260)', 4326),
0.0,
0.0,
0.5,
2,
'{"lat": 35.0, "lon": 129.0, "time": "2025-01-07 10:00:00", "sog": 0.0}'::jsonb,
'{"lat": 35.0, "lon": 129.0, "time": "2025-01-07 10:01:00", "sog": 0.0}'::jsonb
),
(
'000002',
'TEST002',
'2025-01-07 10:05:00',
public.ST_GeomFromText('LINESTRING M(129.0 35.0 1736215500, 129.0 35.0 1736215560)', 4326),
0.0,
0.0,
0.3,
2,
'{"lat": 35.0, "lon": 129.0, "time": "2025-01-07 10:05:00", "sog": 0.0}'::jsonb,
'{"lat": 35.0, "lon": 129.0, "time": "2025-01-07 10:06:00", "sog": 0.0}'::jsonb
);
-- Scenario 3: single point (a duplicated point forms a valid LineString)
INSERT INTO test_vessel_tracks_5min VALUES
(
'000003',
'TEST003',
'2025-01-07 10:00:00',
public.ST_GeomFromText('LINESTRING M(130.0 36.0 1736215200, 130.0 36.0 1736215200)', 4326),
0.0,
0.0,
0.0,
1,
'{"lat": 36.0, "lon": 130.0, "time": "2025-01-07 10:00:00", "sog": 0.0}'::jsonb,
'{"lat": 36.0, "lon": 130.0, "time": "2025-01-07 10:00:00", "sog": 0.0}'::jsonb
);
-- 3. Validate the input data
SELECT
'=== INPUT DATA VALIDATION ===' as section,
sig_src_cd,
target_id,
time_bucket,
public.ST_NPoints(track_geom) as points,
public.ST_IsValid(track_geom) as is_valid,
public.ST_AsText(track_geom) as wkt
FROM test_vessel_tracks_5min
ORDER BY sig_src_cd, target_id, time_bucket;
-- 4. Run the actual HourlyTrackProcessor SQL (with CAST)
-- Vessel: 000001_TEST001, Hour: 2025-01-07 10:00:00
WITH ordered_tracks AS (
SELECT *
FROM test_vessel_tracks_5min
WHERE sig_src_cd = '000001'
AND target_id = 'TEST001'
AND time_bucket >= CAST('2025-01-07 10:00:00' AS timestamp)
AND time_bucket < CAST('2025-01-07 11:00:00' AS timestamp)
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
ORDER BY time_bucket
),
merged_coords AS (
SELECT
sig_src_cd,
target_id,
string_agg(
COALESCE(
substring(public.ST_AsText(track_geom) from 'LINESTRING\s*M\s*\((.+)\)'),
substring(public.ST_AsText(track_geom) from '\((.+)\)')
),
','
ORDER BY time_bucket
) FILTER (WHERE track_geom IS NOT NULL) as all_coords
FROM ordered_tracks
GROUP BY sig_src_cd, target_id
),
merged_tracks AS (
SELECT
mc.sig_src_cd,
mc.target_id,
CAST('2025-01-07 10:00:00' AS timestamp) as time_bucket,
public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom,
(SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
(SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
(SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
(SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
(SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
(SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
FROM merged_coords mc
),
calculated_tracks AS (
SELECT
*,
public.ST_Length(merged_geom::geography) / 1852.0 as total_distance,
CASE
WHEN public.ST_NPoints(merged_geom) > 0 THEN
public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) -
public.ST_M(public.ST_PointN(merged_geom, 1))
ELSE
EXTRACT(EPOCH FROM
CAST(end_pos->>'time' AS timestamp) - CAST(start_pos->>'time' AS timestamp)
)
END as time_diff_seconds
FROM merged_tracks
)
SELECT
'=== HOURLY AGGREGATION RESULT (VESSEL 000001_TEST001) ===' as section,
sig_src_cd,
target_id,
time_bucket,
public.ST_NPoints(merged_geom) as merged_points,
public.ST_IsValid(merged_geom) as is_valid,
total_distance,
CASE
WHEN time_diff_seconds > 0 THEN
CAST(LEAST((total_distance / (time_diff_seconds / 3600.0)), 9999.99) AS numeric(6,2))
ELSE 0
END as avg_speed,
max_speed,
total_points,
start_time,
end_time,
start_pos,
end_pos,
public.ST_AsText(merged_geom) as geom_text
FROM calculated_tracks;
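위 avg_speed CASE 로직(해리 거리 / 시간, numeric(6,2)에 맞춰 9999.99로 상한)을 Python으로 옮긴 스케치입니다. 함수명은 설명용 가정입니다:

```python
def avg_speed_knots(distance_nm, time_diff_seconds, cap=9999.99):
    """SQL CASE와 동일: NM / 시간(h), numeric(6,2)에 맞춰 상한 적용."""
    if time_diff_seconds <= 0:
        return 0.0
    return round(min(distance_nm / (time_diff_seconds / 3600.0), cap), 2)

print(avg_speed_knots(10.0, 3600))  # 1시간에 10NM -> 10.0노트
print(avg_speed_knots(500.0, 60))   # 비정상적으로 큰 값은 9999.99로 캡
print(avg_speed_knots(5.0, 0))      # 시간차 0이면 0.0
```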
-- 5. INSERT 테스트 (CAST 호환성 검증)
INSERT INTO test_vessel_tracks_hourly
WITH ordered_tracks AS (
SELECT *
FROM test_vessel_tracks_5min
WHERE sig_src_cd = '000001'
AND target_id = 'TEST001'
AND time_bucket >= CAST('2025-01-07 10:00:00' AS timestamp)
AND time_bucket < CAST('2025-01-07 11:00:00' AS timestamp)
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
ORDER BY time_bucket
),
merged_coords AS (
SELECT
sig_src_cd,
target_id,
string_agg(
COALESCE(
substring(public.ST_AsText(track_geom) from 'LINESTRING\s*M\s*\((.+)\)'),
substring(public.ST_AsText(track_geom) from '\((.+)\)')
),
','
ORDER BY time_bucket
) FILTER (WHERE track_geom IS NOT NULL) as all_coords
FROM ordered_tracks
GROUP BY sig_src_cd, target_id
),
merged_tracks AS (
SELECT
mc.sig_src_cd,
mc.target_id,
CAST('2025-01-07 10:00:00' AS timestamp) as time_bucket,
public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom,
(SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
(SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
(SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
(SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
(SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
(SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
FROM merged_coords mc
),
calculated_tracks AS (
SELECT
*,
public.ST_Length(merged_geom::geography) / 1852.0 as total_distance,
CASE
WHEN public.ST_NPoints(merged_geom) > 0 THEN
public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) -
public.ST_M(public.ST_PointN(merged_geom, 1))
ELSE
EXTRACT(EPOCH FROM
CAST(end_pos->>'time' AS timestamp) - CAST(start_pos->>'time' AS timestamp)
)
END as time_diff_seconds
FROM merged_tracks
)
SELECT
sig_src_cd,
target_id,
time_bucket,
merged_geom as track_geom,
total_distance as distance_nm,
CASE
WHEN time_diff_seconds > 0 THEN
CAST(LEAST((total_distance / (time_diff_seconds / 3600.0)), 9999.99) AS numeric(6,2))
ELSE 0
END as avg_speed,
max_speed,
total_points as point_count,
start_pos as start_position,
end_pos as end_position
FROM calculated_tracks;
-- 6. 정박 선박 INSERT 테스트
INSERT INTO test_vessel_tracks_hourly
WITH ordered_tracks AS (
SELECT *
FROM test_vessel_tracks_5min
WHERE sig_src_cd = '000002'
AND target_id = 'TEST002'
AND time_bucket >= CAST('2025-01-07 10:00:00' AS timestamp)
AND time_bucket < CAST('2025-01-07 11:00:00' AS timestamp)
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
ORDER BY time_bucket
),
merged_coords AS (
SELECT
sig_src_cd,
target_id,
string_agg(
COALESCE(
substring(public.ST_AsText(track_geom) from 'LINESTRING\s*M\s*\((.+)\)'),
substring(public.ST_AsText(track_geom) from '\((.+)\)')
),
','
ORDER BY time_bucket
) FILTER (WHERE track_geom IS NOT NULL) as all_coords
FROM ordered_tracks
GROUP BY sig_src_cd, target_id
),
merged_tracks AS (
SELECT
mc.sig_src_cd,
mc.target_id,
CAST('2025-01-07 10:00:00' AS timestamp) as time_bucket,
public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom,
(SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
(SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
(SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
(SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
(SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
(SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
FROM merged_coords mc
),
calculated_tracks AS (
SELECT
*,
public.ST_Length(merged_geom::geography) / 1852.0 as total_distance,
CASE
WHEN public.ST_NPoints(merged_geom) > 0 THEN
public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) -
public.ST_M(public.ST_PointN(merged_geom, 1))
ELSE
EXTRACT(EPOCH FROM
CAST(end_pos->>'time' AS timestamp) - CAST(start_pos->>'time' AS timestamp)
)
END as time_diff_seconds
FROM merged_tracks
)
SELECT
sig_src_cd,
target_id,
time_bucket,
merged_geom as track_geom,
total_distance as distance_nm,
CASE
WHEN time_diff_seconds > 0 THEN
CAST(LEAST((total_distance / (time_diff_seconds / 3600.0)), 9999.99) AS numeric(6,2))
ELSE 0
END as avg_speed,
max_speed,
total_points as point_count,
start_pos as start_position,
end_pos as end_position
FROM calculated_tracks;
-- 7. 단일 포인트 선박 INSERT 테스트
INSERT INTO test_vessel_tracks_hourly
WITH ordered_tracks AS (
SELECT *
FROM test_vessel_tracks_5min
WHERE sig_src_cd = '000003'
AND target_id = 'TEST003'
AND time_bucket >= CAST('2025-01-07 10:00:00' AS timestamp)
AND time_bucket < CAST('2025-01-07 11:00:00' AS timestamp)
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
ORDER BY time_bucket
),
merged_coords AS (
SELECT
sig_src_cd,
target_id,
string_agg(
COALESCE(
substring(public.ST_AsText(track_geom) from 'LINESTRING\s*M\s*\((.+)\)'),
substring(public.ST_AsText(track_geom) from '\((.+)\)')
),
','
ORDER BY time_bucket
) FILTER (WHERE track_geom IS NOT NULL) as all_coords
FROM ordered_tracks
GROUP BY sig_src_cd, target_id
),
merged_tracks AS (
SELECT
mc.sig_src_cd,
mc.target_id,
CAST('2025-01-07 10:00:00' AS timestamp) as time_bucket,
public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom,
(SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
(SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
(SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
(SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
(SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
(SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
FROM merged_coords mc
),
calculated_tracks AS (
SELECT
*,
public.ST_Length(merged_geom::geography) / 1852.0 as total_distance,
CASE
WHEN public.ST_NPoints(merged_geom) > 0 THEN
public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) -
public.ST_M(public.ST_PointN(merged_geom, 1))
ELSE
EXTRACT(EPOCH FROM
CAST(end_pos->>'time' AS timestamp) - CAST(start_pos->>'time' AS timestamp)
)
END as time_diff_seconds
FROM merged_tracks
)
SELECT
sig_src_cd,
target_id,
time_bucket,
merged_geom as track_geom,
total_distance as distance_nm,
CASE
WHEN time_diff_seconds > 0 THEN
CAST(LEAST((total_distance / (time_diff_seconds / 3600.0)), 9999.99) AS numeric(6,2))
ELSE 0
END as avg_speed,
max_speed,
total_points as point_count,
start_pos as start_position,
end_pos as end_position
FROM calculated_tracks;
-- 8. 최종 결과 검증
SELECT
'=== FINAL HOURLY AGGREGATION RESULTS ===' as section,
sig_src_cd,
target_id,
time_bucket,
public.ST_NPoints(track_geom) as points,
public.ST_IsValid(track_geom) as is_valid,
distance_nm,
avg_speed,
max_speed,
point_count,
public.ST_AsText(track_geom) as wkt
FROM test_vessel_tracks_hourly
ORDER BY sig_src_cd, target_id;
-- 9. 타입 검증
SELECT
'=== DATA TYPE VALIDATION ===' as section,
pg_typeof(time_bucket) as time_bucket_type,
pg_typeof(track_geom) as track_geom_type,
pg_typeof(distance_nm) as distance_type,
pg_typeof(avg_speed) as avg_speed_type,
pg_typeof(max_speed) as max_speed_type,
pg_typeof(point_count) as point_count_type,
pg_typeof(start_position) as start_position_type
FROM test_vessel_tracks_hourly
LIMIT 1;
-- 10. 시간 순서 검증 (M값이 증가하는지 확인)
SELECT
'=== TIME ORDERING VALIDATION ===' as section,
sig_src_cd,
target_id,
public.ST_M(public.ST_PointN(track_geom, 1)) as first_m_value,
public.ST_M(public.ST_PointN(track_geom, public.ST_NPoints(track_geom))) as last_m_value,
CASE
WHEN public.ST_M(public.ST_PointN(track_geom, public.ST_NPoints(track_geom))) >=
public.ST_M(public.ST_PointN(track_geom, 1))
THEN 'PASS'
ELSE 'FAIL'
END as time_order_check
FROM test_vessel_tracks_hourly;
-- 11. 정리
DROP TABLE IF EXISTS test_vessel_tracks_5min CASCADE;
DROP TABLE IF EXISTS test_vessel_tracks_hourly CASCADE;
-- ========================================
-- 테스트 완료
-- 모든 INSERT가 성공하고 타입 에러가 없으면 CAST 사용이 정상
-- ========================================
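위 스크립트가 string_agg + 정규식으로 수행하는 병합(각 5분 트랙의 좌표 목록만 추출해 쉼표로 이어 붙인 뒤 다시 LINESTRING M으로 감싸는 과정)을 Python으로 재현한 스케치입니다. 함수명과 샘플 좌표는 설명용 가정입니다:

```python
import re

# SQL이 5분 트랙마다 적용하는 것과 같은 추출:
# "LINESTRING M( ... )" 래퍼를 벗기고 좌표 목록만 남긴 뒤 쉼표로 연결
COORDS_RE = re.compile(r'LINESTRING\s*M\s*\((.+)\)')

def merge_linestring_m(wkts):
    coords = [COORDS_RE.search(w).group(1) for w in wkts]
    return 'LINESTRING M(' + ','.join(coords) + ')'

tracks = [
    'LINESTRING M(126.5 37.5 1736215200,126.51 37.51 1736215260)',
    'LINESTRING M(126.52 37.52 1736215500,126.53 37.53 1736215560)',
]
print(merge_linestring_m(tracks))
```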


@ -0,0 +1,274 @@
-- ========================================
-- 실제 테이블 데이터로 CAST 호환성 테스트
-- ========================================
-- 1. 최근 5분 데이터 샘플 확인 (100개)
SELECT
'=== SAMPLE 5MIN DATA ===' as section,
sig_src_cd,
target_id,
time_bucket,
public.ST_NPoints(track_geom) as points,
public.ST_IsValid(track_geom) as is_valid
FROM signal.t_vessel_tracks_5min
WHERE track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
ORDER BY time_bucket DESC
LIMIT 100;
-- 2. 테스트할 선박 선정 (최근 1시간 내 5분 데이터가 있는 선박)
WITH recent_vessels AS (
SELECT
sig_src_cd,
target_id,
DATE_TRUNC('hour', time_bucket) as hour_bucket,
COUNT(*) as record_count,
MIN(time_bucket) as min_time,
MAX(time_bucket) as max_time
FROM signal.t_vessel_tracks_5min
WHERE time_bucket >= CURRENT_TIMESTAMP - INTERVAL '24 hours'
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
GROUP BY sig_src_cd, target_id, DATE_TRUNC('hour', time_bucket)
HAVING COUNT(*) >= 2
ORDER BY hour_bucket DESC
LIMIT 10
)
SELECT
'=== TEST CANDIDATE VESSELS ===' as section,
sig_src_cd,
target_id,
hour_bucket,
record_count,
min_time,
max_time
FROM recent_vessels;
-- 3. 특정 선박의 5분 데이터 상세 확인
-- 아래 값들을 위 결과에서 선택해서 수정하세요
-- 예시: sig_src_cd = '000019', target_id = '111440547', hour_bucket = '2025-01-07 10:00:00'
\set test_sig_src_cd '000019'
\set test_target_id '111440547'
\set test_hour_start '''2025-01-07 10:00:00'''
\set test_hour_end '''2025-01-07 11:00:00'''
SELECT
'=== 5MIN DATA FOR TEST VESSEL ===' as section,
sig_src_cd,
target_id,
time_bucket,
public.ST_NPoints(track_geom) as points,
public.ST_IsValid(track_geom) as is_valid,
public.ST_GeometryType(track_geom) as geom_type,
public.ST_AsText(track_geom) as wkt,
substring(public.ST_AsText(track_geom) from 'LINESTRING\s*M\s*\((.+)\)') as regex_v1,
COALESCE(
substring(public.ST_AsText(track_geom) from 'LINESTRING\s*M\s*\((.+)\)'),
substring(public.ST_AsText(track_geom) from '\((.+)\)')
) as regex_v2
FROM signal.t_vessel_tracks_5min
WHERE sig_src_cd = :'test_sig_src_cd'
AND target_id = :'test_target_id'
AND time_bucket >= CAST(:test_hour_start AS timestamp)
AND time_bucket < CAST(:test_hour_end AS timestamp)
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
ORDER BY time_bucket;
-- 4. string_agg 결과 확인
SELECT
'=== STRING_AGG TEST ===' as section,
sig_src_cd,
target_id,
string_agg(
COALESCE(
substring(public.ST_AsText(track_geom) from 'LINESTRING\s*M\s*\((.+)\)'),
substring(public.ST_AsText(track_geom) from '\((.+)\)')
),
','
ORDER BY time_bucket
) FILTER (WHERE track_geom IS NOT NULL) as all_coords,
COUNT(*) as track_count
FROM signal.t_vessel_tracks_5min
WHERE sig_src_cd = :'test_sig_src_cd'
AND target_id = :'test_target_id'
AND time_bucket >= CAST(:test_hour_start AS timestamp)
AND time_bucket < CAST(:test_hour_end AS timestamp)
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
GROUP BY sig_src_cd, target_id;
-- 5. 병합된 WKT로 geometry 생성 테스트
WITH ordered_tracks AS (
SELECT *
FROM signal.t_vessel_tracks_5min
WHERE sig_src_cd = :'test_sig_src_cd'
AND target_id = :'test_target_id'
AND time_bucket >= CAST(:test_hour_start AS timestamp)
AND time_bucket < CAST(:test_hour_end AS timestamp)
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
ORDER BY time_bucket
),
merged_coords AS (
SELECT
sig_src_cd,
target_id,
string_agg(
COALESCE(
substring(public.ST_AsText(track_geom) from 'LINESTRING\s*M\s*\((.+)\)'),
substring(public.ST_AsText(track_geom) from '\((.+)\)')
),
','
ORDER BY time_bucket
) FILTER (WHERE track_geom IS NOT NULL) as all_coords
FROM ordered_tracks
GROUP BY sig_src_cd, target_id
)
SELECT
'=== WKT GENERATION TEST ===' as section,
sig_src_cd,
target_id,
'LINESTRING M(' || all_coords || ')' as full_wkt,
LENGTH(all_coords) as coords_length,
public.ST_GeomFromText('LINESTRING M(' || all_coords || ')', 4326) as test_geom,
public.ST_NPoints(public.ST_GeomFromText('LINESTRING M(' || all_coords || ')', 4326)) as merged_points,
public.ST_IsValid(public.ST_GeomFromText('LINESTRING M(' || all_coords || ')', 4326)) as is_valid
FROM merged_coords;
-- 6. 전체 시간별 집계 쿼리 실행 (SELECT만, INSERT 안함)
WITH ordered_tracks AS (
SELECT *
FROM signal.t_vessel_tracks_5min
WHERE sig_src_cd = :'test_sig_src_cd'
AND target_id = :'test_target_id'
AND time_bucket >= CAST(:test_hour_start AS timestamp)
AND time_bucket < CAST(:test_hour_end AS timestamp)
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
ORDER BY time_bucket
),
merged_coords AS (
SELECT
sig_src_cd,
target_id,
string_agg(
COALESCE(
substring(public.ST_AsText(track_geom) from 'LINESTRING\s*M\s*\((.+)\)'),
substring(public.ST_AsText(track_geom) from '\((.+)\)')
),
','
ORDER BY time_bucket
) FILTER (WHERE track_geom IS NOT NULL) as all_coords
FROM ordered_tracks
GROUP BY sig_src_cd, target_id
),
merged_tracks AS (
SELECT
mc.sig_src_cd,
mc.target_id,
CAST(:test_hour_start AS timestamp) as time_bucket,
public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom,
(SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
(SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
(SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
(SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
(SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
(SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
FROM merged_coords mc
),
calculated_tracks AS (
SELECT
*,
public.ST_Length(merged_geom::geography) / 1852.0 as total_distance,
CASE
WHEN public.ST_NPoints(merged_geom) > 0 THEN
public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) -
public.ST_M(public.ST_PointN(merged_geom, 1))
ELSE
EXTRACT(EPOCH FROM
CAST(end_pos->>'time' AS timestamp) - CAST(start_pos->>'time' AS timestamp)
)
END as time_diff_seconds
FROM merged_tracks
)
SELECT
'=== FULL HOURLY AGGREGATION TEST ===' as section,
sig_src_cd,
target_id,
time_bucket,
public.ST_NPoints(merged_geom) as merged_points,
public.ST_IsValid(merged_geom) as is_valid,
total_distance,
CASE
WHEN time_diff_seconds > 0 THEN
CAST(LEAST((total_distance / (time_diff_seconds / 3600.0)), 9999.99) AS numeric(6,2))
ELSE 0
END as avg_speed,
max_speed,
total_points,
start_time,
end_time,
start_pos,
end_pos,
public.ST_AsText(merged_geom) as geom_text,
time_diff_seconds
FROM calculated_tracks;
-- 7. M값 시간 순서 검증
WITH ordered_tracks AS (
SELECT *
FROM signal.t_vessel_tracks_5min
WHERE sig_src_cd = :'test_sig_src_cd'
AND target_id = :'test_target_id'
AND time_bucket >= CAST(:test_hour_start AS timestamp)
AND time_bucket < CAST(:test_hour_end AS timestamp)
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
ORDER BY time_bucket
),
merged_coords AS (
SELECT
sig_src_cd,
target_id,
string_agg(
COALESCE(
substring(public.ST_AsText(track_geom) from 'LINESTRING\s*M\s*\((.+)\)'),
substring(public.ST_AsText(track_geom) from '\((.+)\)')
),
','
ORDER BY time_bucket
) FILTER (WHERE track_geom IS NOT NULL) as all_coords
FROM ordered_tracks
GROUP BY sig_src_cd, target_id
),
merged_tracks AS (
SELECT
mc.sig_src_cd,
mc.target_id,
public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom
FROM merged_coords mc
)
SELECT
'=== TIME ORDERING CHECK ===' as section,
sig_src_cd,
target_id,
public.ST_M(public.ST_PointN(merged_geom, 1)) as first_m_value,
to_timestamp(public.ST_M(public.ST_PointN(merged_geom, 1))) as first_time,
public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) as last_m_value,
to_timestamp(public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom)))) as last_time,
CASE
WHEN public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) >=
public.ST_M(public.ST_PointN(merged_geom, 1))
THEN 'PASS'
ELSE 'FAIL'
END as time_order_check
FROM merged_tracks;
-- ========================================
-- 사용 방법:
-- 1. 먼저 쿼리 2번 실행해서 테스트할 선박 선택
-- 2. \set 변수 값 수정 (라인 48-51)
-- 3. 전체 스크립트 실행
-- 4. 각 섹션별 결과 확인
-- ========================================
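위 PASS/FAIL 검증은 첫 점과 마지막 점의 M 값만 비교합니다. 모든 연속 쌍을 검사하는 더 엄격한 확인은 다음과 같이 스케치할 수 있습니다(설명용 가정 코드):

```python
import re

def m_values(wkt):
    """LINESTRING M WKT에서 각 점의 세 번째 좌표(M 값)만 추출."""
    coords = re.search(r'\((.+)\)', wkt).group(1)
    return [float(p.split()[2]) for p in coords.split(',')]

def is_time_ordered(wkt):
    """모든 연속 쌍에서 M 값이 비감소인지 검사."""
    ms = m_values(wkt)
    return all(a <= b for a, b in zip(ms, ms[1:]))

wkt = 'LINESTRING M(126.5 37.5 1736215200,126.51 37.51 1736215260,126.52 37.52 1736215500)'
print(is_time_ordered(wkt))  # True
```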


@ -0,0 +1,215 @@
#!/bin/bash
# Vessel Batch 관리 스크립트
# 시작, 중지, 상태 확인 등 기본 관리 기능
# 애플리케이션 경로
APP_HOME="/devdata/apps/bridge-db-monitoring"
JAR_FILE="$APP_HOME/vessel-batch-aggregation.jar"
PID_FILE="$APP_HOME/vessel-batch.pid"
LOG_DIR="$APP_HOME/logs"
# Java 17 경로
JAVA_HOME="/devdata/apps/jdk-17.0.8"
JAVA_BIN="$JAVA_HOME/bin/java"
# 색상 코드
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
# 함수: PID 확인
get_pid() {
if [ -f "$PID_FILE" ]; then
PID=$(cat $PID_FILE)
if kill -0 $PID 2>/dev/null; then
echo $PID
else
rm -f $PID_FILE
echo ""
fi
else
PID=$(pgrep -f "$JAR_FILE")
echo $PID
fi
}
# 함수: 상태 확인
status() {
PID=$(get_pid)
if [ ! -z "$PID" ]; then
echo -e "${GREEN}✓ Vessel Batch is running (PID: $PID)${NC}"
# 프로세스 정보
echo ""
ps -fp $PID
# Health Check
echo ""
echo "Health Check:"
curl -s http://localhost:8090/actuator/health 2>/dev/null | python3 -m json.tool || echo "Health endpoint not available"
# 처리 상태
echo ""
echo "Processing Status:"
if command -v psql >/dev/null 2>&1; then
psql -h localhost -U mda -d mdadb -c "
SELECT
NOW() - MAX(last_update) as processing_delay,
COUNT(*) as vessel_count
FROM signal.t_vessel_latest_position;" 2>/dev/null || echo "Unable to query database"
fi
return 0
else
echo -e "${RED}✗ Vessel Batch is not running${NC}"
return 1
fi
}
# 함수: 시작
start() {
PID=$(get_pid)
if [ ! -z "$PID" ]; then
echo -e "${YELLOW}Vessel Batch is already running (PID: $PID)${NC}"
return 1
fi
echo "Starting Vessel Batch..."
cd $APP_HOME
$APP_HOME/run-on-query-server-dev.sh
}
# 함수: 중지
stop() {
PID=$(get_pid)
if [ -z "$PID" ]; then
echo -e "${YELLOW}Vessel Batch is not running${NC}"
return 1
fi
echo "Stopping Vessel Batch (PID: $PID)..."
kill -15 $PID
# 종료 대기
for i in {1..30}; do
if ! kill -0 $PID 2>/dev/null; then
echo -e "${GREEN}✓ Vessel Batch stopped successfully${NC}"
rm -f $PID_FILE
return 0
fi
echo -n "."
sleep 1
done
echo ""
echo -e "${RED}Process did not stop gracefully, force killing...${NC}"
kill -9 $PID
rm -f $PID_FILE
}
# 함수: 재시작
restart() {
echo "Restarting Vessel Batch..."
stop
sleep 3
start
}
# 함수: 로그 보기
logs() {
if [ ! -d "$LOG_DIR" ]; then
echo "Log directory not found: $LOG_DIR"
return 1
fi
echo "Available log files:"
ls -lh $LOG_DIR/*.log 2>/dev/null
echo ""
echo "Tailing app.log (Ctrl+C to exit)..."
tail -f $LOG_DIR/app.log
}
# 함수: 최근 에러 확인
errors() {
if [ ! -f "$LOG_DIR/app.log" ]; then
echo "Log file not found: $LOG_DIR/app.log"
return 1
fi
echo "Recent errors (last 50 lines with ERROR):"
grep "ERROR" $LOG_DIR/app.log | tail -50
echo ""
echo "Error summary:"
echo "Total errors: $(grep -c "ERROR" $LOG_DIR/app.log)"
echo "Errors today: $(grep "ERROR" $LOG_DIR/app.log | grep "$(date +%Y-%m-%d)" | wc -l)"
}
# 함수: 성능 통계
stats() {
echo "Performance Statistics"
echo "===================="
if [ -f "$LOG_DIR/resource-monitor.csv" ]; then
echo "Recent resource usage:"
tail -5 $LOG_DIR/resource-monitor.csv | column -t -s,
fi
echo ""
echo "Batch job statistics:"
if command -v psql >/dev/null 2>&1; then
psql -h localhost -U mda -d mdadb -c "
SELECT
job_name,
COUNT(*) as executions,
AVG(EXTRACT(EPOCH FROM (end_time - start_time))/60)::numeric(10,2) as avg_duration_min,
MAX(end_time) as last_execution
FROM batch_job_execution je
JOIN batch_job_instance ji ON je.job_instance_id = ji.job_instance_id
WHERE end_time > CURRENT_DATE - INTERVAL '7 days'
GROUP BY job_name;" 2>/dev/null || echo "Unable to query batch statistics"
fi
}
# 메인 로직
case "$1" in
start)
start
;;
stop)
stop
;;
restart)
restart
;;
status)
status
;;
logs)
logs
;;
errors)
errors
;;
stats)
stats
;;
*)
echo "Usage: $0 {start|stop|restart|status|logs|errors|stats}"
echo ""
echo "Commands:"
echo " start - Start the Vessel Batch application"
echo " stop - Stop the Vessel Batch application"
echo " restart - Restart the Vessel Batch application"
echo " status - Check application status and health"
echo " logs - Tail application logs"
echo " errors - Show recent errors from logs"
echo " stats - Show performance statistics"
exit 1
;;
esac
exit $?

파일 보기

@ -0,0 +1,191 @@
#!/bin/bash
# Query DB 서버에서 최적화된 실행 스크립트 (PROD 프로파일)
# Rocky Linux 환경에 맞춰 조정됨
# Java 17 경로 명시적 지정
# 애플리케이션 경로
APP_HOME="/devdata/apps/bridge-db-monitoring"
JAR_FILE="$APP_HOME/vessel-batch-aggregation.jar"
# Java 17 경로
JAVA_HOME="/devdata/apps/jdk-17.0.8"
JAVA_BIN="$JAVA_HOME/bin/java"
# 로그 디렉토리
LOG_DIR="$APP_HOME/logs"
mkdir -p $LOG_DIR
echo "================================================"
echo "Vessel Batch Aggregation - PROD Profile"
echo "Start Time: $(date)"
echo "================================================"
# 경로 확인
echo "Environment Check:"
echo "- App Home: $APP_HOME"
echo "- JAR File: $JAR_FILE"
echo "- Java Path: $JAVA_BIN"
echo "- Java Version: $($JAVA_BIN -version 2>&1 | head -1)"
# JAR 파일 존재 확인
if [ ! -f "$JAR_FILE" ]; then
echo "ERROR: JAR file not found at $JAR_FILE"
exit 1
fi
# Java 실행 파일 확인
if [ ! -x "$JAVA_BIN" ]; then
echo "ERROR: Java not found or not executable at $JAVA_BIN"
exit 1
fi
# 서버 정보 확인
echo ""
echo "Server Info:"
echo "- Hostname: $(hostname)"
echo "- CPU Cores: $(nproc)"
echo "- Total Memory: $(free -h | grep Mem | awk '{print $2}')"
echo "- PostgreSQL Version: $(psql --version 2>/dev/null | head -1 || echo 'PostgreSQL client not in PATH')"
# 환경 변수 설정 (PROD 프로파일)
export SPRING_PROFILES_ACTIVE=prod
# Query DB와 Batch Meta DB를 localhost로 오버라이드
export SPRING_DATASOURCE_QUERY_JDBC_URL="jdbc:postgresql://localhost:5432/mdadb?currentSchema=signal&options=-csearch_path=signal,public&assumeMinServerVersion=12&reWriteBatchedInserts=true"
export SPRING_DATASOURCE_BATCH_JDBC_URL="jdbc:postgresql://localhost:5432/mdadb?currentSchema=public&assumeMinServerVersion=12&reWriteBatchedInserts=true"
# 서버 CPU 코어 수에 따른 병렬 처리 조정
CPU_CORES=$(nproc)
export VESSEL_BATCH_PARTITION_SIZE=$((CPU_CORES * 2))
export VESSEL_BATCH_BULK_INSERT_PARALLEL_THREADS=$((CPU_CORES / 2))
echo ""
echo "Optimized Settings:"
echo "- Active Profile: PROD"
echo "- Partition Size: $VESSEL_BATCH_PARTITION_SIZE"
echo "- Parallel Threads: $VESSEL_BATCH_BULK_INSERT_PARALLEL_THREADS"
echo "- Query DB: localhost (optimized)"
echo "- Batch Meta DB: localhost (optimized)"
# JVM 옵션 (서버 메모리에 맞게 조정)
TOTAL_MEM=$(free -g | grep Mem | awk '{print $2}')
JVM_HEAP=$((TOTAL_MEM / 8))  # 전체 메모리의 1/8 (12.5%) 사용
# 최소 8GB, 최대 16GB로 제한
if [ $JVM_HEAP -lt 8 ]; then
JVM_HEAP=8
elif [ $JVM_HEAP -gt 16 ]; then
JVM_HEAP=16
fi
JAVA_OPTS="-Xms${JVM_HEAP}g -Xmx${JVM_HEAP}g \
-XX:+UseG1GC \
-XX:MaxGCPauseMillis=200 \
-XX:+UseStringDeduplication \
-XX:+ParallelRefProcEnabled \
-XX:ParallelGCThreads=$((CPU_CORES / 2)) \
-XX:ConcGCThreads=$((CPU_CORES / 4)) \
-XX:+HeapDumpOnOutOfMemoryError \
-XX:HeapDumpPath=$LOG_DIR/heapdump.hprof \
-Dfile.encoding=UTF-8 \
-Duser.timezone=Asia/Seoul \
-Djava.security.egd=file:/dev/./urandom \
-Dspring.profiles.active=prod"
echo "- JVM Heap Size: ${JVM_HEAP}GB"
# 기존 프로세스 확인 및 종료
echo ""
echo "Checking for existing process..."
PID=$(pgrep -f "$JAR_FILE")
if [ ! -z "$PID" ]; then
echo "Stopping existing process (PID: $PID)..."
kill -15 $PID
# 프로세스 종료 대기 (최대 30초)
for i in {1..30}; do
if ! kill -0 $PID 2>/dev/null; then
echo "Process stopped successfully."
break
fi
if [ $i -eq 30 ]; then
echo "Force killing process..."
kill -9 $PID
fi
sleep 1
done
fi
# 작업 디렉토리로 이동
cd $APP_HOME
# 애플리케이션 실행 (nice로 우선순위 조정)
echo ""
echo "Starting application with PROD profile..."
echo "Command: nice -n 10 $JAVA_BIN $JAVA_OPTS -jar $JAR_FILE"
echo ""
# nohup으로 백그라운드 실행
nohup nice -n 10 $JAVA_BIN $JAVA_OPTS -jar $JAR_FILE \
> $LOG_DIR/app.log 2>&1 &
NEW_PID=$!
echo "Application started with PID: $NEW_PID"
# PID 파일 생성
echo $NEW_PID > $APP_HOME/vessel-batch.pid
# 시작 확인 (30초 대기)
echo "Waiting for application startup..."
STARTUP_SUCCESS=false
for i in {1..30}; do
if grep -q "Started SignalBatchApplication" $LOG_DIR/app.log 2>/dev/null; then
echo "✅ Application started successfully!"
STARTUP_SUCCESS=true
break
fi
echo -n "."
sleep 1
done
if [ "$STARTUP_SUCCESS" = false ]; then
echo ""
echo "⚠️ Application startup timeout. Check logs for errors."
echo "Log file: $LOG_DIR/app.log"
tail -20 $LOG_DIR/app.log
fi
echo ""
echo "================================================"
echo "Deployment Complete!"
echo "- Profile: PROD"
echo "- PID: $NEW_PID"
echo "- PID File: $APP_HOME/vessel-batch.pid"
echo "- Log: $LOG_DIR/app.log"
echo "- Monitor: tail -f $LOG_DIR/app.log"
echo "================================================"
# 초기 상태 확인
sleep 5
echo ""
echo "Initial Status Check:"
curl -s http://localhost:8090/actuator/health 2>/dev/null | python3 -m json.tool || echo "Health endpoint not yet available"
# 활성 프로파일 확인
echo ""
echo "Active Profile Check:"
curl -s http://localhost:8090/actuator/env | grep -A 5 "activeProfiles" 2>/dev/null || echo "Env endpoint not yet available"
# 리소스 사용량 표시
echo ""
echo "Resource Usage:"
ps -fp $NEW_PID
# 빠른 명령어 안내
echo ""
echo "Useful Commands:"
echo "- Stop: kill -15 \$(cat $APP_HOME/vessel-batch.pid)"
echo "- Logs: tail -f $LOG_DIR/app.log"
echo "- Status: curl http://localhost:8090/actuator/health"
echo "- Monitor: $APP_HOME/monitor-query-server.sh"


@ -0,0 +1,175 @@
#!/usr/bin/env python3
"""
WebSocket 부하 테스트 자동화 스크립트
"""
import asyncio
import json
import time
import statistics
from datetime import datetime, timedelta
import websockets
import stomper
from concurrent.futures import ThreadPoolExecutor
class WebSocketLoadTest:
def __init__(self, base_url="ws://10.26.252.48:8090/ws-tracks"):
self.base_url = base_url
self.results = []
self.active_connections = 0
async def single_client_test(self, client_id, duration_seconds=60):
"""단일 클라이언트 테스트"""
start_time = time.time()
messages_received = 0
bytes_received = 0
errors = 0
try:
async with websockets.connect(self.base_url) as websocket:
self.active_connections += 1
print(f"Client {client_id}: Connected")
# STOMP CONNECT (stomper.connect()는 username/password 인자를 요구하므로 1.2 프레임을 직접 구성)
connect_frame = "CONNECT\naccept-version:1.2\nhost:/\n\n\x00"
await websocket.send(connect_frame)
# Subscribe to data channel
sub_frame = stomper.subscribe('/user/queue/tracks/data', client_id)
await websocket.send(sub_frame)
# Send query request
query_request = {
"startTime": (datetime.now() - timedelta(days=1)).isoformat(),
"endTime": datetime.now().isoformat(),
"viewport": {
"minLon": 124.0,
"maxLon": 132.0,
"minLat": 33.0,
"maxLat": 38.0
},
"filters": {
"minDistance": 10,
"minSpeed": 5
},
"chunkSize": 2000
}
send_frame = stomper.send('/app/tracks/query', json.dumps(query_request))
await websocket.send(send_frame)
# Receive messages
while time.time() - start_time < duration_seconds:
try:
message = await asyncio.wait_for(websocket.recv(), timeout=1.0)
messages_received += 1
bytes_received += len(message)
# Parse STOMP frame (stomper.unpack_frame은 cmd/headers/body dict를 반환)
frame = stomper.unpack_frame(message)
if frame['cmd'] == 'MESSAGE':
data = json.loads(frame['body'])
if data.get('type') == 'complete':
print(f"Client {client_id}: Query completed")
break
except asyncio.TimeoutError:
continue
except Exception as e:
errors += 1
print(f"Client {client_id}: Error - {e}")
except Exception as e:
errors += 1
print(f"Client {client_id}: Connection error - {e}")
finally:
self.active_connections -= 1
# Calculate results
elapsed_time = time.time() - start_time
result = {
'client_id': client_id,
'duration': elapsed_time,
'messages': messages_received,
'bytes': bytes_received,
'errors': errors,
'msg_per_sec': messages_received / elapsed_time if elapsed_time > 0 else 0,
'mbps': (bytes_received / 1024 / 1024) / elapsed_time if elapsed_time > 0 else 0
}
self.results.append(result)
return result
async def run_load_test(self, num_clients=10, duration=60):
"""병렬 부하 테스트 실행"""
print(f"Starting load test with {num_clients} clients for {duration} seconds...")
tasks = []
for i in range(num_clients):
task = asyncio.create_task(self.single_client_test(i, duration))
tasks.append(task)
await asyncio.sleep(0.1) # Stagger connections
# Wait for all clients to complete
await asyncio.gather(*tasks)
# Print summary
self.print_summary()
def print_summary(self):
"""테스트 결과 요약 출력"""
print("\n" + "="*60)
print("LOAD TEST SUMMARY")
print("="*60)
total_messages = sum(r['messages'] for r in self.results)
total_bytes = sum(r['bytes'] for r in self.results)
total_errors = sum(r['errors'] for r in self.results)
avg_msg_per_sec = statistics.mean(r['msg_per_sec'] for r in self.results)
avg_mbps = statistics.mean(r['mbps'] for r in self.results)
print(f"Total Clients: {len(self.results)}")
print(f"Total Messages: {total_messages:,}")
print(f"Total Data: {total_bytes/1024/1024:.2f} MB")
print(f"Total Errors: {total_errors}")
print(f"Avg Messages/sec per client: {avg_msg_per_sec:.2f}")
print(f"Avg Throughput per client: {avg_mbps:.2f} MB/s")
print(f"Total Throughput: {avg_mbps * len(self.results):.2f} MB/s")
# Error rate
error_rate = (total_errors / len(self.results)) * 100 if self.results else 0
print(f"Error Rate: {error_rate:.2f}%")
# Success rate
successful_clients = sum(1 for r in self.results if r['errors'] == 0)
success_rate = (successful_clients / len(self.results)) * 100 if self.results else 0
print(f"Success Rate: {success_rate:.2f}%")
print("="*60)
async def main():
# Test scenarios
scenarios = [
{"clients": 10, "duration": 60, "name": "Light Load"},
{"clients": 50, "duration": 120, "name": "Medium Load"},
{"clients": 100, "duration": 180, "name": "Heavy Load"}
]
for scenario in scenarios:
print(f"\n{'='*60}")
print(f"Running scenario: {scenario['name']}")
print(f"{'='*60}")
tester = WebSocketLoadTest()
await tester.run_load_test(
num_clients=scenario['clients'],
duration=scenario['duration']
)
# Wait between scenarios
print(f"\nWaiting 30 seconds before next scenario...")
await asyncio.sleep(30)
if __name__ == "__main__":
asyncio.run(main())

View File

@ -0,0 +1,584 @@
-- ============================================================
-- gc-signal-batch V2: SNP API-based schema (fresh install)
-- Target DB: snpdb (211.208.115.83), schema: signal
--
-- Key changes:
--   sig_src_cd + target_id → single mmsi VARCHAR(20) identifier
--   t_vessel_latest_position → t_ais_position (new structure)
--   New: t_vessel_static (static-info history)
--
-- Before running, verify:
--   1. The PostGIS extension is installed
--   2. The signal schema exists
--   3. Partition tables are created at runtime by PartitionManager
-- ============================================================
-- Create the schema
CREATE SCHEMA IF NOT EXISTS signal;
-- Enable the PostGIS extension
CREATE EXTENSION IF NOT EXISTS postgis;
-- ============================================================
-- 1. AIS position/static info (SNP API only, new)
-- ============================================================
-- t_ais_position: latest AIS position (one row per MMSI, UPSERT)
-- Used for: cache restore, latest-position lookup by other processes, API-unavailable fallback
-- Updated: the 5-minute aggregation job UPSERTs a cache snapshot
CREATE TABLE IF NOT EXISTS signal.t_ais_position (
mmsi VARCHAR(20) PRIMARY KEY,
imo BIGINT,
name VARCHAR(50),
callsign VARCHAR(20),
vessel_type VARCHAR(50),
extra_info VARCHAR(200),
lat DOUBLE PRECISION NOT NULL,
lon DOUBLE PRECISION NOT NULL,
geom GEOMETRY(POINT, 4326),
heading DOUBLE PRECISION,
sog DOUBLE PRECISION,
cog DOUBLE PRECISION,
rot INTEGER,
length INTEGER,
width INTEGER,
draught DOUBLE PRECISION,
destination VARCHAR(200),
eta TIMESTAMPTZ,
status VARCHAR(50),
message_timestamp TIMESTAMPTZ NOT NULL,
signal_kind_code VARCHAR(10),
class_type VARCHAR(1),
last_update TIMESTAMPTZ DEFAULT NOW()
);
CREATE INDEX IF NOT EXISTS idx_ais_position_geom ON signal.t_ais_position USING GIST (geom);
CREATE INDEX IF NOT EXISTS idx_ais_position_signal_kind ON signal.t_ais_position (signal_kind_code);
CREATE INDEX IF NOT EXISTS idx_ais_position_timestamp ON signal.t_ais_position (message_timestamp);
COMMENT ON TABLE signal.t_ais_position IS 'AIS 최신 위치 (MMSI별 1건, 5분 집계 Job에서 UPSERT)';
COMMENT ON COLUMN signal.t_ais_position.mmsi IS 'MMSI (VARCHAR — 문자 혼합 MMSI 장비 지원)';
COMMENT ON COLUMN signal.t_ais_position.signal_kind_code IS 'MDA 범례코드 (SignalKindCode.resolve 결과)';
-- t_vessel_static: static-info history (tracks spoofing and draught changes)
-- Strategy: COALESCE + CDC hybrid (written by the hourly job)
-- Retention: 90 days
CREATE TABLE IF NOT EXISTS signal.t_vessel_static (
mmsi VARCHAR(20) NOT NULL,
time_bucket TIMESTAMPTZ NOT NULL,
imo BIGINT,
name VARCHAR(50),
callsign VARCHAR(20),
vessel_type VARCHAR(50),
extra_info VARCHAR(200),
length INTEGER,
width INTEGER,
draught DOUBLE PRECISION,
destination VARCHAR(200),
eta TIMESTAMPTZ,
status VARCHAR(50),
signal_kind_code VARCHAR(10),
class_type VARCHAR(1),
PRIMARY KEY (mmsi, time_bucket)
);
CREATE INDEX IF NOT EXISTS idx_vessel_static_mmsi ON signal.t_vessel_static (mmsi);
COMMENT ON TABLE signal.t_vessel_static IS '선박 정적 정보 이력 (시간별, COALESCE+CDC). 보존 90일';
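The COALESCE + CDC strategy above means a new history row is written only when a tracked static field actually changes between snapshots. A minimal sketch of that change-detection step (the field list and helper name are hypothetical, not the project's actual code):

```python
# Hypothetical sketch of the CDC half of the COALESCE+CDC strategy:
# a t_vessel_static row is written only when a tracked static field
# differs from the previous snapshot for the same MMSI.

TRACKED_FIELDS = ("imo", "name", "callsign", "vessel_type",
                  "length", "width", "draught", "destination", "status")

def detect_static_change(previous, current):
    """Return True when a new history row should be written."""
    if previous is None:  # first sighting of this MMSI
        return True
    return any(previous.get(f) != current.get(f) for f in TRACKED_FIELDS)

prev = {"name": "EVER GIVEN", "draught": 14.5, "destination": "ROTTERDAM"}
curr = {"name": "EVER GIVEN", "draught": 15.2, "destination": "ROTTERDAM"}
print(detect_static_change(prev, curr))  # draught changed -> True
```

The COALESCE half (not shown) would fill fields missing from the current message with the last known values before comparison.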
-- ============================================================
-- 2. Core track tables (5-minute / hourly / daily, partitioned)
-- ============================================================
-- t_vessel_tracks_5min: 5-minute tracks (daily partitions)
CREATE TABLE IF NOT EXISTS signal.t_vessel_tracks_5min (
mmsi VARCHAR(20) NOT NULL,
time_bucket TIMESTAMP NOT NULL,
track_geom GEOMETRY(LINESTRINGM, 4326),
distance_nm NUMERIC(10,2),
avg_speed NUMERIC(6,2),
max_speed NUMERIC(6,2),
point_count INTEGER,
start_position JSONB,
end_position JSONB,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT t_vessel_tracks_5min_pkey PRIMARY KEY (mmsi, time_bucket)
) PARTITION BY RANGE (time_bucket);
CREATE INDEX IF NOT EXISTS idx_tracks_5min_mmsi ON signal.t_vessel_tracks_5min (mmsi);
CREATE INDEX IF NOT EXISTS idx_tracks_5min_bucket ON signal.t_vessel_tracks_5min (time_bucket);
COMMENT ON TABLE signal.t_vessel_tracks_5min IS '선박 항적 5분 단위 집계';
COMMENT ON COLUMN signal.t_vessel_tracks_5min.mmsi IS 'MMSI (VARCHAR)';
COMMENT ON COLUMN signal.t_vessel_tracks_5min.track_geom IS 'LineStringM 형식 항적 (M값은 첫 포인트 기준 상대시간 초)';
COMMENT ON COLUMN signal.t_vessel_tracks_5min.start_position IS '시작 위치 JSON {lat, lon, time, sog}';
COMMENT ON COLUMN signal.t_vessel_tracks_5min.end_position IS '종료 위치 JSON {lat, lon, time, sog}';
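As the column comment notes, the M coordinate of each track point holds seconds relative to the bucket's first point. A small sketch of how a consumer could resolve those relative M values to absolute timestamps; the parser and the simplifying assumption that the first point coincides with the bucket start are illustrative only:

```python
from datetime import datetime, timedelta

def track_points_with_time(wkt, bucket_start):
    """Parse 'LINESTRING M (lon lat m, ...)' and resolve each relative
    M value (seconds from the first point) to an absolute timestamp,
    approximating the first point's time with the bucket start."""
    body = wkt[wkt.index("(") + 1 : wkt.rindex(")")]
    points = []
    for triple in body.split(","):
        lon, lat, m = (float(v) for v in triple.split())
        points.append((lon, lat, bucket_start + timedelta(seconds=m)))
    return points

pts = track_points_with_time(
    "LINESTRING M (129.1 35.1 0, 129.2 35.2 120)",
    datetime(2025, 8, 7, 14, 0),
)
print(pts[1][2])  # 2025-08-07 14:02:00
```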
-- t_vessel_tracks_hourly: hourly tracks (monthly partitions)
CREATE TABLE IF NOT EXISTS signal.t_vessel_tracks_hourly (
mmsi VARCHAR(20) NOT NULL,
time_bucket TIMESTAMP NOT NULL,
track_geom GEOMETRY(LINESTRINGM, 4326),
distance_nm NUMERIC(10,2),
avg_speed NUMERIC(6,2),
max_speed NUMERIC(6,2),
point_count INTEGER,
start_position JSONB,
end_position JSONB,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT t_vessel_tracks_hourly_pkey PRIMARY KEY (mmsi, time_bucket)
) PARTITION BY RANGE (time_bucket);
CREATE INDEX IF NOT EXISTS idx_tracks_hourly_mmsi ON signal.t_vessel_tracks_hourly (mmsi);
CREATE INDEX IF NOT EXISTS idx_tracks_hourly_bucket ON signal.t_vessel_tracks_hourly (time_bucket);
CREATE INDEX IF NOT EXISTS idx_tracks_hourly_geom ON signal.t_vessel_tracks_hourly USING GIST (track_geom);
COMMENT ON TABLE signal.t_vessel_tracks_hourly IS '선박 항적 시간별 집계';
-- t_vessel_tracks_daily: daily tracks (monthly partitions)
CREATE TABLE IF NOT EXISTS signal.t_vessel_tracks_daily (
mmsi VARCHAR(20) NOT NULL,
time_bucket DATE NOT NULL,
track_geom GEOMETRY(LINESTRINGM, 4326),
distance_nm NUMERIC(10,2),
avg_speed NUMERIC(6,2),
max_speed NUMERIC(6,2),
point_count INTEGER,
operating_hours NUMERIC(4,2),
port_visits JSONB,
start_position JSONB,
end_position JSONB,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT t_vessel_tracks_daily_pkey PRIMARY KEY (mmsi, time_bucket)
) PARTITION BY RANGE (time_bucket);
CREATE INDEX IF NOT EXISTS idx_tracks_daily_mmsi ON signal.t_vessel_tracks_daily (mmsi);
CREATE INDEX IF NOT EXISTS idx_tracks_daily_bucket ON signal.t_vessel_tracks_daily (time_bucket);
CREATE INDEX IF NOT EXISTS idx_tracks_daily_geom ON signal.t_vessel_tracks_daily USING GIST (track_geom);
COMMENT ON TABLE signal.t_vessel_tracks_daily IS '선박 항적 일별 집계';
-- ============================================================
-- 3. Fishing-grid (haegu) tables (partitioned)
-- ============================================================
-- t_haegu_definitions: major fishing-zone (haegu) definitions (regular table)
CREATE TABLE IF NOT EXISTS signal.t_haegu_definitions (
haegu_no INTEGER NOT NULL,
min_lat DOUBLE PRECISION NOT NULL,
min_lon DOUBLE PRECISION NOT NULL,
max_lat DOUBLE PRECISION NOT NULL,
max_lon DOUBLE PRECISION NOT NULL,
center_lat DOUBLE PRECISION NOT NULL,
center_lon DOUBLE PRECISION NOT NULL,
geom GEOMETRY(MULTIPOLYGON, 4326) NOT NULL,
center_point GEOMETRY(POINT, 4326) NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT t_haegu_definitions_pkey PRIMARY KEY (haegu_no)
);
CREATE INDEX IF NOT EXISTS idx_haegu_definitions_geom ON signal.t_haegu_definitions USING GIST (geom);
COMMENT ON TABLE signal.t_haegu_definitions IS '대해구 정의 정보';
-- t_grid_tiles: grid tile definitions (regular table)
CREATE TABLE IF NOT EXISTS signal.t_grid_tiles (
tile_id VARCHAR(50) NOT NULL,
tile_level INTEGER NOT NULL,
haegu_no INTEGER NOT NULL,
sohaegu_no INTEGER,
min_lat DOUBLE PRECISION NOT NULL,
min_lon DOUBLE PRECISION NOT NULL,
max_lat DOUBLE PRECISION NOT NULL,
max_lon DOUBLE PRECISION NOT NULL,
tile_geom GEOMETRY(POLYGON, 4326) NOT NULL,
center_point GEOMETRY(POINT, 4326) NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT t_grid_tiles_pkey PRIMARY KEY (tile_id)
);
CREATE INDEX IF NOT EXISTS idx_grid_tiles_tile_geom ON signal.t_grid_tiles USING GIST (tile_geom);
CREATE INDEX IF NOT EXISTS idx_grid_tiles_haegu ON signal.t_grid_tiles (haegu_no);
CREATE INDEX IF NOT EXISTS idx_grid_tiles_level ON signal.t_grid_tiles (tile_level);
CREATE INDEX IF NOT EXISTS idx_grid_tiles_haegu_sohaegu ON signal.t_grid_tiles (haegu_no, sohaegu_no);
COMMENT ON TABLE signal.t_grid_tiles IS '그리드 타일 정의 (대해구/소해구)';
-- t_grid_vessel_tracks: per-haegu vessel tracks (5-minute, daily partitions)
CREATE TABLE IF NOT EXISTS signal.t_grid_vessel_tracks (
haegu_no INTEGER NOT NULL,
mmsi VARCHAR(20) NOT NULL,
time_bucket TIMESTAMP NOT NULL,
distance_nm NUMERIC(10,2),
avg_speed NUMERIC(6,2),
point_count INTEGER,
entry_time TIMESTAMP,
exit_time TIMESTAMP,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT t_grid_vessel_tracks_pkey PRIMARY KEY (haegu_no, mmsi, time_bucket)
) PARTITION BY RANGE (time_bucket);
CREATE INDEX IF NOT EXISTS idx_grid_vessel_tracks_mmsi_time ON signal.t_grid_vessel_tracks (mmsi, time_bucket DESC);
CREATE INDEX IF NOT EXISTS idx_grid_vessel_tracks_haegu_time ON signal.t_grid_vessel_tracks (haegu_no, time_bucket DESC);
COMMENT ON TABLE signal.t_grid_vessel_tracks IS '해구별 선박 항적 (5분 단위)';
-- t_grid_tracks_summary: per-haegu track summary (5-minute, daily partitions)
CREATE TABLE IF NOT EXISTS signal.t_grid_tracks_summary (
haegu_no INTEGER NOT NULL,
time_bucket TIMESTAMP NOT NULL,
total_vessels INTEGER,
total_distance_nm NUMERIC(12,2),
avg_speed NUMERIC(6,2),
vessel_list JSONB,
traffic_density NUMERIC(10,4),
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT t_grid_tracks_summary_pkey PRIMARY KEY (haegu_no, time_bucket)
) PARTITION BY RANGE (time_bucket);
COMMENT ON TABLE signal.t_grid_tracks_summary IS '해구별 5분 단위 항적 요약 통계';
COMMENT ON COLUMN signal.t_grid_tracks_summary.vessel_list IS '선박별 상세 정보 [{mmsi, distance_nm, avg_speed}]';
-- t_grid_tracks_summary_hourly: per-haegu hourly summary (monthly partitions)
CREATE TABLE IF NOT EXISTS signal.t_grid_tracks_summary_hourly (
haegu_no INTEGER NOT NULL,
time_bucket TIMESTAMP NOT NULL,
total_vessels INTEGER,
total_distance_nm NUMERIC(12,2),
avg_speed NUMERIC(6,2),
vessel_list JSONB,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT t_grid_tracks_summary_hourly_pkey PRIMARY KEY (haegu_no, time_bucket)
) PARTITION BY RANGE (time_bucket);
CREATE INDEX IF NOT EXISTS idx_grid_tracks_summary_hourly_time ON signal.t_grid_tracks_summary_hourly (time_bucket DESC, haegu_no);
COMMENT ON TABLE signal.t_grid_tracks_summary_hourly IS '해구별 시간별 항적 요약 통계';
-- t_grid_tracks_summary_daily: per-haegu daily summary (monthly partitions)
CREATE TABLE IF NOT EXISTS signal.t_grid_tracks_summary_daily (
haegu_no INTEGER NOT NULL,
time_bucket DATE NOT NULL,
total_vessels INTEGER,
total_distance_nm NUMERIC(12,2),
avg_speed NUMERIC(6,2),
vessel_list JSONB,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT t_grid_tracks_summary_daily_pkey PRIMARY KEY (haegu_no, time_bucket)
) PARTITION BY RANGE (time_bucket);
CREATE INDEX IF NOT EXISTS idx_grid_tracks_summary_daily_time ON signal.t_grid_tracks_summary_daily (time_bucket DESC, haegu_no);
COMMENT ON TABLE signal.t_grid_tracks_summary_daily IS '해구별 일일 항적 요약 통계';
-- ============================================================
-- 4. Area tables (partitioned)
-- ============================================================
-- t_areas: user-defined areas (regular table)
CREATE TABLE IF NOT EXISTS signal.t_areas (
area_id VARCHAR(50) NOT NULL,
area_name VARCHAR(100) NOT NULL,
area_type VARCHAR(20) NOT NULL,
area_geom GEOMETRY(MULTIPOLYGON, 4326) NOT NULL,
properties JSONB,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT t_areas_pkey PRIMARY KEY (area_id)
);
CREATE INDEX IF NOT EXISTS idx_t_areas_area_geom ON signal.t_areas USING GIST (area_geom);
COMMENT ON TABLE signal.t_areas IS '사용자 정의 영역 정보';
-- t_area_vessel_tracks: per-area vessel tracks (5-minute, daily partitions)
CREATE TABLE IF NOT EXISTS signal.t_area_vessel_tracks (
area_id VARCHAR(50) NOT NULL,
mmsi VARCHAR(20) NOT NULL,
time_bucket TIMESTAMP NOT NULL,
distance_nm NUMERIC(10,2),
avg_speed NUMERIC(6,2),
point_count INTEGER,
metrics JSONB,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT t_area_vessel_tracks_pkey PRIMARY KEY (area_id, mmsi, time_bucket)
) PARTITION BY RANGE (time_bucket);
CREATE INDEX IF NOT EXISTS idx_area_vessel_tracks_mmsi_time ON signal.t_area_vessel_tracks (mmsi, time_bucket DESC);
CREATE INDEX IF NOT EXISTS idx_area_vessel_tracks_area_time ON signal.t_area_vessel_tracks (area_id, time_bucket DESC);
COMMENT ON TABLE signal.t_area_vessel_tracks IS '영역별 선박 항적 (5분 단위)';
-- t_area_tracks_summary: per-area track summary (5-minute, daily partitions)
CREATE TABLE IF NOT EXISTS signal.t_area_tracks_summary (
area_id VARCHAR(50) NOT NULL,
time_bucket TIMESTAMP NOT NULL,
total_vessels INTEGER,
total_distance_nm NUMERIC(12,2),
avg_speed NUMERIC(6,2),
vessel_list JSONB,
metrics_summary JSONB,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT t_area_tracks_summary_pkey PRIMARY KEY (area_id, time_bucket)
) PARTITION BY RANGE (time_bucket);
COMMENT ON TABLE signal.t_area_tracks_summary IS '영역별 5분 단위 항적 요약 통계';
COMMENT ON COLUMN signal.t_area_tracks_summary.vessel_list IS '선박별 상세 정보 [{mmsi, distance_nm, avg_speed}]';
-- t_area_tracks_summary_hourly: per-area hourly summary (monthly partitions)
CREATE TABLE IF NOT EXISTS signal.t_area_tracks_summary_hourly (
area_id VARCHAR(50) NOT NULL,
time_bucket TIMESTAMP NOT NULL,
total_vessels INTEGER,
total_distance_nm NUMERIC(12,2),
avg_speed NUMERIC(6,2),
vessel_list JSONB,
metrics_summary JSONB,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT t_area_tracks_summary_hourly_pkey PRIMARY KEY (area_id, time_bucket)
) PARTITION BY RANGE (time_bucket);
CREATE INDEX IF NOT EXISTS idx_area_tracks_summary_hourly_time ON signal.t_area_tracks_summary_hourly (time_bucket DESC, area_id);
COMMENT ON TABLE signal.t_area_tracks_summary_hourly IS '영역별 시간별 항적 요약 통계';
-- t_area_tracks_summary_daily: per-area daily summary (monthly partitions)
CREATE TABLE IF NOT EXISTS signal.t_area_tracks_summary_daily (
area_id VARCHAR(50) NOT NULL,
time_bucket DATE NOT NULL,
total_vessels INTEGER,
total_distance_nm NUMERIC(12,2),
avg_speed NUMERIC(6,2),
vessel_list JSONB,
metrics_summary JSONB,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT t_area_tracks_summary_daily_pkey PRIMARY KEY (area_id, time_bucket)
) PARTITION BY RANGE (time_bucket);
CREATE INDEX IF NOT EXISTS idx_area_tracks_summary_daily_time ON signal.t_area_tracks_summary_daily (time_bucket DESC, area_id);
COMMENT ON TABLE signal.t_area_tracks_summary_daily IS '영역별 일일 항적 요약 통계';
-- t_area_statistics: per-area vessel statistics (5-minute, daily partitions)
CREATE TABLE IF NOT EXISTS signal.t_area_statistics (
area_id VARCHAR(50) NOT NULL,
time_bucket TIMESTAMP NOT NULL,
vessel_count INTEGER DEFAULT 0,
in_count INTEGER DEFAULT 0,
out_count INTEGER DEFAULT 0,
transit_vessels JSONB,
stationary_vessels JSONB,
avg_sog NUMERIC(25,1),
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT t_area_statistics_pkey PRIMARY KEY (area_id, time_bucket)
) PARTITION BY RANGE (time_bucket);
CREATE INDEX IF NOT EXISTS idx_area_stats_lookup ON signal.t_area_statistics (area_id, time_bucket DESC);
COMMENT ON TABLE signal.t_area_statistics IS '영역별 5분 단위 선박 통계';
-- ============================================================
-- 5. Abnormal track tables (partitioned)
-- ============================================================
-- t_abnormal_tracks: abnormal tracks (monthly partitions)
-- id is auto-assigned via GENERATED ALWAYS
CREATE TABLE IF NOT EXISTS signal.t_abnormal_tracks (
id BIGINT GENERATED ALWAYS AS IDENTITY,
mmsi VARCHAR(20) NOT NULL,
time_bucket TIMESTAMP NOT NULL,
track_geom GEOMETRY(LINESTRINGM, 4326),
abnormal_type VARCHAR(50) NOT NULL,
abnormal_reason JSONB NOT NULL,
distance_nm NUMERIC(10,2),
avg_speed NUMERIC(6,2),
max_speed NUMERIC(6,2),
point_count INTEGER,
source_table VARCHAR(50) NOT NULL,
detected_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT t_abnormal_tracks_pkey PRIMARY KEY (id, time_bucket)
) PARTITION BY RANGE (time_bucket);
-- Supports ON CONFLICT (mmsi, time_bucket, source_table)
CREATE UNIQUE INDEX IF NOT EXISTS abnormal_tracks_uk ON signal.t_abnormal_tracks (mmsi, time_bucket, source_table);
CREATE INDEX IF NOT EXISTS idx_abnormal_tracks_mmsi ON signal.t_abnormal_tracks (mmsi);
CREATE INDEX IF NOT EXISTS idx_abnormal_tracks_time ON signal.t_abnormal_tracks (time_bucket);
CREATE INDEX IF NOT EXISTS idx_abnormal_tracks_type ON signal.t_abnormal_tracks (abnormal_type);
CREATE INDEX IF NOT EXISTS idx_abnormal_tracks_geom ON signal.t_abnormal_tracks USING GIST (track_geom);
COMMENT ON TABLE signal.t_abnormal_tracks IS '비정상 선박 항적';
COMMENT ON COLUMN signal.t_abnormal_tracks.mmsi IS 'MMSI (VARCHAR)';
COMMENT ON COLUMN signal.t_abnormal_tracks.abnormal_type IS '비정상 유형 (excessive_speed, teleport, impossible_distance, excessive_avg_speed, gap_jump)';
COMMENT ON COLUMN signal.t_abnormal_tracks.source_table IS '검출 원본 테이블 (t_vessel_tracks_5min/hourly/daily)';
-- t_abnormal_track_stats: daily abnormal-track statistics (regular table)
CREATE TABLE IF NOT EXISTS signal.t_abnormal_track_stats (
stat_date DATE NOT NULL,
abnormal_type VARCHAR(50) NOT NULL,
vessel_count INTEGER NOT NULL,
track_count INTEGER NOT NULL,
total_points INTEGER,
avg_deviation NUMERIC(10,2),
max_deviation NUMERIC(10,2),
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT t_abnormal_track_stats_pkey PRIMARY KEY (stat_date, abnormal_type)
);
CREATE INDEX IF NOT EXISTS idx_abnormal_track_stats_date ON signal.t_abnormal_track_stats (stat_date);
COMMENT ON TABLE signal.t_abnormal_track_stats IS '비정상 항적 일별 통계';
-- ============================================================
-- 6. Tile summary table (partitioned)
-- ============================================================
-- t_tile_summary: per-tile vessel summary (5-minute, daily partitions)
-- UNIQUE index added to support ON CONFLICT (tile_id, time_bucket)
CREATE TABLE IF NOT EXISTS signal.t_tile_summary (
tile_id VARCHAR(50) NOT NULL,
tile_level INTEGER NOT NULL,
time_bucket TIMESTAMP NOT NULL,
vessel_count INTEGER DEFAULT 0,
unique_vessels JSONB,
total_points BIGINT DEFAULT 0,
avg_sog NUMERIC(25,1),
max_sog NUMERIC(25,1),
vessel_density NUMERIC(10,6),
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
haegu_no INTEGER,
sohaegu_no INTEGER,
CONSTRAINT t_tile_summary_pkey PRIMARY KEY (tile_id, time_bucket, tile_level)
) PARTITION BY RANGE (time_bucket);
-- ConcurrentUpdateManager uses ON CONFLICT (tile_id, time_bucket)
CREATE UNIQUE INDEX IF NOT EXISTS idx_tile_summary_tile_time_uk ON signal.t_tile_summary (tile_id, time_bucket);
CREATE INDEX IF NOT EXISTS idx_tile_summary_time ON signal.t_tile_summary (time_bucket DESC);
CREATE INDEX IF NOT EXISTS idx_tile_summary_vessel_count ON signal.t_tile_summary (vessel_count DESC);
CREATE INDEX IF NOT EXISTS idx_tile_summary_tile_level ON signal.t_tile_summary (tile_level);
COMMENT ON TABLE signal.t_tile_summary IS '타일별 5분 단위 선박 요약 통계';
COMMENT ON COLUMN signal.t_tile_summary.unique_vessels IS '고유 선박 목록 [{mmsi}]';
-- ============================================================
-- 7. Batch performance metrics (regular table)
-- ============================================================
CREATE TABLE IF NOT EXISTS signal.t_batch_performance_metrics (
id SERIAL PRIMARY KEY,
job_name VARCHAR(100) NOT NULL,
execution_id BIGINT NOT NULL,
start_time TIMESTAMP NOT NULL,
end_time TIMESTAMP,
duration_seconds BIGINT,
total_read BIGINT,
total_write BIGINT,
throughput_per_sec NUMERIC(10,2),
status VARCHAR(20),
error_message TEXT,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX IF NOT EXISTS idx_batch_metrics_job ON signal.t_batch_performance_metrics (job_name, start_time DESC);
CREATE INDEX IF NOT EXISTS idx_batch_metrics_status ON signal.t_batch_performance_metrics (status) WHERE status != 'COMPLETED';
COMMENT ON TABLE signal.t_batch_performance_metrics IS '배치 작업 성능 메트릭';
-- ============================================================
-- 8. Initial partition creation (for manual execution)
--    PartitionManager creates these automatically at runtime,
--    but they can be pre-created manually on first deployment.
-- ============================================================
-- Daily partition creation function
CREATE OR REPLACE FUNCTION signal.create_daily_partition(
parent_table TEXT,
target_date DATE
) RETURNS VOID AS $$
DECLARE
partition_name TEXT;
start_date DATE;
end_date DATE;
BEGIN
partition_name := parent_table || '_' || to_char(target_date, 'YYMMDD');
start_date := target_date;
end_date := target_date + INTERVAL '1 day';
EXECUTE format(
'CREATE TABLE IF NOT EXISTS signal.%I PARTITION OF signal.%I FOR VALUES FROM (%L) TO (%L)',
partition_name, parent_table, start_date, end_date
);
END;
$$ LANGUAGE plpgsql;
-- Monthly partition creation function
CREATE OR REPLACE FUNCTION signal.create_monthly_partition(
parent_table TEXT,
target_date DATE
) RETURNS VOID AS $$
DECLARE
partition_name TEXT;
start_date DATE;
end_date DATE;
BEGIN
partition_name := parent_table || '_' || to_char(target_date, 'YYYY_MM');
start_date := date_trunc('month', target_date);
end_date := date_trunc('month', target_date) + INTERVAL '1 month';
EXECUTE format(
'CREATE TABLE IF NOT EXISTS signal.%I PARTITION OF signal.%I FOR VALUES FROM (%L) TO (%L)',
partition_name, parent_table, start_date, end_date
);
END;
$$ LANGUAGE plpgsql;
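For reference, the partition names produced by the two functions follow `to_char` patterns: `YYMMDD` for daily and `YYYY_MM` for monthly partitions. A hypothetical helper mirroring that naming scheme:

```python
from datetime import date

def daily_partition_name(parent, day):
    # Matches to_char(target_date, 'YYMMDD') in signal.create_daily_partition
    return f"{parent}_{day.strftime('%y%m%d')}"

def monthly_partition_name(parent, day):
    # Matches to_char(target_date, 'YYYY_MM') in signal.create_monthly_partition
    return f"{parent}_{day.strftime('%Y_%m')}"

print(daily_partition_name("t_vessel_tracks_5min", date(2026, 2, 19)))
# t_vessel_tracks_5min_260219
print(monthly_partition_name("t_abnormal_tracks", date(2026, 2, 19)))
# t_abnormal_tracks_2026_02
```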
-- Bulk-create partitions for the current month + next month
DO $$
DECLARE
today DATE := CURRENT_DATE;
day_offset INTEGER;
daily_tables TEXT[] := ARRAY[
't_vessel_tracks_5min',
't_grid_vessel_tracks',
't_grid_tracks_summary',
't_area_vessel_tracks',
't_area_tracks_summary',
't_tile_summary',
't_area_statistics'
];
monthly_tables TEXT[] := ARRAY[
't_vessel_tracks_hourly',
't_vessel_tracks_daily',
't_grid_tracks_summary_hourly',
't_grid_tracks_summary_daily',
't_area_tracks_summary_hourly',
't_area_tracks_summary_daily',
't_abnormal_tracks'
];
tbl TEXT;
BEGIN
-- Daily partitions: 7 days starting today
FOREACH tbl IN ARRAY daily_tables LOOP
FOR day_offset IN 0..6 LOOP
PERFORM signal.create_daily_partition(tbl, today + day_offset);
END LOOP;
END LOOP;
-- Monthly partitions: this month + next month
FOREACH tbl IN ARRAY monthly_tables LOOP
PERFORM signal.create_monthly_partition(tbl, today);
PERFORM signal.create_monthly_partition(tbl, (today + INTERVAL '1 month')::DATE);
END LOOP;
RAISE NOTICE 'Initial partitions created successfully';
END;
$$;
-- ============================================================
-- 9. ANALYZE (collect statistics)
-- ============================================================
ANALYZE signal.t_ais_position;
ANALYZE signal.t_haegu_definitions;
ANALYZE signal.t_grid_tiles;
ANALYZE signal.t_areas;
ANALYZE signal.t_abnormal_track_stats;
ANALYZE signal.t_batch_performance_metrics;

View File

@ -0,0 +1,68 @@
-- Unix timestamp conversion function
CREATE OR REPLACE FUNCTION signal.convert_to_unix_timestamp(
geom geometry,
base_time timestamp without time zone
) RETURNS geometry AS $$
DECLARE
wkt_text text;
points text[];
coords text[];
result_wkt text;
unix_base bigint;
relative_seconds bigint;
unix_time bigint;
i integer;
BEGIN
IF geom IS NULL THEN
RETURN NULL;
END IF;
-- Unix timestamp base value (bucket start in KST)
unix_base := EXTRACT(EPOCH FROM base_time AT TIME ZONE 'Asia/Seoul')::bigint;
-- Extract the WKT text
wkt_text := ST_AsText(geom);
-- Keep only the coordinate list inside LINESTRING M (...)
-- (PostGIS may emit a space between "M" and the opening parenthesis)
wkt_text := substring(wkt_text from 'LINESTRING\s*M\s*\((.*)\)');
-- Split into individual points (comma separator, optional whitespace)
points := regexp_split_to_array(wkt_text, '\s*,\s*');
-- Start the result WKT
result_wkt := 'LINESTRING M(';
-- Process each point
FOR i IN 1..array_length(points, 1) LOOP
-- Split the coordinates on whitespace (lon lat m)
coords := regexp_split_to_array(trim(points[i]), '\s+');
-- Convert the M value (relative seconds) to a Unix timestamp
relative_seconds := coords[3]::bigint;
unix_time := unix_base + relative_seconds;
-- Append to the result
IF i > 1 THEN
result_wkt := result_wkt || ', ';
END IF;
result_wkt := result_wkt || coords[1] || ' ' || coords[2] || ' ' || unix_time;
END LOOP;
result_wkt := result_wkt || ')';
-- Convert back to geometry and return
RETURN ST_GeomFromText(result_wkt, 4326);
END;
$$ LANGUAGE plpgsql IMMUTABLE PARALLEL SAFE;
-- Function test
SELECT
mmsi,
time_bucket,
ST_AsText(track_geom) as original,
ST_AsText(signal.convert_to_unix_timestamp(track_geom, time_bucket)) as converted
FROM signal.t_vessel_tracks_5min
WHERE track_geom IS NOT NULL
LIMIT 1;

42
sql/simple_update_v2.sql Normal file
View File

@ -0,0 +1,42 @@
-- Direct UPDATE on the hourly table (no function)
UPDATE signal.t_vessel_tracks_hourly AS h
SET track_geom_v2 = ST_GeomFromText(
REPLACE(
REPLACE(ST_AsText(track_geom), 'LINESTRING M(',
'LINESTRING M(' ||
CASE
WHEN ST_M(ST_PointN(track_geom, 1)) = 0
THEN EXTRACT(EPOCH FROM time_bucket + INTERVAL '9 hours')::text
ELSE (EXTRACT(EPOCH FROM time_bucket + INTERVAL '9 hours')::bigint + ST_M(ST_PointN(track_geom, 1)))::text
END || ' '
),
')',
EXTRACT(EPOCH FROM time_bucket + INTERVAL '9 hours')::text || ')'
),
4326
)
WHERE time_bucket = '2025-08-07 14:00:00'
AND track_geom IS NOT NULL
AND track_geom_v2 IS NULL;
-- Direct UPDATE on the daily table
UPDATE signal.t_vessel_tracks_daily AS d
SET track_geom_v2 = track_geom -- temporary copy (exact conversion to be done later)
WHERE time_bucket = DATE_TRUNC('day', NOW())
AND track_geom IS NOT NULL
AND track_geom_v2 IS NULL;
-- Verify results
SELECT
'hourly' as table_type,
COUNT(*) as total,
COUNT(track_geom_v2) as v2_filled
FROM signal.t_vessel_tracks_hourly
WHERE time_bucket = '2025-08-07 14:00:00'
UNION ALL
SELECT
'daily' as table_type,
COUNT(*) as total,
COUNT(track_geom_v2) as v2_filled
FROM signal.t_vessel_tracks_daily
WHERE time_bucket = DATE_TRUNC('day', NOW());

40
sql/update_missing_v2.sql Normal file
View File

@ -0,0 +1,40 @@
-- Simple UPDATE queries for Unix timestamp conversion
-- 5-minute aggregation table
UPDATE signal.t_vessel_tracks_5min
SET track_geom_v2 = signal.convert_to_unix_timestamp(track_geom, time_bucket)
WHERE time_bucket >= NOW() - INTERVAL '2 hours'
AND track_geom IS NOT NULL
AND track_geom_v2 IS NULL;
-- Hourly aggregation table (2 PM data)
UPDATE signal.t_vessel_tracks_hourly
SET track_geom_v2 = signal.convert_to_unix_timestamp(track_geom, time_bucket)
WHERE time_bucket = '2025-08-07 14:00:00'
AND track_geom IS NOT NULL
AND track_geom_v2 IS NULL;
-- Daily aggregation table (today's data)
UPDATE signal.t_vessel_tracks_daily
SET track_geom_v2 = signal.convert_to_unix_timestamp(track_geom, time_bucket)
WHERE time_bucket = DATE_TRUNC('day', NOW())
AND track_geom IS NOT NULL
AND track_geom_v2 IS NULL;
-- Verify results
SELECT
'hourly' as table_type,
COUNT(*) as total_records,
COUNT(track_geom) as v1_count,
COUNT(track_geom_v2) as v2_count
FROM signal.t_vessel_tracks_hourly
WHERE time_bucket = '2025-08-07 14:00:00'
UNION ALL
SELECT
'daily' as table_type,
COUNT(*) as total_records,
COUNT(track_geom) as v1_count,
COUNT(track_geom_v2) as v2_count
FROM signal.t_vessel_tracks_daily
WHERE time_bucket = DATE_TRUNC('day', NOW());

View File

@ -28,8 +28,8 @@ public class BatchCommandLineRunner implements CommandLineRunner {
private JobLauncher jobLauncher;
@Autowired
@Qualifier("vesselAggregationJob")
private Job vesselAggregationJob;
@Qualifier("vesselTrackAggregationJob")
private Job vesselTrackAggregationJob;
private final BatchUtils batchUtils;
@ -48,7 +48,7 @@ public class BatchCommandLineRunner implements CommandLineRunner {
log.info("Running batch job from {} to {}", startTime, endTime);
JobParameters params = batchUtils.createJobParameters(startTime, endTime);
JobExecution execution = jobLauncher.run(vesselAggregationJob, params);
JobExecution execution = jobLauncher.run(vesselTrackAggregationJob, params);
log.info("Batch job completed: {}", execution.getStatus());
} else {

View File

@ -0,0 +1,144 @@
package gc.mda.signal_batch.batch.job;
import gc.mda.signal_batch.batch.reader.AisTargetCacheManager;
import gc.mda.signal_batch.domain.vessel.model.AisTargetEntity;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.step.builder.StepBuilder;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.transaction.PlatformTransactionManager;
import javax.sql.DataSource;
import java.sql.Timestamp;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
/**
 * Piggybacks on the 5-minute aggregation job: UPSERTs the cache snapshot into t_ais_position.
 *
 * Purpose:
 * - Restore the cache after a service restart (ChnPrmShipCacheWarmer)
 * - Latest-position lookup for processes that cannot access the cache
 * - Fallback for environments where the API is unreachable
 */
@Slf4j
@Configuration
@Profile("!query")
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
public class AisPositionSyncStepConfig {
private final JobRepository jobRepository;
private final DataSource queryDataSource;
private final PlatformTransactionManager transactionManager;
private final AisTargetCacheManager cacheManager;
public AisPositionSyncStepConfig(
JobRepository jobRepository,
@Qualifier("queryDataSource") DataSource queryDataSource,
@Qualifier("queryTransactionManager") PlatformTransactionManager transactionManager,
AisTargetCacheManager cacheManager) {
this.jobRepository = jobRepository;
this.queryDataSource = queryDataSource;
this.transactionManager = transactionManager;
this.cacheManager = cacheManager;
}
@Bean
public Step aisPositionSyncStep() {
return new StepBuilder("aisPositionSyncStep", jobRepository)
.tasklet((contribution, chunkContext) -> {
Collection<AisTargetEntity> entities = cacheManager.getAllValues();
if (entities.isEmpty()) {
log.debug("캐시에 데이터 없음 — t_ais_position 동기화 스킵");
return org.springframework.batch.repeat.RepeatStatus.FINISHED;
}
JdbcTemplate jdbcTemplate = new JdbcTemplate(queryDataSource);
String sql = """
INSERT INTO signal.t_ais_position (
mmsi, imo, name, callsign, vessel_type, extra_info,
lat, lon, geom,
heading, sog, cog, rot,
length, width, draught,
destination, eta, status,
message_timestamp, signal_kind_code, class_type,
last_update
) VALUES (
?, ?, ?, ?, ?, ?,
?, ?, public.ST_SetSRID(public.ST_MakePoint(?, ?), 4326),
?, ?, ?, ?,
?, ?, ?,
?, ?, ?,
?, ?, ?,
NOW()
)
ON CONFLICT (mmsi) DO UPDATE SET
imo = EXCLUDED.imo,
name = EXCLUDED.name,
callsign = EXCLUDED.callsign,
vessel_type = EXCLUDED.vessel_type,
extra_info = EXCLUDED.extra_info,
lat = EXCLUDED.lat,
lon = EXCLUDED.lon,
geom = EXCLUDED.geom,
heading = EXCLUDED.heading,
sog = EXCLUDED.sog,
cog = EXCLUDED.cog,
rot = EXCLUDED.rot,
length = EXCLUDED.length,
width = EXCLUDED.width,
draught = EXCLUDED.draught,
destination = EXCLUDED.destination,
eta = EXCLUDED.eta,
status = EXCLUDED.status,
message_timestamp = EXCLUDED.message_timestamp,
signal_kind_code = EXCLUDED.signal_kind_code,
class_type = EXCLUDED.class_type,
last_update = NOW()
""";
List<Object[]> batchArgs = new ArrayList<>();
for (AisTargetEntity e : entities) {
if (e.getMmsi() == null || e.getLat() == null || e.getLon() == null) {
continue;
}
Timestamp msgTs = e.getMessageTimestamp() != null
? Timestamp.from(e.getMessageTimestamp().toInstant())
: null;
Timestamp etaTs = e.getEta() != null
? Timestamp.from(e.getEta().toInstant())
: null;
batchArgs.add(new Object[] {
e.getMmsi(), e.getImo(), e.getName(), e.getCallsign(),
e.getVesselType(), e.getExtraInfo(),
e.getLat(), e.getLon(),
e.getLon(), e.getLat(), // ST_MakePoint(lon, lat)
e.getHeading(), e.getSog(), e.getCog(), e.getRot(),
e.getLength(), e.getWidth(), e.getDraught(),
e.getDestination(), etaTs, e.getStatus(),
msgTs, e.getSignalKindCode(), e.getClassType()
});
}
if (!batchArgs.isEmpty()) {
int[] results = jdbcTemplate.batchUpdate(sql, batchArgs);
log.info("t_ais_position 동기화 완료: {} 건 UPSERT", results.length);
}
return org.springframework.batch.repeat.RepeatStatus.FINISHED;
}, transactionManager)
.build();
}
}

View File

@ -0,0 +1,96 @@
package gc.mda.signal_batch.batch.job;
import gc.mda.signal_batch.batch.processor.AisTargetDataProcessor;
import gc.mda.signal_batch.batch.reader.AisTargetDataReader;
import gc.mda.signal_batch.batch.writer.AisTargetCacheWriter;
import gc.mda.signal_batch.domain.vessel.dto.AisTargetDto;
import gc.mda.signal_batch.domain.vessel.model.AisTargetEntity;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobExecutionListener;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.job.builder.JobBuilder;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.step.builder.StepBuilder;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.web.reactive.function.client.WebClient;
/**
* AIS Target Import Job configuration.
*
* Runs every minute: S&P AIS API → DTO conversion → cache store.
* Chunk size: 50,000 (one API call returns roughly 33,000 records).
*
* No DB write here; only the cache is updated.
* The t_ais_position UPSERT piggybacks on the Phase 3 five-minute aggregation job.
*/
@Slf4j
@Configuration
@Profile("!query")
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
public class AisTargetImportJobConfig {
private final JobRepository jobRepository;
private final PlatformTransactionManager transactionManager;
private final AisTargetDataProcessor processor;
private final AisTargetCacheWriter writer;
private final WebClient aisApiWebClient;
@Value("${app.ais-api.since-seconds:60}")
private int sinceSeconds;
@Value("${app.ais-api.chunk-size:50000}")
private int chunkSize;
public AisTargetImportJobConfig(
JobRepository jobRepository,
@Qualifier("batchTransactionManager") PlatformTransactionManager transactionManager,
AisTargetDataProcessor processor,
AisTargetCacheWriter writer,
@Qualifier("aisApiWebClient") WebClient aisApiWebClient) {
this.jobRepository = jobRepository;
this.transactionManager = transactionManager;
this.processor = processor;
this.writer = writer;
this.aisApiWebClient = aisApiWebClient;
}
@Bean(name = "aisTargetImportStep")
public Step aisTargetImportStep() {
return new StepBuilder("aisTargetImportStep", jobRepository)
.<AisTargetDto, AisTargetEntity>chunk(chunkSize, transactionManager)
.reader(new AisTargetDataReader(aisApiWebClient, sinceSeconds))
.processor(processor)
.writer(writer)
.build();
}
@Bean(name = "aisTargetImportJob")
public Job aisTargetImportJob() {
return new JobBuilder("aisTargetImportJob", jobRepository)
.start(aisTargetImportStep())
.listener(new JobExecutionListener() {
@Override
public void beforeJob(JobExecution jobExecution) {
log.info("[aisTargetImportJob] Job started");
}
@Override
public void afterJob(JobExecution jobExecution) {
log.info("[aisTargetImportJob] Job finished - status: {}, processed: {} items",
jobExecution.getStatus(),
jobExecution.getStepExecutions().stream()
.mapToLong(se -> se.getWriteCount())
.sum());
}
})
.build();
}
}
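The step above pulls the whole API response through a single chunk (chunk size 50,000 against ~33,000 records per call), so each run commits in one transaction. A condensed, standalone sketch of how chunk-oriented processing partitions items (illustrative only, not Spring Batch internals):

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkDemo {
    /** Splits items into sublists of at most chunkSize, as a chunk-oriented step would. */
    public static <T> List<List<T>> chunks(List<T> items, int chunkSize) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < items.size(); i += chunkSize) {
            // each sublist corresponds to one reader/processor/writer cycle
            out.add(items.subList(i, Math.min(i + chunkSize, items.size())));
        }
        return out;
    }
}
```

With 33,000 items and a chunk size of 50,000 this yields exactly one chunk; lowering the chunk size trades transaction size for commit frequency.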


@ -1,220 +0,0 @@
package gc.mda.signal_batch.batch.job;
import gc.mda.signal_batch.domain.vessel.model.VesselData;
import gc.mda.signal_batch.batch.processor.AccumulatingAreaProcessor;
import gc.mda.signal_batch.batch.processor.AreaStatisticsProcessor;
import gc.mda.signal_batch.batch.processor.AreaStatisticsProcessor.AreaStatistics;
import gc.mda.signal_batch.batch.reader.InMemoryVesselDataReader;
import gc.mda.signal_batch.batch.reader.PartitionedReader;
import gc.mda.signal_batch.batch.reader.VesselDataReader;
import gc.mda.signal_batch.batch.writer.UpsertWriter;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.ExitStatus;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.StepExecutionListener;
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.core.partition.support.TaskExecutorPartitionHandler;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.step.builder.StepBuilder;
import org.springframework.batch.item.Chunk;
import org.springframework.batch.item.ItemReader;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.core.task.TaskExecutor;
import org.springframework.transaction.PlatformTransactionManager;
import java.time.LocalDateTime;
import java.util.List;
@Slf4j
@Configuration
@Profile("!query") // batch jobs are disabled under the query profile
@RequiredArgsConstructor
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
public class AreaStatisticsStepConfig {
private final JobRepository jobRepository;
private final PlatformTransactionManager queryTransactionManager;
private final VesselDataReader vesselDataReader;
private final AccumulatingAreaProcessor accumulatingAreaProcessor;
private final AreaStatisticsProcessor areaStatisticsProcessor;
private final UpsertWriter upsertWriter;
private final PartitionedReader partitionedReader;
private final ApplicationContext applicationContext;
@Value("${vessel.batch.area-statistics.chunk-size:1000}")
private int areaChunkSize;
@Value("${vessel.batch.area-statistics.batch-size:500}")
private int areaBatchSize;
@Qualifier("batchTaskExecutor")
private final TaskExecutor batchTaskExecutor;
@Qualifier("partitionTaskExecutor")
private final TaskExecutor partitionTaskExecutor;
@Bean
public Step aggregateAreaStatisticsStep() {
// fetch InMemoryVesselDataReader from the ApplicationContext
InMemoryVesselDataReader inMemoryReader = applicationContext.getBean(InMemoryVesselDataReader.class);
return new StepBuilder("aggregateAreaStatisticsStep", jobRepository)
.<VesselData, AreaStatistics>chunk(areaChunkSize, queryTransactionManager)
.reader(inMemoryReader) // use the in-memory reader
.processor(accumulatingAreaProcessor)
.writer(items -> {}) // no-op writer; actual persistence happens in the listener
.listener(areaStatisticsStepListener())
.faultTolerant()
.skipLimit(100)
.skip(Exception.class)
.build();
}
@Bean
public Step partitionedAreaStatisticsStep() {
return new StepBuilder("partitionedAreaStatisticsStep", jobRepository)
.partitioner("areaStatisticsPartitioner", partitionedReader.dayPartitioner(null))
.partitionHandler(areaStatisticsPartitionHandler())
.build();
}
@Bean
public TaskExecutorPartitionHandler areaStatisticsPartitionHandler() {
TaskExecutorPartitionHandler handler = new TaskExecutorPartitionHandler();
handler.setTaskExecutor(partitionTaskExecutor);
handler.setStep(areaStatisticsSlaveStep());
handler.setGridSize(24);
return handler;
}
@Bean
public Step areaStatisticsSlaveStep() {
return new StepBuilder("areaStatisticsSlaveStep", jobRepository)
.<List<VesselData>, List<AreaStatistics>>chunk(50, queryTransactionManager)
.reader(slaveAreaBatchVesselDataReader(null, null, null))
.processor(areaStatisticsProcessor.batchProcessor())
.writer(upsertWriter.areaStatisticsWriter())
.faultTolerant()
.skipLimit(100)
.skip(Exception.class)
.build();
}
@Bean
@StepScope
public ItemReader<VesselData> areaVesselDataReader(
@Value("#{jobParameters['startTime']}") String startTimeStr,
@Value("#{jobParameters['endTime']}") String endTimeStr) {
return new ItemReader<VesselData>() {
private ItemReader<VesselData> delegate;
private boolean initialized = false;
@Override
public VesselData read() throws Exception {
if (!initialized) {
LocalDateTime startTime = startTimeStr != null ? LocalDateTime.parse(startTimeStr) : null;
LocalDateTime endTime = endTimeStr != null ? LocalDateTime.parse(endTimeStr) : null;
// close any previously opened reader
if (delegate != null) {
try {
((org.springframework.batch.item.ItemStream) delegate).close();
} catch (Exception e) {
log.debug("Failed to close previous reader: {}", e.getMessage());
}
}
// use latest positions only
delegate = vesselDataReader.vesselLatestPositionReader(startTime, endTime, null);
((org.springframework.batch.item.ItemStream) delegate).open(
org.springframework.batch.core.scope.context.StepSynchronizationManager
.getContext().getStepExecution().getExecutionContext());
initialized = true;
}
VesselData data = delegate.read();
// close the reader once it is exhausted
if (data == null && delegate != null) {
try {
((org.springframework.batch.item.ItemStream) delegate).close();
delegate = null;
initialized = false;
} catch (Exception e) {
log.debug("Failed to close reader on completion: {}", e.getMessage());
}
}
return data;
}
};
}
@Bean
@StepScope
public ItemReader<List<VesselData>> slaveAreaBatchVesselDataReader(
@Value("#{stepExecutionContext['startTime']}") String startTime,
@Value("#{stepExecutionContext['endTime']}") String endTime,
@Value("#{stepExecutionContext['partition']}") String partition) {
return new ItemReader<List<VesselData>>() {
private ItemReader<VesselData> delegate = vesselDataReader.vesselDataPagingReader(
startTime != null ? LocalDateTime.parse(startTime) : null,
endTime != null ? LocalDateTime.parse(endTime) : null,
partition
);
@Override
public List<VesselData> read() throws Exception {
List<VesselData> batch = new java.util.ArrayList<>();
for (int i = 0; i < areaBatchSize; i++) {
VesselData item = delegate.read();
if (item == null) {
break;
}
batch.add(item);
}
return batch.isEmpty() ? null : batch;
}
};
}
@Bean
public StepExecutionListener areaStatisticsStepListener() {
return new StepExecutionListener() {
@Override
public ExitStatus afterStep(StepExecution stepExecution) {
// persist the accumulated data to the DB
@SuppressWarnings("unchecked")
List<AreaStatistics> statistics = (List<AreaStatistics>)
stepExecution.getExecutionContext().get("areaStatistics");
if (statistics != null && !statistics.isEmpty()) {
try {
upsertWriter.areaStatisticsWriter().write(
new Chunk<>(List.of(statistics))
);
log.info("Successfully wrote {} area statistics", statistics.size());
} catch (Exception e) {
log.error("Failed to write area statistics", e);
throw new RuntimeException(e);
}
}
return stepExecution.getExitStatus();
}
};
}
}


@ -117,10 +117,10 @@ public class DailyAggregationStepConfig {
LocalDateTime end = LocalDateTime.parse(endTime);
String sql = """
SELECT DISTINCT sig_src_cd, target_id, date_trunc('day', time_bucket) as day_bucket
SELECT DISTINCT mmsi, date_trunc('day', time_bucket) as day_bucket
FROM signal.t_vessel_tracks_hourly
WHERE time_bucket >= ? AND time_bucket < ?
ORDER BY sig_src_cd, target_id, day_bucket
ORDER BY mmsi, day_bucket
""";
return new JdbcCursorItemReaderBuilder<VesselTrack.VesselKey>()
@ -132,8 +132,7 @@ public class DailyAggregationStepConfig {
ps.setTimestamp(2, java.sql.Timestamp.valueOf(end));
})
.rowMapper((rs, rowNum) -> new VesselTrack.VesselKey(
rs.getString("sig_src_cd"),
rs.getString("target_id"),
rs.getString("mmsi"),
rs.getObject("day_bucket", LocalDateTime.class)
))
.build();
@ -226,7 +225,7 @@ public class DailyAggregationStepConfig {
FROM (
SELECT haegu_no, jsonb_array_elements(vessel_list) as vessel_list,
total_distance_nm, avg_speed,
(vessel_list->>'sig_src_cd') || '_' || (vessel_list->>'target_id') as vessel_key
(vessel_list->>'mmsi') as vessel_key
FROM signal.t_grid_tracks_summary_hourly
WHERE haegu_no = ?
AND time_bucket >= ?
@ -313,7 +312,7 @@ public class DailyAggregationStepConfig {
FROM (
SELECT area_id, jsonb_array_elements(vessel_list) as vessel_list,
total_distance_nm, avg_speed,
(vessel_list->>'sig_src_cd') || '_' || (vessel_list->>'target_id') as vessel_key
(vessel_list->>'mmsi') as vessel_key
FROM signal.t_area_tracks_summary_hourly
WHERE area_id = ?
AND time_bucket >= ?


@ -23,6 +23,7 @@ public class HourlyAggregationJobConfig {
private final JobRepository jobRepository;
private final HourlyAggregationStepConfig hourlyAggregationStepConfig;
private final VesselStaticStepConfig vesselStaticStepConfig;
private final JobCompletionListener jobCompletionListener;
@Bean
@ -34,6 +35,7 @@ public class HourlyAggregationJobConfig {
.start(hourlyAggregationStepConfig.mergeHourlyTracksStep())
.next(hourlyAggregationStepConfig.gridHourlySummaryStep())
.next(hourlyAggregationStepConfig.areaHourlySummaryStep())
.next(vesselStaticStepConfig.vesselStaticSyncStep())
.build();
}


@ -117,10 +117,10 @@ public class HourlyAggregationStepConfig {
LocalDateTime end = LocalDateTime.parse(endTime);
String sql = """
SELECT DISTINCT sig_src_cd, target_id, date_trunc('hour', time_bucket) as hour_bucket
SELECT DISTINCT mmsi, date_trunc('hour', time_bucket) as hour_bucket
FROM signal.t_vessel_tracks_5min
WHERE time_bucket >= ? AND time_bucket < ?
ORDER BY sig_src_cd, target_id, hour_bucket
ORDER BY mmsi, hour_bucket
""";
return new JdbcCursorItemReaderBuilder<VesselTrack.VesselKey>()
@ -132,8 +132,7 @@ public class HourlyAggregationStepConfig {
ps.setTimestamp(2, java.sql.Timestamp.valueOf(end));
})
.rowMapper((rs, rowNum) -> new VesselTrack.VesselKey(
rs.getString("sig_src_cd"),
rs.getString("target_id"),
rs.getString("mmsi"),
rs.getObject("hour_bucket", LocalDateTime.class)
))
.build();
@ -222,12 +221,11 @@ public class HourlyAggregationStepConfig {
SELECT
haegu_no,
?::timestamp as time_bucket,
COUNT(DISTINCT sig_src_cd || '_' || target_id) as total_vessels,
COUNT(DISTINCT mmsi) as total_vessels,
SUM(distance_nm) as total_distance_nm,
AVG(avg_speed) as avg_speed,
jsonb_agg(DISTINCT jsonb_build_object(
'sig_src_cd', sig_src_cd,
'target_id', target_id,
'mmsi', mmsi,
'distance_nm', distance_nm,
'avg_speed', avg_speed
)) as vessel_list,
@ -313,12 +311,11 @@ public class HourlyAggregationStepConfig {
SELECT
area_id,
?::timestamp as time_bucket,
COUNT(DISTINCT sig_src_cd || '_' || target_id) as total_vessels,
COUNT(DISTINCT mmsi) as total_vessels,
SUM(distance_nm) as total_distance_nm,
AVG(avg_speed) as avg_speed,
jsonb_agg(DISTINCT jsonb_build_object(
'sig_src_cd', sig_src_cd,
'target_id', target_id,
'mmsi', mmsi,
'distance_nm', distance_nm,
'avg_speed', avg_speed
)) as vessel_list,
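The hunks above replace `COUNT(DISTINCT sig_src_cd || '_' || target_id)` with `COUNT(DISTINCT mmsi)`. Beyond simplifying the schema, this also removes a subtle hazard: concatenating key parts with a separator can collide whenever the separator also occurs inside a part, which skews DISTINCT counts. A minimal standalone illustration (hypothetical values, not from the data set):

```java
public class KeyCollision {
    /** Mirrors the old sig_src_cd || '_' || target_id composite key. */
    public static String compositeKey(String src, String target) {
        return src + "_" + target;
    }
}
```

Here ("A_B", "C") and ("A", "B_C") both map to "A_B_C", so two distinct vessels could be counted as one; a single natural key like `mmsi` cannot collide this way.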


@ -1,178 +0,0 @@
package gc.mda.signal_batch.batch.job;
import gc.mda.signal_batch.domain.vessel.model.VesselData;
import gc.mda.signal_batch.domain.vessel.model.VesselLatestPosition;
import gc.mda.signal_batch.batch.processor.LatestPositionProcessor;
import gc.mda.signal_batch.batch.reader.InMemoryVesselDataReader;
import gc.mda.signal_batch.batch.reader.PartitionedReader;
import gc.mda.signal_batch.batch.reader.VesselDataReader;
import gc.mda.signal_batch.batch.writer.UpsertWriter;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.core.partition.support.TaskExecutorPartitionHandler;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.step.builder.StepBuilder;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.batch.item.ItemReader;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.core.task.TaskExecutor;
import org.springframework.retry.RetryPolicy;
import org.springframework.retry.backoff.BackOffPolicy;
import org.springframework.retry.backoff.ExponentialBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.transaction.PlatformTransactionManager;
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.util.HashMap;
import java.util.Map;
@Slf4j
@Configuration
@Profile("!query") // batch jobs are disabled under the query profile
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
public class LatestPositionStepConfig {
private final JobRepository jobRepository;
private final PlatformTransactionManager queryTransactionManager;
private final LatestPositionProcessor latestPositionProcessor;
private final UpsertWriter upsertWriter;
private final PartitionedReader partitionedReader;
private final ApplicationContext applicationContext;
private final TaskExecutor batchTaskExecutor;
private final TaskExecutor partitionTaskExecutor;
public LatestPositionStepConfig(
JobRepository jobRepository,
@Qualifier("queryTransactionManager") PlatformTransactionManager queryTransactionManager,
LatestPositionProcessor latestPositionProcessor,
UpsertWriter upsertWriter,
PartitionedReader partitionedReader,
ApplicationContext applicationContext,
@Qualifier("batchTaskExecutor") TaskExecutor batchTaskExecutor,
@Qualifier("partitionTaskExecutor") TaskExecutor partitionTaskExecutor) {
this.jobRepository = jobRepository;
this.queryTransactionManager = queryTransactionManager;
this.latestPositionProcessor = latestPositionProcessor;
this.upsertWriter = upsertWriter;
this.partitionedReader = partitionedReader;
this.applicationContext = applicationContext;
this.batchTaskExecutor = batchTaskExecutor;
this.partitionTaskExecutor = partitionTaskExecutor;
}
@Bean
public Step updateLatestPositionStep() {
// fetch InMemoryVesselDataReader from the ApplicationContext
InMemoryVesselDataReader inMemoryReader = applicationContext.getBean(InMemoryVesselDataReader.class);
return new StepBuilder("updateLatestPositionStep", jobRepository)
.<VesselData, VesselLatestPosition>chunk(10000, queryTransactionManager)
.reader(inMemoryReader) // use the in-memory reader
.processor(latestPositionProcessor.processor())
.writer(upsertWriter.latestPositionWriter())
.faultTolerant()
.retryLimit(3)
.retry(org.springframework.dao.CannotAcquireLockException.class)
.skipLimit(1000)
.skip(org.springframework.dao.EmptyResultDataAccessException.class)
.skip(Exception.class)
.build();
}
// removed in favor of the in-memory reader
// @Bean
// @StepScope
// public ItemReader<VesselData> defaultVesselDataReader() { ... }
@Bean
public Step partitionedLatestPositionStep() {
return new StepBuilder("partitionedLatestPositionStep", jobRepository)
.partitioner("latestPositionPartitioner", dayPartitioner(null))
.partitionHandler(latestPositionPartitionHandler())
.build();
}
@Bean
public TaskExecutorPartitionHandler latestPositionPartitionHandler() {
TaskExecutorPartitionHandler handler = new TaskExecutorPartitionHandler();
handler.setTaskExecutor(partitionTaskExecutor);
handler.setStep(latestPositionSlaveStep());
handler.setGridSize(24);
return handler;
}
@Bean
public Step latestPositionSlaveStep() {
return new StepBuilder("latestPositionSlaveStep", jobRepository)
.<VesselData, VesselLatestPosition>chunk(3000, queryTransactionManager)
.reader(slaveVesselDataReader(null, null, null))
.processor(slaveLatestPositionProcessor())
.writer(upsertWriter.latestPositionWriter())
.faultTolerant()
.retryPolicy(retryPolicy())
.backOffPolicy(exponentialBackOffPolicy())
.skipLimit(50)
.skip(Exception.class)
.noRollback(org.springframework.dao.DuplicateKeyException.class)
.build();
}
@Bean
@StepScope
public ItemReader<VesselData> slaveVesselDataReader(
@Value("#{stepExecutionContext['startTime']}") String startTime,
@Value("#{stepExecutionContext['endTime']}") String endTime,
@Value("#{stepExecutionContext['partition']}") String partition) {
// fetch VesselDataReader from the ApplicationContext
VesselDataReader reader = applicationContext.getBean(VesselDataReader.class);
return reader.vesselLatestPositionReader(
LocalDateTime.parse(startTime),
LocalDateTime.parse(endTime),
partition
);
}
@Bean
@StepScope
public ItemProcessor<VesselData, VesselLatestPosition> slaveLatestPositionProcessor() {
return latestPositionProcessor.processor();
}
@Bean
@StepScope
public org.springframework.batch.core.partition.support.Partitioner dayPartitioner(
@Value("#{jobParameters['processingDate']}") String processingDateStr) {
LocalDate processingDate = processingDateStr != null ? LocalDate.parse(processingDateStr) : null;
return partitionedReader.dayPartitioner(processingDate);
}
@Bean
public RetryPolicy retryPolicy() {
Map<Class<? extends Throwable>, Boolean> retryableExceptions = new HashMap<>();
retryableExceptions.put(org.springframework.dao.CannotAcquireLockException.class, true);
retryableExceptions.put(org.springframework.dao.DataAccessException.class, true);
SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy(3, retryableExceptions);
return retryPolicy;
}
@Bean
public BackOffPolicy exponentialBackOffPolicy() {
ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
backOffPolicy.setInitialInterval(1000); // 1 second
backOffPolicy.setMaxInterval(10000); // capped at 10 seconds
backOffPolicy.setMultiplier(2.0); // doubles each retry
return backOffPolicy;
}
}
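The deleted retry configuration above pairs a 3-attempt `SimpleRetryPolicy` with exponential backoff (initial 1s, multiplier 2.0, cap 10s). As a standalone sketch of the resulting wait sequence (plain arithmetic, not Spring Retry itself):

```java
import java.util.ArrayList;
import java.util.List;

public class BackoffSketch {
    /** Wait intervals (ms) produced by exponential backoff with a hard cap. */
    public static List<Long> intervals(long initial, double multiplier, long max, int retries) {
        List<Long> out = new ArrayList<>();
        long current = initial;
        for (int i = 0; i < retries; i++) {
            out.add(Math.min(current, max)); // never exceed the configured maximum
            current = (long) (current * multiplier);
        }
        return out;
    }
}
```

With the values above the sequence is 1000, 2000, 4000, 8000, 10000 ms; since the retry limit was 3 attempts, only the first two waits would ever actually occur.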


@ -1,350 +0,0 @@
package gc.mda.signal_batch.batch.job;
import gc.mda.signal_batch.batch.processor.AccumulatingTileProcessor;
import gc.mda.signal_batch.domain.gis.model.TileStatistics;
import gc.mda.signal_batch.domain.vessel.model.VesselData;
import gc.mda.signal_batch.batch.processor.TileAggregationProcessor;
import gc.mda.signal_batch.batch.reader.InMemoryVesselDataReader;
import gc.mda.signal_batch.batch.reader.PartitionedReader;
import gc.mda.signal_batch.batch.reader.VesselDataReader;
import gc.mda.signal_batch.batch.writer.OptimizedBulkInsertWriter;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.core.partition.support.TaskExecutorPartitionHandler;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.step.builder.StepBuilder;
import org.springframework.batch.item.Chunk;
import org.springframework.batch.item.ItemWriter;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.support.CompositeItemProcessor;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.core.task.TaskExecutor;
import org.springframework.transaction.PlatformTransactionManager;
import java.time.LocalDateTime;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
@Slf4j
@Configuration
@Profile("!query") // batch jobs are disabled under the query profile
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
public class TileAggregationStepConfig {
private final JobRepository jobRepository;
private final PlatformTransactionManager queryTransactionManager;
private final VesselDataReader vesselDataReader;
private final TileAggregationProcessor tileAggregationProcessor;
private final AccumulatingTileProcessor accumulatingTileProcessor;
private final OptimizedBulkInsertWriter optimizedBulkInsertWriter;
private final PartitionedReader partitionedReader;
private final ApplicationContext applicationContext;
private final TaskExecutor batchTaskExecutor;
private final TaskExecutor partitionTaskExecutor;
public TileAggregationStepConfig(
JobRepository jobRepository,
@Qualifier("queryTransactionManager") PlatformTransactionManager queryTransactionManager,
VesselDataReader vesselDataReader,
TileAggregationProcessor tileAggregationProcessor,
AccumulatingTileProcessor accumulatingTileProcessor,
OptimizedBulkInsertWriter optimizedBulkInsertWriter,
PartitionedReader partitionedReader,
ApplicationContext applicationContext,
@Qualifier("batchTaskExecutor") TaskExecutor batchTaskExecutor,
@Qualifier("partitionTaskExecutor") TaskExecutor partitionTaskExecutor) {
this.jobRepository = jobRepository;
this.queryTransactionManager = queryTransactionManager;
this.vesselDataReader = vesselDataReader;
this.tileAggregationProcessor = tileAggregationProcessor;
this.accumulatingTileProcessor = accumulatingTileProcessor;
this.optimizedBulkInsertWriter = optimizedBulkInsertWriter;
this.partitionedReader = partitionedReader;
this.applicationContext = applicationContext;
this.batchTaskExecutor = batchTaskExecutor;
this.partitionTaskExecutor = partitionTaskExecutor;
}
@Bean
public Step aggregateTileStatisticsStep() {
// fetch InMemoryVesselDataReader from the ApplicationContext
InMemoryVesselDataReader inMemoryReader = applicationContext.getBean(InMemoryVesselDataReader.class);
return new StepBuilder("aggregateTileStatisticsStep", jobRepository)
.<VesselData, TileStatistics>chunk(50000, queryTransactionManager)
.reader(inMemoryReader) // use the in-memory reader
.processor(accumulatingTileProcessor)
.writer(new AccumulatedTileWriter())
.listener(tileAggregationStepListener())
.faultTolerant()
.skipLimit(1000)
.skip(Exception.class)
.build();
}
@Bean
@StepScope
public ItemReader<VesselData> tileDataReader(
@Value("#{jobParameters['startTime']}") String startTimeStr,
@Value("#{jobParameters['endTime']}") String endTimeStr) {
return new ItemReader<VesselData>() {
private ItemReader<VesselData> delegate;
private boolean initialized = false;
@Override
public VesselData read() throws Exception {
if (!initialized) {
LocalDateTime startTime = startTimeStr != null ? LocalDateTime.parse(startTimeStr) : null;
LocalDateTime endTime = endTimeStr != null ? LocalDateTime.parse(endTimeStr) : null;
log.info("Creating tileDataReader with startTime: {}, endTime: {}", startTime, endTime);
// close any previously opened reader
if (delegate != null) {
try {
((org.springframework.batch.item.ItemStream) delegate).close();
} catch (Exception e) {
log.debug("Failed to close previous reader: {}", e.getMessage());
}
}
// use latest positions only
delegate = vesselDataReader.vesselLatestPositionReader(startTime, endTime, null);
((org.springframework.batch.item.ItemStream) delegate).open(
org.springframework.batch.core.scope.context.StepSynchronizationManager
.getContext().getStepExecution().getExecutionContext());
initialized = true;
}
VesselData data = delegate.read();
// close the reader once it is exhausted
if (data == null && delegate != null) {
try {
((org.springframework.batch.item.ItemStream) delegate).close();
delegate = null;
initialized = false;
} catch (Exception e) {
log.debug("Failed to close reader on completion: {}", e.getMessage());
}
}
return data;
}
};
}
@Bean
public Step partitionedTileAggregationStep() {
return new StepBuilder("partitionedTileAggregationStep", jobRepository)
.partitioner("tileAggregationPartitioner", partitionedReader.dayPartitioner(null))
.partitionHandler(tileAggregationPartitionHandler())
.build();
}
@Bean
public TaskExecutorPartitionHandler tileAggregationPartitionHandler() {
TaskExecutorPartitionHandler handler = new TaskExecutorPartitionHandler();
handler.setTaskExecutor(partitionTaskExecutor);
handler.setStep(tileAggregationSlaveStep());
handler.setGridSize(24);
return handler;
}
@Bean
public Step tileAggregationSlaveStep() {
return new StepBuilder("tileAggregationSlaveStep", jobRepository)
.<List<VesselData>, List<TileStatistics>>chunk(50, queryTransactionManager)
.reader(slaveTileBatchVesselDataReader(null, null, null))
.processor(slaveTileProcessor(null, null))
.writer(optimizedBulkInsertWriter.tileStatisticsBulkWriter())
.faultTolerant()
.skipLimit(100)
.skip(Exception.class)
.build();
}
@Bean
@StepScope
public ItemReader<List<VesselData>> tileBatchVesselDataReader(
@Value("#{jobParameters['startTime']}") String startTimeStr,
@Value("#{jobParameters['endTime']}") String endTimeStr) {
LocalDateTime startTime = startTimeStr != null ? LocalDateTime.parse(startTimeStr) : null;
LocalDateTime endTime = endTimeStr != null ? LocalDateTime.parse(endTimeStr) : null;
return new ItemReader<List<VesselData>>() {
private ItemReader<VesselData> delegate = vesselDataReader.vesselDataPagingReader(startTime, endTime, null);
@Override
public List<VesselData> read() throws Exception {
List<VesselData> batch = new java.util.ArrayList<>();
for (int i = 0; i < 1000; i++) {
VesselData item = delegate.read();
if (item == null) {
break;
}
batch.add(item);
}
return batch.isEmpty() ? null : batch;
}
};
}
@Bean
@StepScope
public ItemReader<List<VesselData>> slaveTileBatchVesselDataReader(
@Value("#{stepExecutionContext['startTime']}") String startTime,
@Value("#{stepExecutionContext['endTime']}") String endTime,
@Value("#{stepExecutionContext['partition']}") String partition) {
return new ItemReader<List<VesselData>>() {
private ItemReader<VesselData> delegate = vesselDataReader.vesselDataPagingReader(
startTime != null ? LocalDateTime.parse(startTime) : null,
endTime != null ? LocalDateTime.parse(endTime) : null,
partition
);
@Override
public List<VesselData> read() throws Exception {
List<VesselData> batch = new java.util.ArrayList<>();
for (int i = 0; i < 1000; i++) {
VesselData item = delegate.read();
if (item == null) {
break;
}
batch.add(item);
}
return batch.isEmpty() ? null : batch;
}
};
}
@Bean
@StepScope
public ItemProcessor<List<VesselData>, List<TileStatistics>> slaveTileProcessor(
@Value("#{jobParameters['tileLevel']}") Integer tileLevel,
@Value("#{jobParameters['timeBucketMinutes']}") Integer timeBucketMinutes) {
final int bucketMinutes = (timeBucketMinutes != null) ? timeBucketMinutes : 5;
// composite processor to handle multiple tile levels
if (tileLevel == null) {
CompositeItemProcessor<List<VesselData>, List<TileStatistics>> compositeProcessor =
new CompositeItemProcessor<>();
compositeProcessor.setDelegates(Arrays.asList(
tileAggregationProcessor.batchProcessor(0, bucketMinutes),
tileAggregationProcessor.batchProcessor(1, bucketMinutes),
tileAggregationProcessor.batchProcessor(2, bucketMinutes)
));
return compositeProcessor;
} else {
return tileAggregationProcessor.batchProcessor(tileLevel, bucketMinutes);
}
}
@Bean
@StepScope
public ItemProcessor<VesselData, List<TileStatistics>> batchTileProcessor(
@Value("#{jobParameters['tileLevel']}") Integer tileLevel,
@Value("#{jobParameters['timeBucketMinutes']}") Integer timeBucketMinutes) {
final int level = (tileLevel != null) ? tileLevel : 1;
final int bucketMinutes = (timeBucketMinutes != null) ? timeBucketMinutes : 5;
return new ItemProcessor<VesselData, List<TileStatistics>>() {
private final List<VesselData> buffer = new ArrayList<>(1000);
@Override
public List<TileStatistics> process(VesselData item) throws Exception {
if (item == null || !item.isValidPosition()) {
return null;
}
buffer.add(item);
// flush once the buffer is full
if (buffer.size() >= 1000) {
List<TileStatistics> result = tileAggregationProcessor
.batchProcessor(level, bucketMinutes)
.process(new ArrayList<>(buffer));
buffer.clear();
return result;
}
return null;
}
};
}
/**
* Writer that handles the accumulated results in a single pass
*/
private class AccumulatedTileWriter implements ItemWriter<TileStatistics> {
@Override
public void write(Chunk<? extends TileStatistics> chunk) throws Exception {
// most items will be null (the processor returns null while buffering)
// the real data is flushed at step completion
log.debug("AccumulatedTileWriter called with {} items", chunk.size());
}
}
/**
* Listener that flushes the accumulated data when the step finishes
*/
@Bean
@StepScope
public org.springframework.batch.core.StepExecutionListener tileAggregationStepListener() {
return new org.springframework.batch.core.StepExecutionListener() {
@Override
public void beforeStep(org.springframework.batch.core.StepExecution stepExecution) {
// nothing special to do in beforeStep
}
@Override
public org.springframework.batch.core.ExitStatus afterStep(org.springframework.batch.core.StepExecution stepExecution) {
log.info("[TileAggregationStepListener] afterStep called");
try {
// AccumulatingTileProcessor에서 직접 결과 가져오기
List<TileStatistics> accumulatedTiles = accumulatingTileProcessor.getAccumulatedResults();
log.info("[TileAggregationStepListener] Retrieved {} tiles from processor",
accumulatedTiles != null ? accumulatedTiles.size() : 0);
if (accumulatedTiles != null && !accumulatedTiles.isEmpty()) {
log.info("Writing {} accumulated tiles to database", accumulatedTiles.size());
// Bulk Writer를 사용하여 한 번에 저장
ItemWriter<List<TileStatistics>> writer = optimizedBulkInsertWriter.tileStatisticsBulkWriter();
Chunk<List<TileStatistics>> chunk = new Chunk<>();
chunk.add(accumulatedTiles);
writer.write(chunk);
log.info("Successfully wrote all accumulated tiles");
stepExecution.setWriteCount(accumulatedTiles.size());
} else {
log.warn("[TileAggregationStepListener] No tiles to write!");
}
return stepExecution.getExitStatus();
} catch (Exception e) {
log.error("Failed to write accumulated tiles", e);
return org.springframework.batch.core.ExitStatus.FAILED;
}
}
};
}
}


@@ -1,78 +0,0 @@
package gc.mda.signal_batch.batch.job;
import gc.mda.signal_batch.global.util.SharedDataJobListener;
import gc.mda.signal_batch.global.util.VesselDataHolder;
import gc.mda.signal_batch.batch.listener.JobCompletionListener;
import gc.mda.signal_batch.batch.listener.PerformanceOptimizationListener;
import gc.mda.signal_batch.batch.reader.InMemoryVesselDataReader;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParametersValidator;
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.core.job.DefaultJobParametersValidator;
import org.springframework.batch.core.job.builder.JobBuilder;
import org.springframework.batch.core.launch.support.RunIdIncrementer;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
@Slf4j
@Configuration
@Profile("!query") // query 프로파일에서는 배치 작업 비활성화
@RequiredArgsConstructor
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
public class VesselAggregationJobConfig {
private final JobRepository jobRepository;
private final LatestPositionStepConfig latestPositionStepConfig;
private final TileAggregationStepConfig tileAggregationStepConfig;
private final AreaStatisticsStepConfig areaStatisticsStepConfig;
private final JobCompletionListener jobCompletionListener;
private final SharedDataJobListener sharedDataJobListener;
private final VesselDataHolder vesselDataHolder;
private final PerformanceOptimizationListener performanceOptimizationListener;
@Bean
public Job vesselAggregationJob() {
return new JobBuilder("vesselAggregationJob", jobRepository)
.incrementer(new RunIdIncrementer())
.validator(jobParametersValidator())
.listener(jobCompletionListener)
.listener(sharedDataJobListener) // 데이터 로드 리스너 추가
.listener(performanceOptimizationListener) // 성능 최적화 리스너 추가
.start(latestPositionStepConfig.updateLatestPositionStep())
.next(tileAggregationStepConfig.aggregateTileStatisticsStep())
.next(areaStatisticsStepConfig.aggregateAreaStatisticsStep())
.build();
}
@Bean
@StepScope
public InMemoryVesselDataReader inMemoryVesselDataReader() {
return new InMemoryVesselDataReader(vesselDataHolder);
}
@Bean
public Job vesselDailyPositionJob() {
return new JobBuilder("vesselDailyPositionJob", jobRepository)
.incrementer(new RunIdIncrementer())
.listener(jobCompletionListener)
.start(latestPositionStepConfig.partitionedLatestPositionStep())
.next(tileAggregationStepConfig.partitionedTileAggregationStep())
.next(areaStatisticsStepConfig.partitionedAreaStatisticsStep())
.build();
}
@Bean
public JobParametersValidator jobParametersValidator() {
DefaultJobParametersValidator validator = new DefaultJobParametersValidator();
validator.setRequiredKeys(new String[]{"startTime", "endTime"});
validator.setOptionalKeys(new String[]{"executionTime", "processingDate",
"tileLevel", "partitionCount"});
return validator;
}
}


@@ -29,10 +29,6 @@ public class VesselBatchScheduler {
@Qualifier("asyncJobLauncher")
private JobLauncher jobLauncher;
@Autowired
@Qualifier("vesselAggregationJob")
private Job vesselAggregationJob;
@Autowired
@Qualifier("vesselTrackAggregationJob")
private Job vesselTrackAggregationJob;
@@ -45,55 +41,41 @@ public class VesselBatchScheduler {
@Qualifier("dailyAggregationJob")
private Job dailyAggregationJob;
@Autowired(required = false)
@Qualifier("aisTargetImportJob")
private Job aisTargetImportJob;
@Value("${vessel.batch.scheduler.enabled:true}")
private boolean schedulerEnabled;
@Value("${vessel.batch.scheduler.incremental.delay-minutes:2}")
private int incrementalDelayMinutes;
@Value("${vessel.batch.abnormal-detection.enabled:true}")
private boolean abnormalDetectionEnabled;
/**
* 5분 단위 증분 처리 (3분 지연으로 데이터 수집 대기)
* 5분마다 실행 (0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55분)
* S&P AIS API 수집 (매분 15초)
* 캐시에 최신 위치 저장 후 5분 집계 Job에서 활용
*/
@Scheduled(cron = "0 3,8,13,18,23,28,33,38,43,48,53,58 * * * *")
public void runIncrementalAggregation() {
if (!schedulerEnabled) {
log.debug("Scheduler is disabled");
@Scheduled(cron = "15 * * * * *")
public void runAisTargetImport() {
if (!schedulerEnabled || aisTargetImportJob == null) {
return;
}
try {
// 3분 전 데이터를 처리 (데이터 수집 지연 고려)
LocalDateTime now = LocalDateTime.now();
LocalDateTime endTime = now.minusMinutes(incrementalDelayMinutes);
LocalDateTime startTime = endTime.minusMinutes(5);
log.info("Starting incremental aggregation for period: {} to {}", startTime, endTime);
JobParameters params = new JobParametersBuilder()
.addString("startTime", startTime.withNano(0).toString())
.addString("endTime", endTime.withNano(0).toString())
.addString("jobType", "INCREMENTAL")
.addString("timeBucketMinutes", "5") // 5분 단위 집계
// executionTime 제거 - startTime/endTime만으로 고유성 보장
.addString("executionTime", now.toString())
.toJobParameters();
JobExecution execution = jobLauncher.run(vesselAggregationJob, params);
log.info("Incremental aggregation started with execution ID: {}", execution.getId());
JobExecution execution = jobLauncher.run(aisTargetImportJob, params);
log.debug("[AIS Import] 실행 ID: {}", execution.getId());
} catch (JobExecutionAlreadyRunningException e) {
log.warn("Previous incremental job is still running, skipping this execution");
log.warn("[AIS Import] 이전 Job 실행 중, 스킵");
} catch (Exception e) {
log.error("Failed to start incremental aggregation", e);
// 중복 오류인 경우 경고로만 처리
if (e.getMessage().contains("중복된 키") || e.getMessage().contains("duplicate key")) {
log.warn("Duplicate key detected, job may have already processed this time bucket");
}
log.error("[AIS Import] Job 실행 실패", e);
}
}
//
/**
* 5분 단위 궤적 집계 처리 (4분 지연으로 위치 집계 이후 실행)
@@ -118,7 +100,7 @@ public class VesselBatchScheduler {
try {
// 4분 전 데이터를 처리 (위치 집계 완료 후)
LocalDateTime now = LocalDateTime.now();
LocalDateTime endTime = now.minusMinutes(incrementalDelayMinutes + 1); // 3+1=4분 지연
LocalDateTime endTime = now.minusMinutes(4); // 4분 지연 (캐시 기반이므로 고정)
LocalDateTime startTime = endTime.minusMinutes(5);
// 5분 버킷 계산
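The "5분 버킷 계산" step floors a timestamp to its 5-minute bucket. A minimal sketch of that flooring, assuming simple minute-truncation semantics (method and class names are illustrative):

```java
import java.time.LocalDateTime;

/** Sketch of flooring a timestamp to its N-minute time bucket. */
public class TimeBuckets {
    public static LocalDateTime floorToBucket(LocalDateTime t, int bucketMinutes) {
        // e.g. 09:57:42 with a 5-minute bucket -> 09:55:00
        int floored = (t.getMinute() / bucketMinutes) * bucketMinutes;
        return t.withMinute(floored).withSecond(0).withNano(0);
    }
}
```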


@@ -1,194 +0,0 @@
package gc.mda.signal_batch.batch.job;
import gc.mda.signal_batch.domain.vessel.dto.RecentVesselPositionDto;
import gc.mda.signal_batch.domain.vessel.service.VesselLatestPositionCache;
import gc.mda.signal_batch.global.util.ShipKindCodeConverter;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Profile;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.RowMapper;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import java.math.BigDecimal;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.time.LocalDateTime;
import java.util.List;
/**
* 선박 최신 위치 캐시 갱신 스케줄러
*
* 실행 주기: 1분마다 (매분 0초)
* 데이터 소스: Collect DB (sig_test 테이블)
* 처리 방식: 읽기 전용 (DB에 쓰기 없음, 캐시만 업데이트)
*
* 동작 흐름:
* 1. 매분 0초에 실행
* 2. 최근 2분치 데이터를 DB에서 조회 (수집 지연 고려)
* 3. DISTINCT ON으로 선박별 최신 위치만 추출
* 4. 캐시에 업데이트
*
* 기존 배치와의 관계:
* - 기존 5분 배치는 그대로 유지 (DB 저장)
* - 스케줄러는 캐시만 관리 (읽기 전용)
* - 충돌 없음
*/
@Slf4j
@Component
@Profile("!query") // query 프로파일에서는 캐시 갱신 스케줄러 비활성화
@RequiredArgsConstructor
@ConditionalOnProperty(name = "vessel.batch.cache.latest-position.enabled", havingValue = "true", matchIfMissing = false)
public class VesselPositionCacheRefreshScheduler {
@Qualifier("collectJdbcTemplate")
private final JdbcTemplate collectJdbcTemplate;
private final VesselLatestPositionCache cache;
@Value("${vessel.batch.cache.latest-position.refresh-interval-minutes:2}")
private int refreshIntervalMinutes;
private volatile boolean isRunning = false;
/**
* 1분마다 캐시 갱신
* 매분 0초에 실행 (예: 10:00:00, 10:01:00, 10:02:00...)
*/
@Scheduled(cron = "0 * * * * *")
public void refreshCache() {
// 동시 실행 방지
if (isRunning) {
log.warn("Previous cache refresh is still running, skipping this execution");
return;
}
isRunning = true;
long startTime = System.currentTimeMillis();
try {
// 최근 N분치 데이터 조회 (수집 지연 고려)
List<RecentVesselPositionDto> positions = fetchLatestPositions();
if (positions.isEmpty()) {
log.warn("No vessel positions found in last {} minutes", refreshIntervalMinutes);
return;
}
// 캐시 업데이트
cache.putAll(positions);
long duration = System.currentTimeMillis() - startTime;
log.info("Cache refresh completed in {}ms (fetched {} positions from DB)",
duration, positions.size());
// 캐시 통계 로깅 (5분마다만)
if (LocalDateTime.now().getMinute() % 5 == 0) {
logCacheStats();
}
} catch (Exception e) {
log.error("Failed to refresh cache", e);
} finally {
isRunning = false;
}
}
/**
* DB에서 최신 위치 데이터 조회
*/
private List<RecentVesselPositionDto> fetchLatestPositions() {
LocalDateTime endTime = LocalDateTime.now();
LocalDateTime startTime = endTime.minusMinutes(refreshIntervalMinutes);
String sql = """
SELECT DISTINCT ON (sig_src_cd, target_id)
sig_src_cd,
target_id,
lon,
lat,
sog,
cog,
ship_nm,
ship_ty,
message_time as last_update
FROM signal.sig_test
WHERE message_time >= ? AND message_time < ?
AND sig_src_cd != '000005'
AND length(target_id) > 5
AND lat BETWEEN -90 AND 90
AND lon BETWEEN -180 AND 180
ORDER BY sig_src_cd, target_id, message_time DESC
""";
try {
return collectJdbcTemplate.query(sql,
new Object[]{Timestamp.valueOf(startTime), Timestamp.valueOf(endTime)},
new VesselPositionRowMapper());
} catch (Exception e) {
log.error("Failed to fetch positions from DB", e);
return List.of();
}
}
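The `DISTINCT ON (sig_src_cd, target_id) ... ORDER BY ... message_time DESC` query above keeps only the newest row per vessel key. A plain-Java sketch of the same semantics (record and field names are illustrative):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Sketch of DISTINCT ON (key) ... ORDER BY ts DESC: keep only the newest row per key. */
public class LatestPerKey {
    public record Position(String mmsi, long ts) {}

    public static Map<String, Position> latest(List<Position> rows) {
        Map<String, Position> out = new HashMap<>();
        for (Position p : rows) {
            // replace only when the incoming row is newer than the one already kept
            out.merge(p.mmsi(), p, (kept, incoming) -> kept.ts() >= incoming.ts() ? kept : incoming);
        }
        return out;
    }
}
```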
/**
* 캐시 통계 로깅
*/
private void logCacheStats() {
try {
VesselLatestPositionCache.CacheStats stats = cache.getStats();
log.info("Cache Stats - Size: {}, HitRate: {}%, MissRate: {}%, Hits: {}, Misses: {}",
stats.currentSize(),
String.format("%.2f", stats.hitRate()),
String.format("%.2f", stats.missRate()),
stats.hitCount(),
stats.missCount());
} catch (Exception e) {
log.warn("Failed to get cache stats", e);
}
}
/**
* RowMapper 구현
*/
private static class VesselPositionRowMapper implements RowMapper<RecentVesselPositionDto> {
@Override
public RecentVesselPositionDto mapRow(ResultSet rs, int rowNum) throws SQLException {
String sigSrcCd = rs.getString("sig_src_cd");
String targetId = rs.getString("target_id");
String shipTy = rs.getString("ship_ty");
// shipKindCode 계산
String shipKindCode = ShipKindCodeConverter.getShipKindCode(sigSrcCd, shipTy);
// nationalCode 계산
String nationalCode;
if ("000001".equals(sigSrcCd) && targetId != null && targetId.length() >= 3) {
nationalCode = targetId.substring(0, 3);
} else {
nationalCode = "440"; // 기본값
}
return RecentVesselPositionDto.builder()
.sigSrcCd(sigSrcCd)
.targetId(targetId)
.lon(rs.getDouble("lon"))
.lat(rs.getDouble("lat"))
.sog(rs.getBigDecimal("sog"))
.cog(rs.getBigDecimal("cog"))
.shipNm(rs.getString("ship_nm"))
.shipTy(shipTy)
.shipKindCode(shipKindCode)
.nationalCode(nationalCode)
.lastUpdate(rs.getTimestamp("last_update") != null ?
rs.getTimestamp("last_update").toLocalDateTime() : null)
.build();
}
}
}
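The deleted RowMapper derives `nationalCode` from the first three digits of the target ID, which for ship-station MMSIs are the ITU Maritime Identification Digits (MID). A minimal sketch of that derivation, keeping the legacy "440" fallback; the extra digit check is an added assumption not present in the original:

```java
/** Sketch of deriving a flag-state code (MID) from an MMSI string. */
public class MidExtractor {
    public static String nationalCode(String mmsi) {
        // the first three digits of a ship-station MMSI are the MID (e.g. 440 = South Korea)
        if (mmsi != null && mmsi.length() >= 3 && mmsi.chars().allMatch(Character::isDigit)) {
            return mmsi.substring(0, 3);
        }
        return "440"; // fallback default mirroring the legacy RowMapper
    }
}
```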


@@ -0,0 +1,239 @@
package gc.mda.signal_batch.batch.job;
import gc.mda.signal_batch.batch.reader.AisTargetCacheManager;
import gc.mda.signal_batch.domain.vessel.model.AisTargetEntity;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.step.builder.StepBuilder;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.transaction.PlatformTransactionManager;
import javax.sql.DataSource;
import java.sql.Timestamp;
import java.time.LocalDateTime;
import java.util.*;
/**
* HourlyJob 편승: 정적 정보 COALESCE + CDC t_vessel_static INSERT
*
* 전략:
 * 1. COALESCE: 캐시에서 직전 1시간 데이터를 필드별 lastNonEmpty로 조합
 * 2. CDC: 이전 저장 레코드와 비교해 변경 시에만 INSERT
*
* 조회: WHERE mmsi=? AND time_bucket <= ? ORDER BY time_bucket DESC LIMIT 1
*/
@Slf4j
@Configuration
@Profile("!query")
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
public class VesselStaticStepConfig {
private final JobRepository jobRepository;
private final DataSource queryDataSource;
private final PlatformTransactionManager transactionManager;
private final AisTargetCacheManager cacheManager;
public VesselStaticStepConfig(
JobRepository jobRepository,
@Qualifier("queryDataSource") DataSource queryDataSource,
@Qualifier("queryTransactionManager") PlatformTransactionManager transactionManager,
AisTargetCacheManager cacheManager) {
this.jobRepository = jobRepository;
this.queryDataSource = queryDataSource;
this.transactionManager = transactionManager;
this.cacheManager = cacheManager;
}
@Bean
public Step vesselStaticSyncStep() {
return new StepBuilder("vesselStaticSyncStep", jobRepository)
.tasklet((contribution, chunkContext) -> {
// 1. 캐시에서 전체 데이터를 MMSI별로 그룹화
Collection<AisTargetEntity> allEntities = cacheManager.getAllValues();
if (allEntities.isEmpty()) {
log.debug("캐시에 데이터 없음 — t_vessel_static 동기화 스킵");
return org.springframework.batch.repeat.RepeatStatus.FINISHED;
}
// 시간 버킷: 현재 시각의 정각
LocalDateTime hourBucket = LocalDateTime.now()
.withMinute(0).withSecond(0).withNano(0);
// MMSI별 최신 데이터 (필드별 COALESCE)
Map<String, AisTargetEntity> coalesced = coalesceByMmsi(allEntities);
JdbcTemplate jdbcTemplate = new JdbcTemplate(queryDataSource);
// 2. CDC: 이전 레코드와 비교해 변경 시에만 INSERT
String selectPrevSql = """
SELECT imo, name, callsign, vessel_type, extra_info,
length, width, draught, destination, status,
signal_kind_code, class_type
FROM signal.t_vessel_static
WHERE mmsi = ? AND time_bucket <= ?
ORDER BY time_bucket DESC
LIMIT 1
""";
String insertSql = """
INSERT INTO signal.t_vessel_static (
mmsi, time_bucket, imo, name, callsign,
vessel_type, extra_info, length, width, draught,
destination, eta, status, signal_kind_code, class_type
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT (mmsi, time_bucket) DO UPDATE SET
imo = EXCLUDED.imo,
name = EXCLUDED.name,
callsign = EXCLUDED.callsign,
vessel_type = EXCLUDED.vessel_type,
extra_info = EXCLUDED.extra_info,
length = EXCLUDED.length,
width = EXCLUDED.width,
draught = EXCLUDED.draught,
destination = EXCLUDED.destination,
eta = EXCLUDED.eta,
status = EXCLUDED.status,
signal_kind_code = EXCLUDED.signal_kind_code,
class_type = EXCLUDED.class_type
""";
Timestamp hourBucketTs = Timestamp.valueOf(hourBucket);
int inserted = 0;
int skipped = 0;
List<Object[]> batchArgs = new ArrayList<>();
for (Map.Entry<String, AisTargetEntity> entry : coalesced.entrySet()) {
String mmsi = entry.getKey();
AisTargetEntity current = entry.getValue();
// 이전 레코드 조회
boolean changed;
try {
Map<String, Object> prev = jdbcTemplate.queryForMap(
selectPrevSql, mmsi, hourBucketTs);
changed = hasStaticInfoChanged(current, prev);
} catch (org.springframework.dao.EmptyResultDataAccessException e) {
// 이전 레코드 없음: INSERT
changed = true;
}
if (changed) {
Timestamp etaTs = current.getEta() != null
? Timestamp.from(current.getEta().toInstant())
: null;
batchArgs.add(new Object[] {
mmsi, hourBucketTs,
current.getImo(), current.getName(), current.getCallsign(),
current.getVesselType(), current.getExtraInfo(),
current.getLength(), current.getWidth(), current.getDraught(),
current.getDestination(), etaTs, current.getStatus(),
current.getSignalKindCode(), current.getClassType()
});
inserted++;
} else {
skipped++;
}
}
if (!batchArgs.isEmpty()) {
jdbcTemplate.batchUpdate(insertSql, batchArgs);
}
log.info("t_vessel_static 동기화 완료: 총 {} 선박, INSERT {} 건, CDC 스킵 {} 건",
coalesced.size(), inserted, skipped);
return org.springframework.batch.repeat.RepeatStatus.FINISHED;
}, transactionManager)
.build();
}
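The CDC decision in `vesselStaticSyncStep` writes a new row only when at least one tracked static field differs from the previously stored record. A trimmed, self-contained sketch of that decision (field set reduced for illustration; names are assumptions):

```java
import java.util.Map;
import java.util.Objects;

/** Sketch of the change-data-capture check: insert only when a tracked field changed. */
public class StaticInfoCdc {
    private static final String[] TRACKED = {"imo", "name", "callsign", "vessel_type"};

    public static boolean hasChanged(Map<String, Object> current, Map<String, Object> previous) {
        for (String field : TRACKED) {
            if (!Objects.equals(current.get(field), previous.get(field))) {
                return true; // any difference triggers an INSERT
            }
        }
        return false; // identical record -> CDC skip
    }
}
```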
/**
* MMSI별 필드 COALESCE: 필드별 마지막 non-empty 조합
*/
private Map<String, AisTargetEntity> coalesceByMmsi(Collection<AisTargetEntity> entities) {
Map<String, AisTargetEntity> result = new LinkedHashMap<>();
for (AisTargetEntity entity : entities) {
if (entity.getMmsi() == null) continue;
result.merge(entity.getMmsi(), entity, (existing, incoming) -> {
// 최신 타임스탬프 기준, 필드별 non-empty 우선
return AisTargetEntity.builder()
.mmsi(existing.getMmsi())
.imo(coalesce(incoming.getImo(), existing.getImo()))
.name(coalesceStr(incoming.getName(), existing.getName()))
.callsign(coalesceStr(incoming.getCallsign(), existing.getCallsign()))
.vesselType(coalesceStr(incoming.getVesselType(), existing.getVesselType()))
.extraInfo(coalesceStr(incoming.getExtraInfo(), existing.getExtraInfo()))
.length(coalesce(incoming.getLength(), existing.getLength()))
.width(coalesce(incoming.getWidth(), existing.getWidth()))
.draught(coalesce(incoming.getDraught(), existing.getDraught()))
.destination(coalesceStr(incoming.getDestination(), existing.getDestination()))
.eta(coalesce(incoming.getEta(), existing.getEta()))
.status(coalesceStr(incoming.getStatus(), existing.getStatus()))
.signalKindCode(coalesceStr(incoming.getSignalKindCode(), existing.getSignalKindCode()))
.classType(coalesceStr(incoming.getClassType(), existing.getClassType()))
.messageTimestamp(coalesce(incoming.getMessageTimestamp(), existing.getMessageTimestamp()))
.build();
});
}
return result;
}
/**
* CDC: 정적 정보 변경 여부 비교
*/
private boolean hasStaticInfoChanged(AisTargetEntity current, Map<String, Object> prev) {
return !Objects.equals(current.getImo(), toLong(prev.get("imo")))
|| !Objects.equals(current.getName(), prev.get("name"))
|| !Objects.equals(current.getCallsign(), prev.get("callsign"))
|| !Objects.equals(current.getVesselType(), prev.get("vessel_type"))
|| !Objects.equals(current.getExtraInfo(), prev.get("extra_info"))
|| !Objects.equals(current.getLength(), toInt(prev.get("length")))
|| !Objects.equals(current.getWidth(), toInt(prev.get("width")))
|| !Objects.equals(current.getDraught(), toDouble(prev.get("draught")))
|| !Objects.equals(current.getDestination(), prev.get("destination"))
|| !Objects.equals(current.getStatus(), prev.get("status"))
|| !Objects.equals(current.getSignalKindCode(), prev.get("signal_kind_code"))
|| !Objects.equals(current.getClassType(), prev.get("class_type"));
}
private <T> T coalesce(T a, T b) {
return a != null ? a : b;
}
private String coalesceStr(String a, String b) {
return (a != null && !a.isBlank()) ? a : b;
}
private Long toLong(Object val) {
if (val == null) return null;
if (val instanceof Long l) return l;
if (val instanceof Number n) return n.longValue();
return null;
}
private Integer toInt(Object val) {
if (val == null) return null;
if (val instanceof Integer i) return i;
if (val instanceof Number n) return n.intValue();
return null;
}
private Double toDouble(Object val) {
if (val == null) return null;
if (val instanceof Double d) return d;
if (val instanceof Number n) return n.doubleValue();
return null;
}
}
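The field-wise COALESCE above follows a "last non-empty wins" rule: AIS static fields arrive sparsely, so a blank incoming value must not erase a previously seen one. A minimal standalone sketch of the two helpers:

```java
/** Sketch of the field-wise "last non-empty wins" merge used by coalesceByMmsi. */
public class Coalesce {
    /** Strings: blank values do not overwrite an existing value. */
    public static String str(String incoming, String existing) {
        return (incoming != null && !incoming.isBlank()) ? incoming : existing;
    }

    /** References: null values do not overwrite an existing value. */
    public static <T> T ref(T incoming, T existing) {
        return incoming != null ? incoming : existing;
    }
}
```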


@@ -1,6 +1,6 @@
package gc.mda.signal_batch.batch.job;
import gc.mda.signal_batch.global.util.VesselTrackDataJobListener;
import gc.mda.signal_batch.batch.listener.CacheBasedTrackJobListener;
import gc.mda.signal_batch.batch.listener.JobCompletionListener;
import gc.mda.signal_batch.batch.listener.PerformanceOptimizationListener;
import lombok.RequiredArgsConstructor;
@@ -25,8 +25,9 @@ public class VesselTrackAggregationJobConfig {
private final JobRepository jobRepository;
private final VesselTrackStepConfig vesselTrackStepConfig;
private final AisPositionSyncStepConfig aisPositionSyncStepConfig;
private final JobCompletionListener jobCompletionListener;
private final VesselTrackDataJobListener vesselTrackDataJobListener;
private final CacheBasedTrackJobListener cacheBasedTrackJobListener;
private final PerformanceOptimizationListener performanceOptimizationListener;
@Bean
@@ -35,11 +36,12 @@
.incrementer(new RunIdIncrementer())
.validator(trackJobParametersValidator())
.listener(jobCompletionListener)
.listener(vesselTrackDataJobListener)
.listener(cacheBasedTrackJobListener)
.listener(performanceOptimizationListener) // 성능 최적화 리스너 추가
.start(vesselTrackStepConfig.vesselTrackStep())
.next(vesselTrackStepConfig.gridTrackSummaryStep())
.next(vesselTrackStepConfig.areaTrackSummaryStep())
.next(aisPositionSyncStepConfig.aisPositionSyncStep())
.build();
}


@@ -7,8 +7,8 @@ import gc.mda.signal_batch.domain.vessel.service.VesselPreviousBucketCache;
import gc.mda.signal_batch.batch.processor.VesselTrackProcessor;
import gc.mda.signal_batch.batch.processor.AbnormalTrackDetector;
import gc.mda.signal_batch.batch.processor.AbnormalTrackDetector.AbnormalDetectionResult;
import gc.mda.signal_batch.batch.reader.InMemoryVesselTrackDataReader;
import gc.mda.signal_batch.global.util.VesselTrackDataHolder;
import gc.mda.signal_batch.batch.reader.AisTargetCacheManager;
import gc.mda.signal_batch.batch.reader.CacheBasedVesselTrackDataReader;
import gc.mda.signal_batch.global.util.TrackClippingUtils;
import gc.mda.signal_batch.batch.writer.VesselTrackBulkWriter;
import gc.mda.signal_batch.batch.writer.AbnormalTrackWriter;
@@ -36,9 +36,9 @@ import javax.sql.DataSource;
import java.sql.Timestamp;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;
import jakarta.annotation.PostConstruct;
@@ -53,7 +53,7 @@ public class VesselTrackStepConfig {
private final PlatformTransactionManager transactionManager;
private final DataSource queryDataSource;
private final VesselTrackProcessor vesselTrackProcessor;
private final VesselTrackDataHolder vesselTrackDataHolder;
private final AisTargetCacheManager aisTargetCacheManager;
private final VesselTrackBulkWriter vesselTrackBulkWriter;
private final TrackClippingUtils trackClippingUtils;
private final AbnormalTrackDetector abnormalTrackDetector;
@@ -61,14 +61,14 @@
private final VesselPreviousBucketCache previousBucketCache;
// 현재 처리 중인 버킷의 종료 위치 저장 (캐시 업데이트용)
private final Map<String, VesselBucketPositionDto> currentBucketEndPositions = new HashMap<>();
private final Map<String, VesselBucketPositionDto> currentBucketEndPositions = new ConcurrentHashMap<>();
public VesselTrackStepConfig(
JobRepository jobRepository,
PlatformTransactionManager transactionManager,
@Qualifier("queryDataSource") DataSource queryDataSource,
VesselTrackProcessor vesselTrackProcessor,
VesselTrackDataHolder vesselTrackDataHolder,
AisTargetCacheManager aisTargetCacheManager,
VesselTrackBulkWriter vesselTrackBulkWriter,
TrackClippingUtils trackClippingUtils,
AbnormalTrackDetector abnormalTrackDetector,
@@ -78,7 +78,7 @@ public class VesselTrackStepConfig {
this.transactionManager = transactionManager;
this.queryDataSource = queryDataSource;
this.vesselTrackProcessor = vesselTrackProcessor;
this.vesselTrackDataHolder = vesselTrackDataHolder;
this.aisTargetCacheManager = aisTargetCacheManager;
this.vesselTrackBulkWriter = vesselTrackBulkWriter;
this.trackClippingUtils = trackClippingUtils;
this.abnormalTrackDetector = abnormalTrackDetector;
@@ -89,6 +89,9 @@
@Value("${vessel.batch.chunk-size:1000}")
private int chunkSize;
@Value("${partition.retention.tables.t_vessel_tracks_5min.retention-days:7}")
private int trackRetentionDays;
@PostConstruct
public void init() {
// 5분 Job의 이름을 명시적으로 설정
@@ -108,8 +111,8 @@
@Bean
@StepScope
public InMemoryVesselTrackDataReader trackDataReader() {
return new InMemoryVesselTrackDataReader(vesselTrackDataHolder, chunkSize);
public CacheBasedVesselTrackDataReader trackDataReader() {
return new CacheBasedVesselTrackDataReader(aisTargetCacheManager, trackRetentionDays);
}
@Bean
@@ -124,7 +127,7 @@
// 2. 이전 버킷 위치 조회 (캐시 + DB Fallback)
List<String> vesselKeys = tracks.stream()
.map(track -> track.getSigSrcCd() + ":" + track.getTargetId())
.map(VesselTrack::getMmsi)
.distinct()
.collect(Collectors.toList());
@@ -138,10 +141,9 @@
boolean isAbnormal = false;
String abnormalReason = "";
// 선박/항공기 구분
boolean isAircraft = "000019".equals(track.getSigSrcCd());
double speedLimit = isAircraft ? 300.0 : 100.0; // 항공기 300, 선박 100
double distanceLimit = isAircraft ? 30.0 : 10.0; // 항공기 30nm, 선박 10nm
// S&P AIS API는 선박 전용이므로 항공기 구분 불필요
double speedLimit = 100.0;
double distanceLimit = 10.0;
// 버킷 평균속도 체크
if (track.getAvgSpeed() != null && track.getAvgSpeed().doubleValue() >= speedLimit) {
@@ -155,9 +157,9 @@
abnormalReason = "within_bucket_distance";
}
// 버킷 점프 검출 (NEW!)
// 버킷 점프 검출
if (!isAbnormal && track.getStartPosition() != null) {
String vesselKey = track.getSigSrcCd() + ":" + track.getTargetId();
String vesselKey = track.getMmsi();
VesselBucketPositionDto prevPosition = previousPositions.get(vesselKey);
if (prevPosition != null) {
@@ -166,10 +168,9 @@
track.getStartPosition().getLat(), track.getStartPosition().getLon()
);
// 위성 AIS는 2시간, 일반 신호는 15분 범위 체크
boolean isSatellite = "000016".equals(track.getSigSrcCd());
double maxGapMinutes = isSatellite ? 120.0 : 15.0;
double expectedMaxDistance = isAircraft ? (maxGapMinutes / 60.0 * 300.0) : (maxGapMinutes / 60.0 * 50.0);
// S&P AIS API: 위성/지상 구분 불가하므로 보수적으로 30분 gap 허용
double maxGapMinutes = 30.0;
double expectedMaxDistance = maxGapMinutes / 60.0 * 50.0;
if (jumpDistance > expectedMaxDistance) {
isAbnormal = true;
@@ -196,10 +197,8 @@
// 정상 궤적의 종료 위치 저장 (캐시 업데이트용)
if (track.getEndPosition() != null) {
String vesselKey = track.getSigSrcCd() + ":" + track.getTargetId();
currentBucketEndPositions.put(vesselKey, VesselBucketPositionDto.builder()
.sigSrcCd(track.getSigSrcCd())
.targetId(track.getTargetId())
currentBucketEndPositions.put(track.getMmsi(), VesselBucketPositionDto.builder()
.mmsi(track.getMmsi())
.endLon(track.getEndPosition().getLon())
.endLat(track.getEndPosition().getLat())
.endTime(track.getEndPosition().getTime())
@@ -232,15 +231,14 @@
abnormalTrackWriter.setJobName("vesselTrackAggregationJob");
List<AbnormalTrackDetector.AbnormalSegment> segments = new ArrayList<>();
Map<String, Object> details = new HashMap<>();
Map<String, Object> details = new ConcurrentHashMap<>();
details.put("avgSpeed", track.getAvgSpeed());
details.put("distanceNm", track.getDistanceNm());
details.put("timeBucket", track.getTimeBucket());
// 선박/항공기 구분
boolean isAircraft = "000019".equals(track.getSigSrcCd());
double speedLimit = isAircraft ? 300.0 : 100.0;
double distanceLimit = isAircraft ? 30.0 : 10.0;
// S&P AIS API는 선박 전용이므로 선박 기준 한계값 사용
double speedLimit = 100.0;
double distanceLimit = 10.0;
// 비정상 유형 결정
String abnormalType = "abnormal_5min";
@@ -339,17 +337,16 @@
String sql = """
INSERT INTO signal.t_grid_vessel_tracks (
haegu_no, sig_src_cd, target_id, time_bucket,
haegu_no, mmsi, time_bucket,
distance_nm, avg_speed, point_count, entry_time, exit_time
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT (haegu_no, sig_src_cd, target_id, time_bucket) DO NOTHING
) VALUES (?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT (haegu_no, mmsi, time_bucket) DO NOTHING
""";
List<Object[]> args = allClippedTracks.stream()
.map(track -> new Object[] {
track.getHaeguNo(),
track.getSigSrcCd(),
track.getTargetId(),
track.getMmsi(),
Timestamp.valueOf(track.getTimeBucket()),
track.getDistanceNm(),
track.getAvgSpeed(),
@@ -385,17 +382,16 @@
String sql = """
INSERT INTO signal.t_area_vessel_tracks (
area_id, sig_src_cd, target_id, time_bucket,
area_id, mmsi, time_bucket,
distance_nm, avg_speed, point_count, metrics
) VALUES (?, ?, ?, ?, ?, ?, ?, ?::jsonb)
ON CONFLICT (area_id, sig_src_cd, target_id, time_bucket) DO NOTHING
) VALUES (?, ?, ?, ?, ?, ?, ?::jsonb)
ON CONFLICT (area_id, mmsi, time_bucket) DO NOTHING
""";
List<Object[]> args = allClippedTracks.stream()
.map(track -> new Object[] {
track.getAreaId(),
track.getSigSrcCd(),
track.getTargetId(),
track.getMmsi(),
Timestamp.valueOf(track.getTimeBucket()),
track.getDistanceNm(),
track.getAvgSpeed(),
@@ -422,12 +418,11 @@
SELECT
haegu_no,
time_bucket,
COUNT(DISTINCT CONCAT(sig_src_cd, '_', target_id)) as total_vessels,
COUNT(DISTINCT mmsi) as total_vessels,
SUM(distance_nm) as total_distance_nm,
AVG(avg_speed) as avg_speed,
jsonb_agg(jsonb_build_object(
'sig_src_cd', sig_src_cd,
'target_id', target_id,
'mmsi', mmsi,
'distance_nm', distance_nm,
'avg_speed', avg_speed
)) as vessel_list
@@ -466,12 +461,11 @@
SELECT
area_id,
time_bucket,
COUNT(DISTINCT CONCAT(sig_src_cd, '_', target_id)) as total_vessels,
COUNT(DISTINCT mmsi) as total_vessels,
SUM(distance_nm) as total_distance_nm,
AVG(avg_speed) as avg_speed,
jsonb_agg(jsonb_build_object(
'sig_src_cd', sig_src_cd,
'target_id', target_id,
'mmsi', mmsi,
'distance_nm', distance_nm,
'avg_speed', avg_speed
)) as vessel_list
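The bucket-jump detection in `VesselTrackStepConfig` above compares a great-circle distance against a speed-derived bound (50 knots over the allowed 30-minute gap). A self-contained sketch of that check using the haversine formula; `EARTH_RADIUS_NM = 3440.065` matches the detector's constant, while the method names are illustrative:

```java
/** Sketch of the bucket-jump check: haversine distance (NM) vs. a 50 kn speed bound. */
public class JumpCheck {
    static final double EARTH_RADIUS_NM = 3440.065;

    public static double haversineNm(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return EARTH_RADIUS_NM * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
    }

    /** true when the vessel moved farther than 50 kn could cover in gapMinutes. */
    public static boolean isJump(double distanceNm, double gapMinutes) {
        return distanceNm > gapMinutes / 60.0 * 50.0;
    }
}
```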


@@ -0,0 +1,52 @@
package gc.mda.signal_batch.batch.listener;
import gc.mda.signal_batch.domain.gis.cache.AreaBoundaryCache;
import gc.mda.signal_batch.domain.vessel.service.VesselPreviousBucketCache;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobExecutionListener;
import org.springframework.batch.core.annotation.AfterJob;
import org.springframework.batch.core.annotation.BeforeJob;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.stereotype.Component;
/**
* 캐시 기반 Track Job 리스너
*
* 기존 VesselTrackDataJobListener 대체:
* - collectDB 데이터 로드 제거 (AisTargetCacheManager로 대체)
* - Area/Haegu 경계 캐시 갱신 유지
* - 이전 버킷 캐시 Fallback 플래그 리셋 유지
*/
@Slf4j
@Component
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
@RequiredArgsConstructor
public class CacheBasedTrackJobListener implements JobExecutionListener {
private final AreaBoundaryCache areaBoundaryCache;
private final VesselPreviousBucketCache previousBucketCache;
@BeforeJob
public void beforeJob(JobExecution jobExecution) {
// Area/Haegu 경계 캐시 갱신
areaBoundaryCache.refresh();
log.info("Refreshed area boundary cache");
// 이전 버킷 캐시 Fallback 플래그 리셋
previousBucketCache.resetFallbackFlag();
log.info("Reset previous bucket cache fallback flag");
log.info("Cache-based track job started: startTime={}, endTime={}",
jobExecution.getJobParameters().getString("startTime"),
jobExecution.getJobParameters().getString("endTime"));
}
@AfterJob
public void afterJob(JobExecution jobExecution) {
// Log DB lookup statistics
previousBucketCache.logJobStatistics();
log.debug("Cache-based track job completed");
}
}

파일 보기

@@ -29,12 +29,11 @@ public class AbnormalTrackDetector {
// Physical limits (set very leniently)
@SuppressWarnings("unused")
private static final double VESSEL_PHYSICAL_LIMIT_KNOTS = 100.0; // vessel physical limit
@SuppressWarnings("unused")
private static final double AIRCRAFT_PHYSICAL_LIMIT_KNOTS = 600.0; // aircraft physical limit
// Aircraft physical limit is unused after the S&P AIS API migration (vessel-only)
// Thresholds for detecting only clearly abnormal tracks
private static final double VESSEL_ABNORMAL_SPEED_KNOTS = 500.0; // abnormal vessel speed (very lenient)
private static final double AIRCRAFT_ABNORMAL_SPEED_KNOTS = 800.0; // abnormal aircraft speed
// Aircraft abnormal-speed threshold is unused after the S&P AIS API migration (vessel-only)
// Hourly distance thresholds (square-root scaling applied)
private static final double BASE_DISTANCE_5MIN_NM = 20.0; // baseline distance per 5 minutes (doubled)
@@ -46,7 +45,7 @@ public class AbnormalTrackDetector {
private static final long MIN_GAP_FOR_RELAXED_CHECK = 30; // gaps of 30+ minutes get a relaxed check
private static final double EARTH_RADIUS_NM = 3440.065;
private static final String AIRCRAFT_SIG_SRC_CD = "000019";
// The S&P AIS API is vessel-only, so no aircraft distinction is needed
@Data
@Builder
@@ -130,9 +129,8 @@ public class AbnormalTrackDetector {
return buildNormalResult(track);
}
// Hourly/Daily excludes tracks using separate vessel/aircraft limits
boolean isAircraft = AIRCRAFT_SIG_SRC_CD.equals(track.getSigSrcCd());
double speedLimit = isAircraft ? 300.0 : 100.0; // aircraft 300, vessels 100
// The S&P AIS API is vessel-only: use the vessel speed limit
double speedLimit = 100.0;
boolean shouldExclude = abnormalSegments.stream()
.anyMatch(seg -> seg.getActualValue() > speedLimit);
@@ -185,8 +183,7 @@ public class AbnormalTrackDetector {
private List<AbnormalSegment> checkAggregatedMetricsLenient(VesselTrack track) {
List<AbnormalSegment> abnormalSegments = new ArrayList<>();
boolean isAircraft = AIRCRAFT_SIG_SRC_CD.equals(track.getSigSrcCd());
double speedLimit = isAircraft ? AIRCRAFT_ABNORMAL_SPEED_KNOTS : VESSEL_ABNORMAL_SPEED_KNOTS;
double speedLimit = VESSEL_ABNORMAL_SPEED_KNOTS;
// Detect only clearly abnormal average speeds
if (track.getAvgSpeed() != null && track.getAvgSpeed().doubleValue() > speedLimit) {
@@ -259,8 +256,7 @@ public class AbnormalTrackDetector {
double timeScale = Math.sqrt(durationMinutes / 5.0);
double distanceThreshold = BASE_DISTANCE_5MIN_NM * timeScale * 3.0; // 3x margin
boolean isAircraft = AIRCRAFT_SIG_SRC_CD.equals(currentTrack.getSigSrcCd());
double speedLimit = isAircraft ? AIRCRAFT_ABNORMAL_SPEED_KNOTS : VESSEL_ABNORMAL_SPEED_KNOTS;
double speedLimit = VESSEL_ABNORMAL_SPEED_KNOTS;
// Detect only unmistakably abnormal segments
if (impliedSpeed > speedLimit && distance > distanceThreshold) {
@@ -345,9 +341,8 @@ public class AbnormalTrackDetector {
double impliedSpeed = (distance * 60.0) / durationMinutes;
// Hourly/Daily handles vessels and aircraft with separate limits
boolean isAircraft = AIRCRAFT_SIG_SRC_CD.equals(currentTrack.getSigSrcCd());
double speedLimit = isAircraft ? 300.0 : 100.0;
// The S&P AIS API is vessel-only, so no aircraft distinction is needed
double speedLimit = 100.0;
if (impliedSpeed > speedLimit) {
Map<String, Object> details = new HashMap<>();

파일 보기
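The gap checks in AbnormalTrackDetector above derive an implied speed from great-circle distance and scale the distance threshold by the square root of the gap duration. A stdlib-only Java sketch of that logic (constants copied from the diff; the class and method names here are illustrative, not the real AbnormalTrackDetector API):

```java
// Sketch of the implied-speed gap check with sqrt-scaled distance threshold.
public class ImpliedSpeedCheck {
    static final double EARTH_RADIUS_NM = 3440.065;
    static final double BASE_DISTANCE_5MIN_NM = 20.0;
    static final double VESSEL_ABNORMAL_SPEED_KNOTS = 500.0;

    // Great-circle (haversine) distance in nautical miles.
    static double haversineNm(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_NM * Math.asin(Math.sqrt(a));
    }

    // Flags a gap only when both the implied speed and the scaled distance are clearly abnormal.
    static boolean isAbnormalGap(double distanceNm, long durationMinutes) {
        if (durationMinutes <= 0) return false;
        double impliedSpeed = distanceNm * 60.0 / durationMinutes; // knots
        double timeScale = Math.sqrt(durationMinutes / 5.0);
        double distanceThreshold = BASE_DISTANCE_5MIN_NM * timeScale * 3.0; // 3x margin
        return impliedSpeed > VESSEL_ABNORMAL_SPEED_KNOTS && distanceNm > distanceThreshold;
    }
}
```

The double condition is why the detector is described as "very lenient": a long gap alone never trips it, only a gap that also implies a physically impossible speed.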

@@ -1,190 +0,0 @@
package gc.mda.signal_batch.batch.processor;
import gc.mda.signal_batch.domain.vessel.model.VesselData;
import gc.mda.signal_batch.batch.processor.AreaStatisticsProcessor.AreaStatistics;
import gc.mda.signal_batch.batch.processor.AreaStatisticsProcessor.VesselMovement;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.annotation.AfterStep;
import org.springframework.batch.core.annotation.BeforeStep;
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.stereotype.Component;
import java.math.BigDecimal;
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;
/**
 * Accumulating processor for area statistics.
 * Buffers all data in memory and aggregates it once when the step completes.
 */
@Slf4j
@Component
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
@StepScope
@RequiredArgsConstructor
public class AccumulatingAreaProcessor implements ItemProcessor<VesselData, AreaStatistics> {
private final AreaStatisticsProcessor areaStatisticsProcessor;
@Value("#{jobParameters['timeBucketMinutes']}")
private Integer timeBucketMinutes;
// Accumulates vessel data per area_id + time_bucket
private final Map<String, List<VesselData>> dataAccumulator = new ConcurrentHashMap<>();
// Processing statistics
private long processedCount = 0;
private long skippedCount = 0;
@BeforeStep
public void beforeStep(StepExecution stepExecution) {
int bucketMinutes = (timeBucketMinutes != null) ? timeBucketMinutes : 5;
log.info("AccumulatingAreaProcessor initialized with timeBucket: {} minutes", bucketMinutes);
dataAccumulator.clear();
processedCount = 0;
skippedCount = 0;
}
@Override
public AreaStatistics process(VesselData item) throws Exception {
if (!item.isValidPosition()) {
skippedCount++;
return null;
}
// Find the containing areas in memory
List<String> areaIds = areaStatisticsProcessor.findAreasForPointInMemory(
item.getLat(), item.getLon()
);
if (areaIds.isEmpty()) {
return null;
}
// Compute the time bucket
int bucketSize = timeBucketMinutes != null ? timeBucketMinutes : 5;
LocalDateTime bucket = item.getMessageTime()
.truncatedTo(ChronoUnit.MINUTES)
.withMinute((item.getMessageTime().getMinute() / bucketSize) * bucketSize);
// Accumulate the record under each containing area
for (String areaId : areaIds) {
String key = areaId + "||" + bucket.toString(); // delimiter changed to "||"
dataAccumulator.computeIfAbsent(key, k -> new ArrayList<>()).add(item);
}
processedCount++;
// Return null to suppress per-item output
return null;
}
@AfterStep
public void afterStep(StepExecution stepExecution) {
log.info("Processing accumulated data for {} area-timebucket combinations",
dataAccumulator.size());
log.info("Processed: {}, Skipped: {}", processedCount, skippedCount);
if (dataAccumulator.isEmpty()) {
return;
}
// Compute statistics from the accumulated data
List<AreaStatistics> allStatistics = new ArrayList<>();
dataAccumulator.forEach((key, vessels) -> {
String[] parts = key.split("\\|\\|", 2); // split on the "||" delimiter
if (parts.length != 2) {
log.error("Invalid key format: {}", key);
return;
}
String areaId = parts[0];
LocalDateTime timeBucket = LocalDateTime.parse(parts[1]);
AreaStatistics stats = new AreaStatistics(areaId, timeBucket);
Map<String, VesselMovement> vesselMovements = new HashMap<>();
// Compute movement info per vessel
Map<String, List<VesselData>> vesselGroups = new HashMap<>();
for (VesselData vessel : vessels) {
vesselGroups.computeIfAbsent(vessel.getVesselKey(), k -> new ArrayList<>())
.add(vessel);
}
vesselGroups.forEach((vesselKey, vesselDataList) -> {
// Sort chronologically
vesselDataList.sort(Comparator.comparing(VesselData::getMessageTime));
VesselMovement movement = new VesselMovement();
movement.setVesselKey(vesselKey);
movement.setEnterTime(vesselDataList.get(0).getMessageTime());
movement.setExitTime(vesselDataList.get(vesselDataList.size() - 1).getMessageTime());
movement.setPointCount(vesselDataList.size());
// Compute average speed
double totalSpeed = 0;
int speedCount = 0;
for (VesselData vd : vesselDataList) {
if (vd.getSog() != null) {
totalSpeed += vd.getSog().doubleValue();
speedCount++;
}
}
if (speedCount > 0) {
movement.setAvgSpeed(BigDecimal.valueOf(totalSpeed / speedCount)
.setScale(2, BigDecimal.ROUND_HALF_UP));
} else {
movement.setAvgSpeed(BigDecimal.ZERO);
}
// Classify stationary vs transit (stays longer than 10 minutes count as stationary)
long stayMinutes = ChronoUnit.MINUTES.between(
movement.getEnterTime(), movement.getExitTime()
);
if (stayMinutes > 10) {
stats.getStationaryVessels().put(vesselKey, movement);
} else {
stats.getTransitVessels().put(vesselKey, movement);
}
vesselMovements.put(vesselKey, movement);
});
// Finalize statistics
stats.setVesselCount(vesselMovements.size());
stats.setInCount(vesselMovements.size()); // entering vessels
stats.setOutCount(0); // TODO: logic needs refinement
// Overall average speed
List<BigDecimal> allSpeeds = new ArrayList<>();
vesselMovements.values().stream()
.map(VesselMovement::getAvgSpeed)
.filter(Objects::nonNull)
.forEach(allSpeeds::add);
if (!allSpeeds.isEmpty()) {
BigDecimal totalSpeed = allSpeeds.stream()
.reduce(BigDecimal.ZERO, BigDecimal::add);
stats.setAvgSog(totalSpeed.divide(
BigDecimal.valueOf(allSpeeds.size()), 2, BigDecimal.ROUND_HALF_UP));
} else {
stats.setAvgSog(BigDecimal.ZERO);
}
allStatistics.add(stats);
});
// Store the results in the StepExecution context
stepExecution.getExecutionContext().put("areaStatistics", allStatistics);
log.info("Calculated statistics for {} areas", allStatistics.size());
}
}

파일 보기
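The deleted AccumulatingAreaProcessor above floors each message time to a bucket and keys its accumulator on `areaId + "||" + bucket`, splitting with a limit of 2 on readback. A minimal sketch of that keying scheme (class and method names are illustrative):

```java
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;

// Bucket flooring plus the "||"-delimited composite key used by the accumulator.
public class BucketKey {
    // Floor to the enclosing bucketMinutes-sized bucket (e.g. 09:57 -> 09:55 for 5-minute buckets).
    static LocalDateTime floorToBucket(LocalDateTime t, int bucketMinutes) {
        return t.truncatedTo(ChronoUnit.MINUTES)
                .withMinute((t.getMinute() / bucketMinutes) * bucketMinutes);
    }

    static String key(String areaId, LocalDateTime bucket) {
        return areaId + "||" + bucket;
    }

    // Split with limit 2 so an areaId containing a single "_" or "|" cannot corrupt the parse.
    static String[] parse(String key) {
        return key.split("\\|\\|", 2);
    }
}
```

The limit-2 split is the reason the code moved from a plain `"_"` separator to `"||"`: area IDs can legally contain underscores, while `LocalDateTime.toString()` output never contains a pipe.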

@@ -1,206 +0,0 @@
package gc.mda.signal_batch.batch.processor;
import gc.mda.signal_batch.domain.gis.model.TileStatistics;
import gc.mda.signal_batch.domain.vessel.model.VesselData;
import gc.mda.signal_batch.global.util.HaeguGeoUtils;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.annotation.AfterStep;
import org.springframework.batch.core.annotation.BeforeStep;
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.stereotype.Component;
import java.math.BigDecimal;
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;
/**
 * Processor that accumulates all data for aggregation.
 * Buffers every record in memory while the step runs and writes the results once on step completion.
 */
@Slf4j
@Component
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
@StepScope
@RequiredArgsConstructor
public class AccumulatingTileProcessor implements ItemProcessor<VesselData, TileStatistics> {
private final HaeguGeoUtils geoUtils;
@Value("#{jobParameters['tileLevel']}")
private Integer tileLevel;
@Value("#{jobParameters['timeBucketMinutes']}")
private Integer timeBucketMinutes;
// Accumulator for the whole-run aggregation
private final Map<String, TileStatistics> accumulator = new ConcurrentHashMap<>();
// Tracks processed records
private long processedCount = 0;
private long skippedCount = 0;
@BeforeStep
public void beforeStep(StepExecution stepExecution) {
int level = (tileLevel != null) ? tileLevel : 1;
int bucketMinutes = (timeBucketMinutes != null) ? timeBucketMinutes : 5;
log.info("Starting AccumulatingTileProcessor - tileLevel: {}, timeBucket: {} minutes",
level, bucketMinutes);
// Reset state
accumulator.clear();
processedCount = 0;
skippedCount = 0;
}
@Override
public TileStatistics process(VesselData item) throws Exception {
if (item == null || !item.isValidPosition()) {
skippedCount++;
return null;
}
processedCount++;
int level = (tileLevel != null) ? tileLevel : 1;
int bucketMinutes = (timeBucketMinutes != null) ? timeBucketMinutes : 5;
LocalDateTime bucket = item.getMessageTime()
.truncatedTo(ChronoUnit.MINUTES)
.withMinute((item.getMessageTime().getMinute() / bucketMinutes) * bucketMinutes);
// Process level 0 (large sea block, daehaegu)
if (level >= 0) {
processLevel0(item, bucket);
}
// Process level 1 (small sea block, sohaegu)
if (level >= 1) {
processLevel1(item, bucket);
}
// Log progress every 10,000 records
if (processedCount % 10000 == 0) {
log.debug("Processed {} records, accumulated {} tiles",
processedCount, accumulator.size());
}
// Return null - actual output happens in AfterStep
return null;
}
private void processLevel0(VesselData item, LocalDateTime bucket) {
HaeguGeoUtils.HaeguTileInfo level0Info = geoUtils.getHaeguTileInfo(
item.getLat(), item.getLon(), 0
);
if (level0Info != null) {
String key = generateKey(level0Info.tileId, 0, bucket);
accumulator.compute(key, (k, existing) -> {
if (existing == null) {
existing = TileStatistics.builder()
.tileId(level0Info.tileId)
.tileLevel(0)
.timeBucket(bucket)
.uniqueVessels(new HashMap<>())
.totalPoints(0L)
.avgSog(BigDecimal.ZERO)
.maxSog(BigDecimal.ZERO)
.build();
}
existing.addVesselData(item);
return existing;
});
}
}
private void processLevel1(VesselData item, LocalDateTime bucket) {
HaeguGeoUtils.HaeguTileInfo level1Info = geoUtils.getHaeguTileInfo(
item.getLat(), item.getLon(), 1
);
if (level1Info != null && level1Info.sohaeguNo != null) {
String key = generateKey(level1Info.tileId, 1, bucket);
accumulator.compute(key, (k, existing) -> {
if (existing == null) {
existing = TileStatistics.builder()
.tileId(level1Info.tileId)
.tileLevel(1)
.timeBucket(bucket)
.uniqueVessels(new HashMap<>())
.totalPoints(0L)
.avgSog(BigDecimal.ZERO)
.maxSog(BigDecimal.ZERO)
.build();
}
existing.addVesselData(item);
return existing;
});
}
}
private String generateKey(String tileId, int tileLevel, LocalDateTime timeBucket) {
return String.format("%s|%d|%s", tileId, tileLevel, timeBucket);
}
@AfterStep
public void afterStep(StepExecution stepExecution) {
log.info("AccumulatingTileProcessor completed - processed: {}, skipped: {}, tiles: {}",
processedCount, skippedCount, accumulator.size());
// Compute densities
accumulator.values().forEach(this::calculateDensity);
// Store metrics
stepExecution.getExecutionContext().putLong("totalProcessed", processedCount);
stepExecution.getExecutionContext().putLong("totalSkipped", skippedCount);
stepExecution.getExecutionContext().putInt("totalTiles", accumulator.size());
// Must not write to the DB directly from here - a StepListener has to handle persistence
log.info("Accumulated {} tiles ready for writing", accumulator.size());
}
private void calculateDensity(TileStatistics stats) {
if (stats.getVesselCount() == null || stats.getVesselCount() == 0) {
stats.setVesselDensity(BigDecimal.ZERO);
return;
}
double tileArea = geoUtils.getTileArea(stats.getTileId());
if (tileArea > 0) {
BigDecimal density = BigDecimal.valueOf(stats.getVesselCount())
.divide(BigDecimal.valueOf(tileArea), 6, BigDecimal.ROUND_HALF_UP);
stats.setVesselDensity(density);
} else {
stats.setVesselDensity(BigDecimal.ZERO);
}
}
/**
 * Returns the accumulated results (for tests)
 */
public List<TileStatistics> getAccumulatedResults() {
log.info("[AccumulatingTileProcessor] getAccumulatedResults called - size: {}", accumulator.size());
return new ArrayList<>(accumulator.values());
}
/**
 * Clears the accumulated data
 */
public void clear() {
accumulator.clear();
processedCount = 0;
skippedCount = 0;
}
}

파일 보기
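The deleted AccumulatingTileProcessor above merges per-tile statistics with `ConcurrentHashMap.compute`, which keeps each read-modify-write atomic if the step ever runs multi-threaded. A simplified sketch of that pattern (the `Stats` type here is a stand-in for the real `TileStatistics`):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Accumulate-then-flush: per-key stats merged atomically via compute().
public class TileAccumulator {
    static class Stats {
        long points;
        double maxSog;
    }

    private final Map<String, Stats> accumulator = new ConcurrentHashMap<>();

    // Merge one observation into the tile's running stats.
    void add(String tileKey, double sog) {
        accumulator.compute(tileKey, (k, existing) -> {
            Stats s = (existing == null) ? new Stats() : existing;
            s.points++;
            s.maxSog = Math.max(s.maxSog, sog);
            return s;
        });
    }

    Stats get(String tileKey) { return accumulator.get(tileKey); }

    int size() { return accumulator.size(); }
}
```

`compute` runs the remapping function under the bin lock, so unlike a get-then-put sequence it cannot lose an increment under concurrent writers, which is the same concurrency fix the commit message cites for VesselTrackStepConfig.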

@@ -0,0 +1,85 @@
package gc.mda.signal_batch.batch.processor;
import gc.mda.signal_batch.domain.vessel.dto.AisTargetDto;
import gc.mda.signal_batch.domain.vessel.model.AisTargetEntity;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.stereotype.Component;
import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;
/**
 * Processor converting AIS target DTOs to entities.
 *
 * - Timestamp parsing (ISO 8601)
 * - Validity filtering (MMSI, lat, lon required)
 * - gc-signal-batch handles mmsi as a String
 */
@Slf4j
@Component
public class AisTargetDataProcessor implements ItemProcessor<AisTargetDto, AisTargetEntity> {
private static final DateTimeFormatter ISO_FORMATTER = DateTimeFormatter.ISO_DATE_TIME;
@Override
public AisTargetEntity process(AisTargetDto dto) {
// Validation: MMSI and position are required
if (dto.getMmsi() == null || dto.getMmsi().isBlank()
|| dto.getLat() == null || dto.getLon() == null) {
log.debug("Skipping invalid record - MMSI: {}, Lat: {}, Lon: {}",
dto.getMmsi(), dto.getLat(), dto.getLon());
return null;
}
// Parse MessageTimestamp
OffsetDateTime messageTimestamp = parseTimestamp(dto.getMessageTimestamp());
if (messageTimestamp == null) {
log.debug("Failed to parse MessageTimestamp - MMSI: {}, Timestamp: {}",
dto.getMmsi(), dto.getMessageTimestamp());
return null;
}
return AisTargetEntity.builder()
.mmsi(dto.getMmsi())
.imo(dto.getImo())
.name(dto.getName())
.callsign(dto.getCallsign())
.vesselType(dto.getVesselType())
.extraInfo(dto.getExtraInfo())
.lat(dto.getLat())
.lon(dto.getLon())
.heading(dto.getHeading())
.sog(dto.getSog())
.cog(dto.getCog())
.rot(dto.getRot())
.length(dto.getLength())
.width(dto.getWidth())
.draught(dto.getDraught())
.destination(dto.getDestination())
.eta(parseEta(dto.getEta()))
.status(dto.getStatus())
.messageTimestamp(messageTimestamp)
.build();
}
private OffsetDateTime parseTimestamp(String timestamp) {
if (timestamp == null || timestamp.isEmpty()) {
return null;
}
try {
return OffsetDateTime.parse(timestamp, ISO_FORMATTER);
} catch (DateTimeParseException e) {
log.trace("Failed to parse timestamp: {}", timestamp);
return null;
}
}
private OffsetDateTime parseEta(String eta) {
if (eta == null || eta.isEmpty() || "9999-12-31T23:59:59Z".equals(eta)) {
return null;
}
return parseTimestamp(eta);
}
}

파일 보기
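The AisTargetDataProcessor above parses ISO-8601 timestamps, maps unparsable input to null, and treats the `9999-12-31T23:59:59Z` placeholder ETA as absent. A condensed sketch of those rules:

```java
import java.time.OffsetDateTime;
import java.time.format.DateTimeParseException;

// Null-safe ISO-8601 parsing with the placeholder-ETA sentinel mapped to "no ETA".
public class AisTimeParsing {
    static OffsetDateTime parseTimestamp(String ts) {
        if (ts == null || ts.isEmpty()) return null;
        try {
            // OffsetDateTime.parse defaults to ISO_OFFSET_DATE_TIME.
            return OffsetDateTime.parse(ts);
        } catch (DateTimeParseException e) {
            return null; // invalid input filters the record upstream
        }
    }

    static OffsetDateTime parseEta(String eta) {
        if (eta == null || eta.isEmpty() || "9999-12-31T23:59:59Z".equals(eta)) {
            return null;
        }
        return parseTimestamp(eta);
    }
}
```

Returning null rather than throwing matches the processor contract: a null from `process()` silently drops the item instead of failing the chunk.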

@@ -1,333 +0,0 @@
package gc.mda.signal_batch.batch.processor;
import gc.mda.signal_batch.domain.vessel.model.VesselData;
import gc.mda.signal_batch.global.util.DataSourceLogger;
import lombok.Data;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.locationtech.jts.geom.*;
import org.locationtech.jts.io.WKTReader;
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;
import javax.sql.DataSource;
import jakarta.annotation.PostConstruct;
import java.math.BigDecimal;
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;
@Slf4j
@Component
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
@RequiredArgsConstructor
public class AreaStatisticsProcessor {
@Qualifier("queryJdbcTemplate")
private final JdbcTemplate queryJdbcTemplate;
@Qualifier("queryDataSource")
private final DataSource queryDataSource;
// In-memory cache of area definitions
private final Map<String, AreaInfo> areaCache = new ConcurrentHashMap<>();
private final List<AreaInfo> areaList = new ArrayList<>();
// JTS objects
private final GeometryFactory geometryFactory = new GeometryFactory(new PrecisionModel(), 4326);
private final WKTReader wktReader = new WKTReader(geometryFactory);
@PostConstruct
public void init() {
log.info("========== AreaStatisticsProcessor Initialization ==========");
DataSourceLogger.logJdbcTemplateInfo("AreaStatisticsProcessor", queryJdbcTemplate);
// Verify the t_areas table exists
boolean tableExists = DataSourceLogger.checkTableExists(
"AreaStatisticsProcessor", queryJdbcTemplate, "signal", "t_areas"
);
if (!tableExists) {
log.error("CRITICAL: Table signal.t_areas does not exist in query database!");
log.error("Please run: scripts/sql/create-query-db-schema.sql on the query database");
} else {
// Load area definitions at initialization
loadAreas();
}
log.info("========== End of Initialization ==========");
}
@Data
public static class AreaInfo {
private String areaId;
private String areaName;
private String areaType;
private String geomWkt;
private Geometry geometry; // JTS Geometry object
private Envelope envelope; // Bounding Box for quick filtering
}
@Data
public static class AreaStatistics implements java.io.Serializable {
private String areaId;
private LocalDateTime timeBucket;
private Integer vesselCount;
private Integer inCount;
private Integer outCount;
private Map<String, VesselMovement> transitVessels;
private Map<String, VesselMovement> stationaryVessels;
private BigDecimal avgSog;
private LocalDateTime createdAt;
public AreaStatistics(String areaId, LocalDateTime timeBucket) {
this.areaId = areaId;
this.timeBucket = timeBucket;
this.vesselCount = 0;
this.inCount = 0;
this.outCount = 0;
this.transitVessels = new HashMap<>();
this.stationaryVessels = new HashMap<>();
this.avgSog = BigDecimal.ZERO;
}
}
@Data
public static class VesselMovement implements java.io.Serializable {
private String vesselKey;
private LocalDateTime enterTime;
private LocalDateTime exitTime;
private BigDecimal avgSpeed;
private Integer pointCount;
}
@StepScope
public ItemProcessor<List<VesselData>, List<AreaStatistics>> batchProcessor() {
return batchProcessor(null);
}
@StepScope
public ItemProcessor<List<VesselData>, List<AreaStatistics>> batchProcessor(
@Value("#{jobParameters['timeBucketMinutes']}") Integer bucketMinutes) {
return items -> {
// Return an empty result when no areas are loaded
if (areaList.isEmpty()) {
log.warn("No areas loaded, skipping area statistics processing");
return new ArrayList<>();
}
Map<String, AreaStatistics> statsMap = new HashMap<>();
Map<String, Map<String, VesselMovement>> vesselTracker = new HashMap<>();
for (VesselData item : items) {
if (!item.isValidPosition()) {
continue;
}
// Find the containing areas in memory (no DB query!)
List<String> areaIds = findAreasForPointInMemory(item.getLat(), item.getLon());
int bucketSize = bucketMinutes != null ? bucketMinutes : 5; // changed to 5-minute buckets
LocalDateTime bucket = item.getMessageTime()
.truncatedTo(ChronoUnit.MINUTES)
.withMinute((item.getMessageTime().getMinute() / bucketSize) * bucketSize);
for (String areaId : areaIds) {
String statsKey = areaId + "_" + bucket.toString();
AreaStatistics stats = statsMap.computeIfAbsent(statsKey,
k -> new AreaStatistics(areaId, bucket)
);
// Track vessel movement
String vesselKey = item.getVesselKey();
Map<String, VesselMovement> areaVessels = vesselTracker.computeIfAbsent(
areaId, k -> new HashMap<>()
);
VesselMovement movement = areaVessels.computeIfAbsent(vesselKey,
k -> {
VesselMovement vm = new VesselMovement();
vm.setVesselKey(vesselKey);
vm.setEnterTime(item.getMessageTime());
vm.setPointCount(0);
vm.setAvgSpeed(BigDecimal.ZERO);
stats.setInCount(stats.getInCount() + 1);
return vm;
}
);
movement.setExitTime(item.getMessageTime());
movement.setPointCount(movement.getPointCount() + 1);
// Compute average speed
if (item.getSog() != null) {
BigDecimal currentTotal = movement.getAvgSpeed()
.multiply(BigDecimal.valueOf(movement.getPointCount() - 1));
movement.setAvgSpeed(
currentTotal.add(item.getSog())
.divide(BigDecimal.valueOf(movement.getPointCount()), 2, BigDecimal.ROUND_HALF_UP)
);
}
// Classify stationary vs transit (stays longer than 10 minutes count as stationary)
long stayMinutes = ChronoUnit.MINUTES.between(
movement.getEnterTime(), movement.getExitTime()
);
if (stayMinutes > 10) {
stats.getStationaryVessels().put(vesselKey, movement);
} else {
stats.getTransitVessels().put(vesselKey, movement);
}
}
}
// Finalize statistics
statsMap.values().forEach(stats -> {
stats.setVesselCount(
stats.getTransitVessels().size() + stats.getStationaryVessels().size()
);
// Compute average speed
List<BigDecimal> allSpeeds = new ArrayList<>();
stats.getTransitVessels().values().stream()
.map(VesselMovement::getAvgSpeed)
.filter(Objects::nonNull)
.forEach(allSpeeds::add);
stats.getStationaryVessels().values().stream()
.map(VesselMovement::getAvgSpeed)
.filter(Objects::nonNull)
.forEach(allSpeeds::add);
if (!allSpeeds.isEmpty()) {
BigDecimal totalSpeed = allSpeeds.stream()
.reduce(BigDecimal.ZERO, BigDecimal::add);
stats.setAvgSog(
totalSpeed.divide(BigDecimal.valueOf(allSpeeds.size()), 2, BigDecimal.ROUND_HALF_UP)
);
}
});
return new ArrayList<>(statsMap.values());
};
}
private void loadAreas() {
log.info("Loading areas from query database...");
DataSourceLogger.logJdbcTemplateInfo("AreaStatisticsProcessor.loadAreas", queryJdbcTemplate);
String sql = "SELECT area_id, area_name, area_type, public.ST_AsText(area_geom) as geom_wkt FROM signal.t_areas";
try {
boolean exists = DataSourceLogger.checkTableExists(
"AreaStatisticsProcessor.loadAreas", queryJdbcTemplate, "signal", "t_areas"
);
if (exists) {
List<AreaInfo> areas = queryJdbcTemplate.query(sql, (rs, rowNum) -> {
AreaInfo area = new AreaInfo();
area.setAreaId(rs.getString("area_id"));
area.setAreaName(rs.getString("area_name"));
area.setAreaType(rs.getString("area_type"));
area.setGeomWkt(rs.getString("geom_wkt"));
// Convert WKT to a JTS Geometry
try {
Geometry geom = wktReader.read(area.getGeomWkt());
area.setGeometry(geom);
area.setEnvelope(geom.getEnvelopeInternal());
} catch (Exception e) {
log.error("Failed to parse WKT for area {}: {}", area.getAreaId(), e.getMessage());
}
return area;
});
areas.forEach(area -> {
areaCache.put(area.getAreaId(), area);
areaList.add(area);
});
log.info("Successfully loaded {} areas into memory cache", areas.size());
log.info("Area types: {}", areas.stream()
.collect(java.util.stream.Collectors.groupingBy(
AreaInfo::getAreaType,
java.util.stream.Collectors.counting()
)));
} else {
log.error("Cannot load areas - table signal.t_areas does not exist!");
}
} catch (Exception e) {
log.error("Failed to load areas", e);
}
}
/**
 * Finds the areas containing a point, entirely in memory (no DB query!)
 */
public List<String> findAreasForPointInMemory(double lat, double lon) {
// Create a JTS Point
Point point = geometryFactory.createPoint(new Coordinate(lon, lat));
return areaList.parallelStream()
.filter(area -> area.getGeometry() != null)
.filter(area -> area.getEnvelope().contains(lon, lat))
.filter(area -> {
try {
return area.getGeometry().contains(point);
} catch (Exception e) {
return false;
}
})
.map(AreaInfo::getAreaId)
.collect(Collectors.toList());
// List<String> areaIds = new ArrayList<>();
// // Check contains against every area
// for (AreaInfo area : areaList) {
// if (area.getGeometry() == null) {
// continue;
// }
//
// // 1. Fast envelope (bounding box) filtering
// if (!area.getEnvelope().contains(lon, lat)) {
// continue;
// }
//
// // 2. Exact contains check
// try {
// if (area.getGeometry().contains(point)) {
// areaIds.add(area.getAreaId());
// }
// } catch (Exception e) {
// log.debug("Error checking contains for area {}: {}", area.getAreaId(), e.getMessage());
// }
// }
//
// return areaIds;
}
/**
 * Cache status snapshot (for debugging/monitoring)
 */
public Map<String, Object> getCacheStats() {
Map<String, Object> stats = new HashMap<>();
stats.put("loadedAreas", areaList.size());
stats.put("areaTypes", areaList.stream()
.collect(java.util.stream.Collectors.groupingBy(
AreaInfo::getAreaType,
java.util.stream.Collectors.counting()
)));
return stats;
}
}

파일 보기
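The area lookup above performs a cheap bounding-box rejection before the exact containment test. The real code relies on JTS (`Envelope.contains` followed by `Geometry.contains`); the stdlib-only stand-in below only illustrates the same two-phase idea, using a hand-rolled even-odd ray-casting test on a simple ring:

```java
// Two-phase point-in-polygon: fast bbox reject, then exact ray casting.
public class PointInArea {
    static boolean inBbox(double[] xs, double[] ys, double x, double y) {
        double minX = Double.MAX_VALUE, maxX = -Double.MAX_VALUE;
        double minY = Double.MAX_VALUE, maxY = -Double.MAX_VALUE;
        for (int i = 0; i < xs.length; i++) {
            minX = Math.min(minX, xs[i]); maxX = Math.max(maxX, xs[i]);
            minY = Math.min(minY, ys[i]); maxY = Math.max(maxY, ys[i]);
        }
        return x >= minX && x <= maxX && y >= minY && y <= maxY;
    }

    // Even-odd ray casting against a simple (non-self-intersecting) ring.
    static boolean contains(double[] xs, double[] ys, double x, double y) {
        if (!inBbox(xs, ys, x, y)) return false; // fast reject
        boolean inside = false;
        for (int i = 0, j = xs.length - 1; i < xs.length; j = i++) {
            if ((ys[i] > y) != (ys[j] > y)
                    && x < (xs[j] - xs[i]) * (y - ys[i]) / (ys[j] - ys[i]) + xs[i]) {
                inside = !inside;
            }
        }
        return inside;
    }
}
```

With hundreds of areas per point and complex coastline polygons, most candidates fail the constant-time bbox check, so the expensive exact test runs only a handful of times, which is why the processor caches the `Envelope` alongside each `Geometry`.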

@@ -46,8 +46,8 @@ public abstract class BaseTrackProcessorWithAbnormalDetection implements ItemPro
AbnormalDetectionResult result = abnormalTrackDetector.detectBucketTransitionOnly(track, previousTrack);
if (result.hasAbnormalities()) {
log.debug("Abnormal track detected for vessel {}/{} at {}: {}",
track.getSigSrcCd(), track.getTargetId(), track.getTimeBucket(),
log.debug("Abnormal track detected for vessel {} at {}: {}",
track.getMmsi(), track.getTimeBucket(),
result.getAbnormalSegments().size());
}
@@ -60,12 +60,11 @@ public abstract class BaseTrackProcessorWithAbnormalDetection implements ItemPro
protected VesselTrack getPreviousBucketLastTrack(VesselTrack.VesselKey vesselKey) {
try {
String sql = """
SELECT sig_src_cd, target_id, time_bucket,
SELECT mmsi, time_bucket,
end_position,
public.ST_AsText(public.ST_LineSubstring(track_geom, 0.9, 1.0)) as last_segment
FROM %s
WHERE sig_src_cd = ?
AND target_id = ?
WHERE mmsi = ?
AND time_bucket >= ?
AND time_bucket < ?
ORDER BY time_bucket DESC
@@ -83,14 +82,13 @@ public abstract class BaseTrackProcessorWithAbnormalDetection implements ItemPro
return jdbcTemplate.queryForObject(sql,
(rs, rowNum) -> {
return VesselTrack.builder()
.sigSrcCd(rs.getString("sig_src_cd"))
.targetId(rs.getString("target_id"))
.mmsi(rs.getString("mmsi"))
.timeBucket(rs.getTimestamp("time_bucket").toLocalDateTime())
.trackGeom(rs.getString("last_segment"))
.endPosition(parseEndPosition(rs.getString("end_position")))
.build();
},
vesselKey.getSigSrcCd(), vesselKey.getTargetId(), previousBucketTimestamp, currentBucketTimestamp
vesselKey.getMmsi(), previousBucketTimestamp, currentBucketTimestamp
);
} catch (Exception e) {
log.debug("No previous bucket track found for vessel {}", vesselKey);

파일 보기

@@ -39,8 +39,7 @@ public class DailyTrackProcessor implements ItemProcessor<VesselTrack.VesselKey,
WITH ordered_tracks AS (
SELECT *
FROM signal.t_vessel_tracks_hourly
WHERE sig_src_cd = ?
AND target_id = ?
WHERE mmsi = ?
AND time_bucket >= ?
AND time_bucket < ?
AND track_geom IS NOT NULL
@@ -49,28 +48,26 @@ public class DailyTrackProcessor implements ItemProcessor<VesselTrack.VesselKey,
),
merged_coords AS (
SELECT
sig_src_cd,
target_id,
mmsi,
string_agg(
substring(public.ST_AsText(track_geom) from 'M \\((.+)\\)'),
','
ORDER BY time_bucket
) FILTER (WHERE track_geom IS NOT NULL) as all_coords
FROM ordered_tracks
GROUP BY sig_src_cd, target_id
GROUP BY mmsi
),
merged_tracks AS (
SELECT
mc.sig_src_cd,
mc.target_id,
mc.mmsi,
TO_TIMESTAMP(?, 'YYYY-MM-DD HH24:MI:SS') as time_bucket,
public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')') as merged_geom,
(SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
(SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
(SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
(SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
(SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
(SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
(SELECT MAX(max_speed) FROM ordered_tracks WHERE mmsi = mc.mmsi) as max_speed,
(SELECT SUM(point_count) FROM ordered_tracks WHERE mmsi = mc.mmsi) as total_points,
(SELECT MIN(time_bucket) FROM ordered_tracks WHERE mmsi = mc.mmsi) as start_time,
(SELECT MAX(time_bucket) FROM ordered_tracks WHERE mmsi = mc.mmsi) as end_time,
(SELECT start_position FROM ordered_tracks WHERE mmsi = mc.mmsi ORDER BY time_bucket LIMIT 1) as start_pos,
(SELECT end_position FROM ordered_tracks WHERE mmsi = mc.mmsi ORDER BY time_bucket DESC LIMIT 1) as end_pos
FROM merged_coords mc
),
calculated_tracks AS (
@@ -89,8 +86,7 @@ public class DailyTrackProcessor implements ItemProcessor<VesselTrack.VesselKey,
FROM merged_tracks
)
SELECT
sig_src_cd,
target_id,
mmsi,
time_bucket,
merged_geom,
total_distance,
@@ -112,13 +108,12 @@ public class DailyTrackProcessor implements ItemProcessor<VesselTrack.VesselKey,
LocalDateTime startTime = dayBucket;
LocalDateTime endTime = dayBucket.plusDays(1);
// Convert to java.sql.Timestamp for proper PostgreSQL type handling
Timestamp startTimestamp = Timestamp.valueOf(startTime);
Timestamp endTimestamp = Timestamp.valueOf(endTime);
Timestamp dayBucketTimestamp = Timestamp.valueOf(dayBucket);
log.debug("DailyTrackProcessor params - sig_src_cd: {}, target_id: {}, startTime: {}, endTime: {}, dayBucket: {}",
vesselKey.getSigSrcCd(), vesselKey.getTargetId(), startTimestamp, endTimestamp, dayBucketTimestamp);
log.debug("DailyTrackProcessor params - mmsi: {}, startTime: {}, endTime: {}, dayBucket: {}",
vesselKey.getMmsi(), startTimestamp, endTimestamp, dayBucketTimestamp);
try {
return jdbcTemplate.queryForObject(sql,
@@ -129,22 +124,21 @@ public class DailyTrackProcessor implements ItemProcessor<VesselTrack.VesselKey,
throw new RuntimeException("Failed to build daily track", e);
}
},
vesselKey.getSigSrcCd(), vesselKey.getTargetId(),
vesselKey.getMmsi(),
startTimestamp, endTimestamp, dayBucketTimestamp
);
} catch (org.springframework.dao.EmptyResultDataAccessException e) {
log.warn("No hourly data found for vessel {} in time range {}-{}, skipping daily aggregation",
vesselKey.getSigSrcCd() + "_" + vesselKey.getTargetId(), startTimestamp, endTimestamp);
vesselKey.getMmsi(), startTimestamp, endTimestamp);
return null;
} catch (Exception e) {
log.error("Failed to process daily track for vessel {}: {}",
vesselKey.getSigSrcCd() + "_" + vesselKey.getTargetId(), e.getMessage(), e);
vesselKey.getMmsi(), e.getMessage(), e);
return null;
}
}
private VesselTrack buildDailyTrack(ResultSet rs, LocalDateTime dayBucket) throws Exception {
// Extract start/end positions
VesselTrack.TrackPosition startPos = null;
VesselTrack.TrackPosition endPos = null;
@@ -154,30 +148,23 @@ public class DailyTrackProcessor implements ItemProcessor<VesselTrack.VesselKey,
if (startPosJson != null) {
startPos = parseTrackPosition(startPosJson);
}
if (endPosJson != null) {
endPos = parseTrackPosition(endPosJson);
}
// M values have already been recalculated in SQL
String dailyLineStringM = rs.getString("geom_text");
// Simplify the daily track (drop points within 20 m, keep at most 30-minute gaps)
String simplifiedLineStringM = TrackSimplificationUtils.simplifyDailyTrack(dailyLineStringM);
// Log simplification statistics
if (!dailyLineStringM.equals(simplifiedLineStringM)) {
TrackSimplificationUtils.SimplificationStats stats =
TrackSimplificationUtils.getSimplificationStats(dailyLineStringM, simplifiedLineStringM);
log.debug("일별 궤적 간소화 - vessel: {}/{}, 원본: {}포인트, 간소화: {}포인트 ({}% 감소)",
rs.getString("sig_src_cd"), rs.getString("target_id"),
log.debug("일별 궤적 간소화 - vessel: {}, 원본: {}포인트, 간소화: {}포인트 ({}% 감소)",
rs.getString("mmsi"),
stats.originalPoints, stats.simplifiedPoints, (int)stats.reductionRate);
}
// Only track_geom is used
return VesselTrack.builder()
.sigSrcCd(rs.getString("sig_src_cd"))
.targetId(rs.getString("target_id"))
.mmsi(rs.getString("mmsi"))
.timeBucket(dayBucket)
.trackGeom(simplifiedLineStringM)
.distanceNm(rs.getBigDecimal("total_distance"))


@@ -36,8 +36,7 @@ public class HourlyTrackProcessor implements ItemProcessor<VesselTrack.VesselKey
WITH ordered_tracks AS (
SELECT *
FROM signal.t_vessel_tracks_5min
WHERE sig_src_cd = ?
AND target_id = ?
WHERE mmsi = ?
AND time_bucket >= ?
AND time_bucket < ?
AND track_geom IS NOT NULL
@@ -46,28 +45,26 @@ public class HourlyTrackProcessor implements ItemProcessor<VesselTrack.VesselKey
),
merged_coords AS (
SELECT
sig_src_cd,
target_id,
mmsi,
string_agg(
substring(public.ST_AsText(track_geom) from 'M \\((.+)\\)'),
','
ORDER BY time_bucket
) FILTER (WHERE track_geom IS NOT NULL) as all_coords
FROM ordered_tracks
GROUP BY sig_src_cd, target_id
GROUP BY mmsi
),
merged_tracks AS (
SELECT
mc.sig_src_cd,
mc.target_id,
mc.mmsi,
TO_TIMESTAMP(?, 'YYYY-MM-DD HH24:MI:SS') as time_bucket,
public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')') as merged_geom,
(SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
(SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
(SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
(SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
(SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
(SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
(SELECT MAX(max_speed) FROM ordered_tracks WHERE mmsi = mc.mmsi) as max_speed,
(SELECT SUM(point_count) FROM ordered_tracks WHERE mmsi = mc.mmsi) as total_points,
(SELECT MIN(time_bucket) FROM ordered_tracks WHERE mmsi = mc.mmsi) as start_time,
(SELECT MAX(time_bucket) FROM ordered_tracks WHERE mmsi = mc.mmsi) as end_time,
(SELECT start_position FROM ordered_tracks WHERE mmsi = mc.mmsi ORDER BY time_bucket LIMIT 1) as start_pos,
(SELECT end_position FROM ordered_tracks WHERE mmsi = mc.mmsi ORDER BY time_bucket DESC LIMIT 1) as end_pos
FROM merged_coords mc
),
calculated_tracks AS (
@@ -86,8 +83,7 @@ public class HourlyTrackProcessor implements ItemProcessor<VesselTrack.VesselKey
FROM merged_tracks
)
SELECT
sig_src_cd,
target_id,
mmsi,
time_bucket,
merged_geom,
total_distance,
@@ -109,13 +105,12 @@ public class HourlyTrackProcessor implements ItemProcessor<VesselTrack.VesselKey
LocalDateTime startTime = hourBucket;
LocalDateTime endTime = hourBucket.plusHours(1);
// Convert to java.sql.Timestamp for proper PostgreSQL type handling
Timestamp startTimestamp = Timestamp.valueOf(startTime);
Timestamp endTimestamp = Timestamp.valueOf(endTime);
Timestamp hourBucketTimestamp = Timestamp.valueOf(hourBucket);
log.debug("HourlyTrackProcessor params - sig_src_cd: {}, target_id: {}, startTime: {}, endTime: {}, hourBucket: {}",
vesselKey.getSigSrcCd(), vesselKey.getTargetId(), startTimestamp, endTimestamp, hourBucketTimestamp);
log.debug("HourlyTrackProcessor params - mmsi: {}, startTime: {}, endTime: {}, hourBucket: {}",
vesselKey.getMmsi(), startTimestamp, endTimestamp, hourBucketTimestamp);
try {
return jdbcTemplate.queryForObject(sql,
@@ -126,22 +121,21 @@ public class HourlyTrackProcessor implements ItemProcessor<VesselTrack.VesselKey
throw new RuntimeException("Failed to build hourly track", e);
}
},
vesselKey.getSigSrcCd(), vesselKey.getTargetId(),
vesselKey.getMmsi(),
startTimestamp, endTimestamp, hourBucketTimestamp
);
} catch (org.springframework.dao.EmptyResultDataAccessException e) {
log.warn("No 5min data found for vessel {} in time range {}-{}, skipping hourly aggregation",
vesselKey.getSigSrcCd() + "_" + vesselKey.getTargetId(), startTimestamp, endTimestamp);
vesselKey.getMmsi(), startTimestamp, endTimestamp);
return null;
} catch (Exception e) {
log.error("Failed to process hourly track for vessel {}: {}",
vesselKey.getSigSrcCd() + "_" + vesselKey.getTargetId(), e.getMessage(), e);
vesselKey.getMmsi(), e.getMessage(), e);
return null;
}
}
private VesselTrack buildHourlyTrack(ResultSet rs, LocalDateTime hourBucket) throws Exception {
// Extract start/end positions
VesselTrack.TrackPosition startPos = null;
VesselTrack.TrackPosition endPos = null;
@@ -151,30 +145,23 @@ public class HourlyTrackProcessor implements ItemProcessor<VesselTrack.VesselKey
if (startPosJson != null) {
startPos = parseTrackPosition(startPosJson);
}
if (endPosJson != null) {
endPos = parseTrackPosition(endPosJson);
}
// M values have already been recalculated in SQL
String hourlyLineStringM = rs.getString("geom_text");
// Simplify near-stationary points (drop points within 10 m, keep at most 10-minute gaps)
String simplifiedLineStringM = TrackSimplificationUtils.simplifyHourlyTrack(hourlyLineStringM);
// Log simplification statistics
if (!hourlyLineStringM.equals(simplifiedLineStringM)) {
TrackSimplificationUtils.SimplificationStats stats =
TrackSimplificationUtils.getSimplificationStats(hourlyLineStringM, simplifiedLineStringM);
log.debug("시간별 궤적 간소화 - vessel: {}/{}, 원본: {}포인트, 간소화: {}포인트 ({}% 감소)",
rs.getString("sig_src_cd"), rs.getString("target_id"),
log.debug("시간별 궤적 간소화 - vessel: {}, 원본: {}포인트, 간소화: {}포인트 ({}% 감소)",
rs.getString("mmsi"),
stats.originalPoints, stats.simplifiedPoints, (int)stats.reductionRate);
}
// Only track_geom is used
return VesselTrack.builder()
.sigSrcCd(rs.getString("sig_src_cd"))
.targetId(rs.getString("target_id"))
.mmsi(rs.getString("mmsi"))
.timeBucket(hourBucket)
.trackGeom(simplifiedLineStringM)
.distanceNm(rs.getBigDecimal("total_distance"))


@@ -1,60 +0,0 @@
package gc.mda.signal_batch.batch.processor;
import gc.mda.signal_batch.domain.vessel.model.VesselData;
import gc.mda.signal_batch.domain.vessel.model.VesselLatestPosition;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.stereotype.Component;
import java.time.LocalDateTime;
import java.util.concurrent.ConcurrentHashMap;
@Slf4j
@Component
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
public class LatestPositionProcessor {
@StepScope
public ItemProcessor<VesselData, VesselLatestPosition> processor() {
// Keep only the latest record within the chunk
ConcurrentHashMap<String, VesselLatestPosition> latestMap = new ConcurrentHashMap<>();
return item -> {
if (!item.isValidPosition()) {
log.debug("Invalid position for vessel: {}", item.getVesselKey());
return null;
}
String key = item.getVesselKey();
VesselLatestPosition current = VesselLatestPosition.fromVesselData(item);
VesselLatestPosition existing = latestMap.get(key);
if (existing == null || current.getLastUpdate().isAfter(existing.getLastUpdate())) {
latestMap.put(key, current);
return current;
}
return null;
};
}
@StepScope
public ItemProcessor<VesselData, VesselLatestPosition> filteringProcessor(
LocalDateTime cutoffTime) {
return item -> {
// Process only data after the cutoff time
if (item.getMessageTime().isBefore(cutoffTime)) {
return null;
}
if (!item.isValidPosition()) {
return null;
}
return VesselLatestPosition.fromVesselData(item);
};
}
}


@@ -1,291 +0,0 @@
package gc.mda.signal_batch.batch.processor;
import gc.mda.signal_batch.domain.gis.model.TileStatistics;
import gc.mda.signal_batch.domain.vessel.model.VesselData;
import gc.mda.signal_batch.global.util.HaeguGeoUtils;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;
import java.util.*;
@Slf4j
@Configuration
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
@RequiredArgsConstructor
public class TileAggregationProcessor {
private final HaeguGeoUtils geoUtils;
/**
* Creates a batch processor for the given tile level and time bucket
*/
public ItemProcessor<List<VesselData>, List<TileStatistics>> batchProcessor(
int tileLevel, int timeBucketMinutes) {
return items -> {
if (items == null || items.isEmpty()) {
return null;
}
Map<String, TileStatistics> tileMap = new HashMap<>();
for (VesselData item : items) {
if (!item.isValidPosition()) {
continue;
}
LocalDateTime bucket = item.getMessageTime()
.truncatedTo(ChronoUnit.MINUTES)
.withMinute((item.getMessageTime().getMinute() / timeBucketMinutes) * timeBucketMinutes);
// Process according to the requested level
if (tileLevel >= 0) {
// Process Level 0 (대해구, large fishing zone)
HaeguGeoUtils.HaeguTileInfo level0Info = geoUtils.getHaeguTileInfo(
item.getLat(), item.getLon(), 0
);
if (level0Info != null) {
String haeguKey = level0Info.tileId + "_" + bucket.toString();
TileStatistics haeguStats = tileMap.computeIfAbsent(haeguKey,
k -> TileStatistics.builder()
.tileId(level0Info.tileId)
.tileLevel(0)
.timeBucket(bucket)
.uniqueVessels(new HashMap<>())
.totalPoints(0L)
.avgSog(BigDecimal.ZERO)
.maxSog(BigDecimal.ZERO)
.build()
);
haeguStats.addVesselData(item);
}
}
if (tileLevel >= 1) {
// Process Level 1 (소해구, small fishing zone)
HaeguGeoUtils.HaeguTileInfo level1Info = geoUtils.getHaeguTileInfo(
item.getLat(), item.getLon(), 1
);
if (level1Info != null && level1Info.sohaeguNo != null) {
String subKey = level1Info.tileId + "_" + bucket.toString();
TileStatistics subStats = tileMap.computeIfAbsent(subKey,
k -> TileStatistics.builder()
.tileId(level1Info.tileId)
.tileLevel(1)
.timeBucket(bucket)
.uniqueVessels(new HashMap<>())
.totalPoints(0L)
.avgSog(BigDecimal.ZERO)
.maxSog(BigDecimal.ZERO)
.build()
);
subStats.addVesselData(item);
}
}
}
// Compute density per tile
tileMap.values().forEach(this::calculateDensity);
return new ArrayList<>(tileMap.values());
};
}
@Bean
@StepScope
public ItemProcessor<List<VesselData>, List<TileStatistics>> tileAggregationBatchProcessor(
@Value("#{jobParameters['timeBucketMinutes']}") Integer timeBucketMinutes) {
final int bucketMinutes = (timeBucketMinutes != null) ? timeBucketMinutes : 5;
return items -> {
if (items == null || items.isEmpty()) {
return null;
}
Map<String, TileStatistics> tileMap = new HashMap<>();
for (VesselData item : items) {
if (!item.isValidPosition()) {
continue;
}
LocalDateTime bucket = item.getMessageTime()
.truncatedTo(ChronoUnit.MINUTES)
.withMinute((item.getMessageTime().getMinute() / bucketMinutes) * bucketMinutes);
// 1. Process the large-zone level (Level 0)
HaeguGeoUtils.HaeguTileInfo level0Info = geoUtils.getHaeguTileInfo(
item.getLat(), item.getLon(), 0
);
if (level0Info != null) {
String haeguKey = level0Info.tileId + "_" + bucket.toString();
TileStatistics haeguStats = tileMap.computeIfAbsent(haeguKey,
k -> TileStatistics.builder()
.tileId(level0Info.tileId)
.tileLevel(0) // large zone is level 0
.timeBucket(bucket)
.uniqueVessels(new HashMap<>())
.totalPoints(0L)
.avgSog(BigDecimal.ZERO)
.maxSog(BigDecimal.ZERO)
.build()
);
haeguStats.addVesselData(item);
}
// 2. Process the small-zone level (Level 1)
HaeguGeoUtils.HaeguTileInfo level1Info = geoUtils.getHaeguTileInfo(
item.getLat(), item.getLon(), 1
);
if (level1Info != null && level1Info.sohaeguNo != null) {
String subKey = level1Info.tileId + "_" + bucket.toString();
TileStatistics subStats = tileMap.computeIfAbsent(subKey,
k -> TileStatistics.builder()
.tileId(level1Info.tileId)
.tileLevel(1) // small zone is level 1
.timeBucket(bucket)
.uniqueVessels(new HashMap<>())
.totalPoints(0L)
.avgSog(BigDecimal.ZERO)
.maxSog(BigDecimal.ZERO)
.build()
);
subStats.addVesselData(item);
}
}
// Compute density per tile
tileMap.values().forEach(stats -> {
calculateDensity(stats);
});
return new ArrayList<>(tileMap.values());
};
}
@Bean
@StepScope
public ItemProcessor<VesselData, List<TileStatistics>> singleItemProcessor(
@Value("#{jobParameters['tileLevel']}") Integer tileLevel,
@Value("#{jobParameters['timeBucketMinutes']}") Integer timeBucketMinutes) {
final int bucketMinutes = (timeBucketMinutes != null) ? timeBucketMinutes : 5;
final int maxLevel = (tileLevel != null) ? tileLevel : 1;
Map<String, TileStatistics> accumulator = new HashMap<>();
return item -> {
if (!item.isValidPosition()) {
return null;
}
LocalDateTime bucket = item.getMessageTime()
.truncatedTo(ChronoUnit.MINUTES)
.withMinute((item.getMessageTime().getMinute() / bucketMinutes) * bucketMinutes);
List<TileStatistics> result = new ArrayList<>();
// Level 0 (large zone)
if (maxLevel >= 0) {
HaeguGeoUtils.HaeguTileInfo level0Info = geoUtils.getHaeguTileInfo(
item.getLat(), item.getLon(), 0
);
if (level0Info != null) {
String key = level0Info.tileId + "_" + bucket.toString();
TileStatistics stats = accumulator.computeIfAbsent(key,
k -> TileStatistics.builder()
.tileId(level0Info.tileId)
.tileLevel(0)
.timeBucket(bucket)
.uniqueVessels(new HashMap<>())
.totalPoints(0L)
.avgSog(BigDecimal.ZERO)
.maxSog(BigDecimal.ZERO)
.build()
);
stats.addVesselData(item);
// Emit once enough points have accumulated
if (stats.getTotalPoints() % 1000 == 0) {
calculateDensity(stats);
result.add(stats);
}
}
}
// Level 1 (small zone)
if (maxLevel >= 1) {
HaeguGeoUtils.HaeguTileInfo level1Info = geoUtils.getHaeguTileInfo(
item.getLat(), item.getLon(), 1
);
if (level1Info != null && level1Info.sohaeguNo != null) {
String key = level1Info.tileId + "_" + bucket.toString();
TileStatistics stats = accumulator.computeIfAbsent(key,
k -> TileStatistics.builder()
.tileId(level1Info.tileId)
.tileLevel(1)
.timeBucket(bucket)
.uniqueVessels(new HashMap<>())
.totalPoints(0L)
.avgSog(BigDecimal.ZERO)
.maxSog(BigDecimal.ZERO)
.build()
);
stats.addVesselData(item);
// Emit once enough points have accumulated
if (stats.getTotalPoints() % 1000 == 0) {
calculateDensity(stats);
result.add(stats);
}
}
}
return result.isEmpty() ? null : result;
};
}
/**
* Computes vessel density for a tile
*/
private void calculateDensity(TileStatistics stats) {
if (stats.getVesselCount() == null || stats.getVesselCount() == 0) {
stats.setVesselDensity(BigDecimal.ZERO);
return;
}
// Get the tile area (km²)
double tileArea = geoUtils.getTileArea(stats.getTileId());
if (tileArea > 0) {
// density = vessel count / area
BigDecimal density = BigDecimal.valueOf(stats.getVesselCount())
.divide(BigDecimal.valueOf(tileArea), 6, RoundingMode.HALF_UP);
stats.setVesselDensity(density);
} else {
stats.setVesselDensity(BigDecimal.ZERO);
}
}
}


@@ -76,8 +76,7 @@ public class VesselTrackProcessor implements ItemProcessor<List<VesselData>, Lis
.collect(Collectors.toList());
VesselTrack track = VesselTrack.builder()
.sigSrcCd(first.getSigSrcCd())
.targetId(first.getTargetId())
.mmsi(first.getMmsi())
.timeBucket(timeBucket)
.trackPoints(trackPoints)
.pointCount(trackPoints.size())


@@ -0,0 +1,246 @@
package gc.mda.signal_batch.batch.reader;
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.RemovalCause;
import com.github.benmanes.caffeine.cache.stats.CacheStats;
import gc.mda.signal_batch.domain.vessel.model.AisTargetEntity;
import jakarta.annotation.PostConstruct;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;
import java.time.OffsetDateTime;
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
/**
* Caffeine cache manager for AIS targets
*
* key: MMSI (String), supporting devices with alphanumeric MMSI values
* value: AisTargetEntity
*
* Behavior:
* - The 1-minute API Reader/Writer updates the cache
* - The 5-minute aggregation job takes a cache snapshot and converts it to VesselData
* - An entry is updated only when it is newer (by messageTimestamp) than the existing one
*
* TTL (per profile):
* - local: 5 min, dev: 60 min, prod/prod-mpr: 120 min
*/
@Slf4j
@Component
public class AisTargetCacheManager {
private Cache<String, AisTargetEntity> cache;
/**
* Track accumulation buffer: positions are appended on every 1-minute API call
* and drained by the 5-minute aggregation.
* The drain is handled lock-free via an AtomicReference swap.
*/
private final AtomicReference<ConcurrentHashMap<String, List<AisTargetEntity>>> trackBufferRef =
new AtomicReference<>(new ConcurrentHashMap<>());
@Value("${app.cache.ais-target.ttl-minutes:120}")
private long ttlMinutes;
@Value("${app.cache.ais-target.max-size:300000}")
private int maxSize;
@PostConstruct
public void init() {
this.cache = Caffeine.newBuilder()
.maximumSize(maxSize)
.expireAfterWrite(ttlMinutes, TimeUnit.MINUTES)
.recordStats()
.removalListener((String key, AisTargetEntity value, RemovalCause cause) -> {
if (cause != RemovalCause.REPLACED) {
log.trace("캐시 제거 - MMSI: {}, 원인: {}", key, cause);
}
})
.build();
log.info("AIS Target Caffeine 캐시 초기화 - TTL: {}분, 최대 크기: {}", ttlMinutes, maxSize);
}
// ==================== Single get/update ====================
public Optional<AisTargetEntity> get(String mmsi) {
return Optional.ofNullable(cache.getIfPresent(mmsi));
}
public void put(AisTargetEntity entity) {
if (entity == null || entity.getMmsi() == null) {
return;
}
String mmsi = entity.getMmsi();
AisTargetEntity existing = cache.getIfPresent(mmsi);
if (existing == null || isNewer(entity, existing)) {
cache.put(mmsi, entity);
}
}
// ==================== Batch get/update ====================
public Map<String, AisTargetEntity> getAll(List<String> mmsiList) {
if (mmsiList == null || mmsiList.isEmpty()) {
return Collections.emptyMap();
}
return cache.getAllPresent(mmsiList);
}
/**
* Bulk insert/update of multiple entries.
* An entry is updated only when it is newer than the existing one.
*/
public void putAll(List<AisTargetEntity> entities) {
if (entities == null || entities.isEmpty()) {
return;
}
int updated = 0;
int skipped = 0;
for (AisTargetEntity entity : entities) {
if (entity == null || entity.getMmsi() == null) {
continue;
}
AisTargetEntity existing = cache.getIfPresent(entity.getMmsi());
if (existing == null || isNewer(entity, existing)) {
cache.put(entity.getMmsi(), entity);
updated++;
} else {
skipped++;
}
}
log.debug("캐시 배치 업데이트 - 입력: {}, 업데이트: {}, 스킵: {}, 현재 크기: {}",
entities.size(), updated, skipped, cache.estimatedSize());
}
// ==================== Cache snapshot (for t_ais_position sync) ====================
/**
* Returns all cached entries (used by AisPositionSyncStep)
*/
public Collection<AisTargetEntity> getAllValues() {
return cache.asMap().values();
}
// ==================== Track accumulation buffer (for 5-minute aggregation) ====================
/**
* Accumulates 1-minute API results into the track buffer.
* Position history piles up per MMSI and feeds LineString M generation
* in the 5-minute aggregation.
*/
public void appendAllForTrack(List<AisTargetEntity> entities) {
if (entities == null || entities.isEmpty()) {
return;
}
ConcurrentHashMap<String, List<AisTargetEntity>> buffer = trackBufferRef.get();
int appended = 0;
for (AisTargetEntity entity : entities) {
if (entity == null || entity.getMmsi() == null
|| entity.getLat() == null || entity.getLon() == null) {
continue;
}
buffer.computeIfAbsent(entity.getMmsi(),
k -> Collections.synchronizedList(new ArrayList<>())).add(entity);
appended++;
}
log.debug("트랙 버퍼 누적: {} 건 (버퍼 내 선박 수: {})", appended, buffer.size());
}
/**
* Drains the track buffer, swapping in a fresh one (called by the 5-minute aggregation job).
* The AtomicReference swap guarantees lock-free concurrency with the 1-minute Writer.
*
* @return accumulated positions per MMSI (typically ~5 points per MMSI)
*/
public Map<String, List<AisTargetEntity>> drainTrackBuffer() {
ConcurrentHashMap<String, List<AisTargetEntity>> drained =
trackBufferRef.getAndSet(new ConcurrentHashMap<>());
long totalPoints = drained.values().stream().mapToLong(List::size).sum();
log.info("트랙 버퍼 drain: {} 선박, {} 포인트", drained.size(), totalPoints);
return drained;
}
/**
* Current track buffer size (for monitoring)
*/
public Map<String, Object> getTrackBufferStats() {
ConcurrentHashMap<String, List<AisTargetEntity>> buffer = trackBufferRef.get();
long totalPoints = buffer.values().stream().mapToLong(List::size).sum();
Map<String, Object> stats = new LinkedHashMap<>();
stats.put("vesselCount", buffer.size());
stats.put("totalPoints", totalPoints);
stats.put("avgPointsPerVessel", buffer.isEmpty() ? 0 : String.format("%.1f", (double) totalPoints / buffer.size()));
return stats;
}
// ==================== Cache management ====================
public void evict(String mmsi) {
cache.invalidate(mmsi);
}
public void clear() {
long size = cache.estimatedSize();
cache.invalidateAll();
log.info("캐시 전체 삭제 - {} 건", size);
}
public long size() {
return cache.estimatedSize();
}
public void cleanup() {
cache.cleanUp();
}
// ==================== Statistics ====================
public Map<String, Object> getStats() {
CacheStats stats = cache.stats();
Map<String, Object> result = new LinkedHashMap<>();
result.put("estimatedSize", cache.estimatedSize());
result.put("maxSize", maxSize);
result.put("ttlMinutes", ttlMinutes);
result.put("hitCount", stats.hitCount());
result.put("missCount", stats.missCount());
result.put("hitRate", String.format("%.2f%%", stats.hitRate() * 100));
result.put("evictionCount", stats.evictionCount());
result.put("utilizationPercent", String.format("%.2f%%", (cache.estimatedSize() * 100.0 / maxSize)));
return result;
}
public CacheStats getCacheStats() {
return cache.stats();
}
// ==================== Private ====================
private boolean isNewer(AisTargetEntity newEntity, AisTargetEntity existing) {
OffsetDateTime newTs = newEntity.getMessageTimestamp();
OffsetDateTime existingTs = existing.getMessageTimestamp();
if (newTs == null) return false;
if (existingTs == null) return true;
return newTs.isAfter(existingTs);
}
}
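The drain mechanism above can be sketched in isolation. This is a minimal illustration of the same AtomicReference swap (class and field names here are illustrative, not from the codebase): writers append into whichever map the reference currently holds, and the aggregation side atomically replaces it with an empty map, keeping the old one for exclusive processing.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicReference;

public class DrainSketch {
    // The reference always points at the "current" buffer writers append into.
    private final AtomicReference<ConcurrentHashMap<String, List<String>>> ref =
            new AtomicReference<>(new ConcurrentHashMap<>());

    void append(String mmsi, String point) {
        // computeIfAbsent is atomic per key; the synchronized list guards concurrent adds
        ref.get().computeIfAbsent(mmsi,
                k -> Collections.synchronizedList(new ArrayList<>())).add(point);
    }

    Map<String, List<String>> drain() {
        // Swap in a fresh map and keep the old one; no locks are taken.
        // A writer that already read the old reference may still finish its
        // append there, which this pattern tolerates by design.
        return ref.getAndSet(new ConcurrentHashMap<>());
    }

    public static void main(String[] args) {
        DrainSketch s = new DrainSketch();
        s.append("440123456", "p1");
        s.append("440123456", "p2");
        Map<String, List<String>> drained = s.drain();
        System.out.println(drained.get("440123456").size()); // 2
        System.out.println(s.drain().size());                // 0, buffer was swapped
    }
}
```

Note the documented tradeoff: the swap is atomic, but a point appended between a writer's `ref.get()` and its `add()` can land in the already-drained map, so a rare point may slip into the previous cycle.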


@@ -0,0 +1,86 @@
package gc.mda.signal_batch.batch.reader;
import gc.mda.signal_batch.domain.vessel.dto.AisTargetApiResponse;
import gc.mda.signal_batch.domain.vessel.dto.AisTargetDto;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.item.ItemReader;
import org.springframework.web.reactive.function.client.WebClient;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
/**
* S&P Global AIS API Reader (Spring Batch ItemReader)
*
* API: POST /AisSvc.svc/AIS/GetTargetsEnhanced
* Request: {"sinceSeconds": "60"}
* Response: ~33,000 records per call
*
* Behavior:
* - The first read() call fetches the full dataset from the API
* - Each subsequent read() returns one record (Spring Batch chunk processing)
* - Once all records have been returned, null is returned and the Step ends
*/
@Slf4j
public class AisTargetDataReader implements ItemReader<AisTargetDto> {
private static final String API_PATH = "/AisSvc.svc/AIS/GetTargetsEnhanced";
private final WebClient webClient;
private final int sinceSeconds;
private Iterator<AisTargetDto> iterator;
private boolean fetched = false;
public AisTargetDataReader(WebClient webClient, int sinceSeconds) {
this.webClient = webClient;
this.sinceSeconds = sinceSeconds;
}
@Override
public AisTargetDto read() {
if (!fetched) {
List<AisTargetDto> data = fetchDataFromApi();
this.iterator = data.iterator();
this.fetched = true;
}
if (iterator != null && iterator.hasNext()) {
return iterator.next();
}
// Step finished; reset state for the next execution
fetched = false;
iterator = null;
return null;
}
private List<AisTargetDto> fetchDataFromApi() {
try {
log.info("[AisTargetDataReader] API 호출 시작: POST {} (sinceSeconds: {})",
API_PATH, sinceSeconds);
AisTargetApiResponse response = webClient.post()
.uri(API_PATH)
.bodyValue(Map.of("sinceSeconds", String.valueOf(sinceSeconds)))
.retrieve()
.bodyToMono(AisTargetApiResponse.class)
.block();
if (response != null && response.getTargetArr() != null) {
List<AisTargetDto> targets = response.getTargetArr();
log.info("[AisTargetDataReader] API 호출 완료: {} 건 조회", targets.size());
return targets;
} else {
log.warn("[AisTargetDataReader] API 응답이 비어있습니다");
return Collections.emptyList();
}
} catch (Exception e) {
log.error("[AisTargetDataReader] API 호출 실패: {}", e.getMessage(), e);
return Collections.emptyList();
}
}
}
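The fetch-once-then-iterate contract that AisTargetDataReader follows can be shown with a tiny generic stand-in (the `FetchOnceReader` name and the `Supplier` in place of the WebClient call are illustrative): the first `read()` pulls the whole batch, later calls drain it one item at a time, and `null` both ends the step and resets state for the next run.

```java
import java.util.Iterator;
import java.util.List;
import java.util.function.Supplier;

public class FetchOnceReader<T> {
    private final Supplier<List<T>> fetcher; // stands in for the API call
    private Iterator<T> iterator;
    private boolean fetched = false;

    public FetchOnceReader(Supplier<List<T>> fetcher) {
        this.fetcher = fetcher;
    }

    public T read() {
        if (!fetched) {
            iterator = fetcher.get().iterator(); // fetch everything up front
            fetched = true;
        }
        if (iterator != null && iterator.hasNext()) {
            return iterator.next();              // hand out one item per call
        }
        fetched = false;   // reset so the next step execution re-fetches
        iterator = null;
        return null;       // Spring Batch treats null as end of input
    }

    public static void main(String[] args) {
        FetchOnceReader<String> r = new FetchOnceReader<>(() -> List.of("a", "b"));
        System.out.println(r.read()); // a
        System.out.println(r.read()); // b
        System.out.println(r.read()); // null: step ends
        System.out.println(r.read()); // a: re-fetched on the next run
    }
}
```

The reset-on-null detail matters because the same reader bean serves repeated scheduled executions; without it, every run after the first would see an exhausted iterator and read nothing.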


@@ -0,0 +1,132 @@
package gc.mda.signal_batch.batch.reader;
import gc.mda.signal_batch.domain.vessel.model.AisTargetEntity;
import gc.mda.signal_batch.domain.vessel.model.VesselData;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.item.ItemReader;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import java.math.BigDecimal;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.util.*;
import java.util.stream.Collectors;
/**
* Caffeine-cache-based vessel track data reader
*
* Takes a snapshot from AisTargetCacheManager, converts it to VesselData,
* and returns it grouped by MMSI and 5-minute time_bucket.
* A single MMSI may hold data spanning several time_buckets, so each
* (MMSI, time_bucket) pair is a separate unit of work.
*
* Replaces the former InMemoryVesselTrackDataReader + VesselTrackDataJobListener
*/
@Slf4j
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
@RequiredArgsConstructor
public class CacheBasedVesselTrackDataReader implements ItemReader<List<VesselData>> {
private final AisTargetCacheManager cacheManager;
private final int staleDataThresholdDays;
private Iterator<List<VesselData>> groupIterator;
private boolean initialized = false;
@Override
public List<VesselData> read() {
if (!initialized) {
initialize();
initialized = true;
}
if (groupIterator != null && groupIterator.hasNext()) {
return groupIterator.next();
}
return null; // no more data
}
private void initialize() {
// Drain accumulated data from the track buffer (position history collected every minute)
Map<String, List<AisTargetEntity>> trackBuffer = cacheManager.drainTrackBuffer();
if (trackBuffer.isEmpty()) {
log.info("트랙 버퍼에 데이터 없음 — 궤적 생성 스킵");
groupIterator = Collections.emptyIterator();
return;
}
// Convert AisTargetEntity to VesselData, then double-group by MMSI and 5-minute time_bucket
LocalDateTime staleCutoff = LocalDateTime.now().minusDays(staleDataThresholdDays);
List<List<VesselData>> allGroups = new ArrayList<>();
long totalPoints = 0;
int totalVessels = 0;
int skippedStaleGroups = 0;
for (Map.Entry<String, List<AisTargetEntity>> entry : trackBuffer.entrySet()) {
List<VesselData> vesselDataList = entry.getValue().stream()
.filter(e -> e.getLat() != null && e.getLon() != null)
.map(this::toVesselData)
.sorted(Comparator.comparing(VesselData::getMessageTime,
Comparator.nullsLast(Comparator.naturalOrder())))
.collect(Collectors.toList());
if (vesselDataList.isEmpty()) {
continue;
}
totalVessels++;
// Split into sub-groups per 5-minute time_bucket within each MMSI
Map<LocalDateTime, List<VesselData>> bucketGroups = vesselDataList.stream()
.collect(Collectors.groupingBy(
(VesselData vd) -> calculateTimeBucket(vd.getMessageTime()),
LinkedHashMap::new,
Collectors.toList()));
for (Map.Entry<LocalDateTime, List<VesselData>> bucketEntry : bucketGroups.entrySet()) {
if (bucketEntry.getKey().isBefore(staleCutoff)) {
skippedStaleGroups++;
continue;
}
allGroups.add(bucketEntry.getValue());
totalPoints += bucketEntry.getValue().size();
}
}
if (skippedStaleGroups > 0) {
log.info("Stale data 필터: {}개 그룹 스킵 ({}일 이전 데이터)", skippedStaleGroups, staleDataThresholdDays);
}
log.info("트랙 버퍼 Reader 초기화: {} 선박, {} 그룹(MMSI×버킷), {} 포인트 (평균 {}pt/그룹)",
totalVessels, allGroups.size(), totalPoints,
allGroups.isEmpty() ? "0.0" : String.format("%.1f", (double) totalPoints / allGroups.size()));
groupIterator = allGroups.iterator();
}
private LocalDateTime calculateTimeBucket(LocalDateTime messageTime) {
return messageTime.withSecond(0).withNano(0)
.minusMinutes(messageTime.getMinute() % 5);
}
private VesselData toVesselData(AisTargetEntity entity) {
LocalDateTime messageTime = entity.getMessageTimestamp() != null
? entity.getMessageTimestamp().atZoneSameInstant(ZoneId.systemDefault()).toLocalDateTime()
: LocalDateTime.now();
return VesselData.builder()
.mmsi(entity.getMmsi())
.messageTime(messageTime)
.lat(entity.getLat())
.lon(entity.getLon())
.sog(entity.getSog() != null ? BigDecimal.valueOf(entity.getSog()) : null)
.cog(entity.getCog() != null ? BigDecimal.valueOf(entity.getCog()) : null)
.heading(entity.getHeading() != null ? entity.getHeading().intValue() : null)
.shipNm(entity.getName())
.shipTy(entity.getVesselType())
.rot(entity.getRot())
.build();
}
}
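The bucketing rule in `calculateTimeBucket` (truncate to the minute, then subtract `minute % 5`) floors any timestamp onto a 5-minute boundary. A standalone sketch (class name illustrative):

```java
import java.time.LocalDateTime;

public class TimeBucketSketch {
    // Same arithmetic as calculateTimeBucket: drop seconds/nanos, then
    // step back to the nearest multiple-of-5 minute.
    static LocalDateTime bucket(LocalDateTime t) {
        return t.withSecond(0).withNano(0).minusMinutes(t.getMinute() % 5);
    }

    public static void main(String[] args) {
        System.out.println(bucket(LocalDateTime.of(2026, 2, 19, 10, 7, 42))); // 2026-02-19T10:05
        System.out.println(bucket(LocalDateTime.of(2026, 2, 19, 10, 5, 0)));  // 2026-02-19T10:05
        System.out.println(bucket(LocalDateTime.of(2026, 2, 19, 10, 4, 59))); // 2026-02-19T10:00
    }
}
```

Because the floor is taken per point, a vessel whose drained buffer straddles a boundary (say 10:04 and 10:06) is split into two groups, which is exactly why the reader treats (MMSI, time_bucket) rather than MMSI alone as the processing unit.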


@@ -0,0 +1,121 @@
package gc.mda.signal_batch.batch.reader;
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import gc.mda.signal_batch.domain.vessel.model.AisTargetEntity;
import jakarta.annotation.PostConstruct;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Component;
import java.time.OffsetDateTime;
import java.time.ZoneOffset;
import java.util.List;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;
/**
* Dedicated cache for licensed Chinese vessels
*
* - Only the target MMSIs (~1,400 vessels) are managed separately
* - TTL: expireAfterWrite (an entry expires N days after its last put)
* - key: MMSI (String)
*/
@Slf4j
@Component
@RequiredArgsConstructor
public class ChnPrmShipCacheManager {
private final ChnPrmShipProperties properties;
private Cache<String, AisTargetEntity> cache;
@PostConstruct
public void init() {
this.cache = Caffeine.newBuilder()
.maximumSize(properties.getMaxSize())
.expireAfterWrite(properties.getTtlDays(), TimeUnit.DAYS)
.recordStats()
.build();
log.info("ChnPrmShip 캐시 초기화 - TTL: {}일, 최대 크기: {}, 대상 MMSI: {}건",
properties.getTtlDays(), properties.getMaxSize(), properties.getMmsiSet().size());
}
/**
* Filters for target MMSIs and stores only those entries in the cache
*/
public int putIfTarget(List<AisTargetEntity> items) {
if (items == null || items.isEmpty()) {
return 0;
}
int updated = 0;
for (AisTargetEntity item : items) {
if (!properties.isTarget(item.getMmsi())) {
continue;
}
AisTargetEntity existing = cache.getIfPresent(item.getMmsi());
if (existing == null || isNewerOrEqual(item, existing)) {
cache.put(item.getMmsi(), item);
updated++;
}
}
if (updated > 0) {
log.debug("ChnPrmShip 캐시 업데이트 - 입력: {}, 대상 저장: {}, 현재 크기: {}",
items.size(), updated, cache.estimatedSize());
}
return updated;
}
/**
* Queries cached data within the given time range
*/
public List<AisTargetEntity> getByTimeRange(int minutes) {
OffsetDateTime threshold = OffsetDateTime.now(ZoneOffset.UTC).minusMinutes(minutes);
return cache.asMap().values().stream()
.filter(entity -> entity.getMessageTimestamp() != null)
.filter(entity -> entity.getMessageTimestamp().isAfter(threshold))
.collect(Collectors.toList());
}
/**
* Direct bulk store for warm-up (no timestamp comparison)
*/
public void putAll(List<AisTargetEntity> entities) {
if (entities == null || entities.isEmpty()) {
return;
}
for (AisTargetEntity entity : entities) {
if (entity != null && entity.getMmsi() != null) {
cache.put(entity.getMmsi(), entity);
}
}
}
public long size() {
return cache.estimatedSize();
}
public Map<String, Object> getStats() {
var stats = cache.stats();
return Map.of(
"estimatedSize", cache.estimatedSize(),
"maxSize", properties.getMaxSize(),
"ttlDays", properties.getTtlDays(),
"targetMmsiCount", properties.getMmsiSet().size(),
"hitCount", stats.hitCount(),
"missCount", stats.missCount(),
"hitRate", String.format("%.2f%%", stats.hitRate() * 100)
);
}
private boolean isNewerOrEqual(AisTargetEntity candidate, AisTargetEntity existing) {
if (candidate.getMessageTimestamp() == null) return false;
if (existing.getMessageTimestamp() == null) return true;
return !candidate.getMessageTimestamp().isBefore(existing.getMessageTimestamp());
}
}
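위 ChnPrmShipCacheManager의 putIfTarget이 따르는 병합 규칙(isNewerOrEqual)은 "같거나 더 새로운 message_timestamp만 기존 항목을 덮어쓴다"는 last-write-wins 방식이다. 실제 구현의 Caffeine 캐시를 HashMap으로 치환했다고 가정한 최소 스케치 (클래스명·필드명은 예시용 가정):

```java
import java.time.OffsetDateTime;
import java.util.HashMap;
import java.util.Map;

// putIfTarget의 최신성 병합 규칙만 떼어낸 스케치.
// 실제 구현의 Caffeine 캐시 대신 HashMap을 사용한 가정이다.
class LastWriteWinsCache {
    record Position(String mmsi, OffsetDateTime messageTimestamp) {}

    private final Map<String, Position> cache = new HashMap<>();

    // 같거나 더 새로운 타임스탬프만 기존 항목을 덮어쓴다 (out-of-order 수신 대비)
    static boolean isNewerOrEqual(Position candidate, Position existing) {
        if (candidate.messageTimestamp() == null) return false; // 타임스탬프 없는 후보는 버림
        if (existing.messageTimestamp() == null) return true;   // 기존에 타임스탬프가 없으면 채택
        return !candidate.messageTimestamp().isBefore(existing.messageTimestamp());
    }

    boolean put(Position p) {
        Position existing = cache.get(p.mmsi());
        if (existing == null || isNewerOrEqual(p, existing)) {
            cache.put(p.mmsi(), p);
            return true;
        }
        return false; // 더 오래된 신호는 무시
    }
}
```

동일 타임스탬프를 허용(`isBefore` 부정)하는 이유는, 같은 시각에 재수신된 보정 메시지가 기존 항목을 갱신할 수 있게 하기 위해서다.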


@ -0,0 +1,134 @@
package gc.mda.signal_batch.batch.reader;
import gc.mda.signal_batch.domain.vessel.model.AisTargetEntity;
import gc.mda.signal_batch.global.util.SignalKindCode;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.time.OffsetDateTime;
import java.time.ZoneOffset;
import java.util.ArrayList;
import java.util.List;
/**
* 기동 시 ChnPrmShip 캐시 워밍업
*
* t_ais_position 테이블에서 대상 MMSI의 데이터를 조회하여 캐시를 채운다.
* 이후 배치 수집에서 실시간 데이터가 캐시를 갱신한다.
*/
@Slf4j
@Component
public class ChnPrmShipCacheWarmer implements ApplicationRunner {
private static final int DB_QUERY_CHUNK_SIZE = 500;
private final ChnPrmShipProperties properties;
private final ChnPrmShipCacheManager cacheManager;
private final JdbcTemplate queryJdbcTemplate;
// @Qualifier는 @RequiredArgsConstructor가 생성한 생성자 파라미터로 복사되지 않으므로 명시적 생성자로 주입
public ChnPrmShipCacheWarmer(ChnPrmShipProperties properties,
ChnPrmShipCacheManager cacheManager,
@Qualifier("queryJdbcTemplate") JdbcTemplate queryJdbcTemplate) {
this.properties = properties;
this.cacheManager = cacheManager;
this.queryJdbcTemplate = queryJdbcTemplate;
}
@Override
public void run(ApplicationArguments args) {
if (!properties.isWarmupEnabled()) {
log.info("ChnPrmShip 캐시 워밍업 비활성화");
return;
}
if (properties.getMmsiSet().isEmpty()) {
log.warn("ChnPrmShip 대상 MMSI가 없어 워밍업을 건너뜁니다");
return;
}
OffsetDateTime since = OffsetDateTime.now(ZoneOffset.UTC)
.minusDays(properties.getWarmupDays());
log.info("ChnPrmShip 캐시 워밍업 시작 - 대상: {}건, 조회 범위: 최근 {}일",
properties.getMmsiSet().size(), properties.getWarmupDays());
long startTime = System.currentTimeMillis();
List<String> mmsiList = new ArrayList<>(properties.getMmsiSet());
int totalLoaded = 0;
for (int i = 0; i < mmsiList.size(); i += DB_QUERY_CHUNK_SIZE) {
List<String> chunk = mmsiList.subList(i,
Math.min(i + DB_QUERY_CHUNK_SIZE, mmsiList.size()));
try {
List<AisTargetEntity> fromDb = queryLatestByMmsiSince(chunk, since);
fromDb.forEach(entity -> {
if (entity.getSignalKindCode() == null) {
SignalKindCode kindCode = SignalKindCode.resolve(
entity.getVesselType(), entity.getExtraInfo());
entity.setSignalKindCode(kindCode.getCode());
}
});
cacheManager.putAll(fromDb);
totalLoaded += fromDb.size();
} catch (Exception e) {
log.warn("ChnPrmShip 워밍업 DB 조회 실패 (chunk {}/{}): {}",
i / DB_QUERY_CHUNK_SIZE + 1,
(mmsiList.size() + DB_QUERY_CHUNK_SIZE - 1) / DB_QUERY_CHUNK_SIZE,
e.getMessage());
}
}
long elapsed = System.currentTimeMillis() - startTime;
log.info("ChnPrmShip 캐시 워밍업 완료 - 대상: {}, 로딩: {}건, 소요: {}ms",
properties.getMmsiSet().size(), totalLoaded, elapsed);
}
private List<AisTargetEntity> queryLatestByMmsiSince(List<String> mmsiList, OffsetDateTime since) {
String placeholders = String.join(",", mmsiList.stream().map(m -> "?").toList());
// MMSI별 최신 1건만 조회 (DISTINCT ON + message_timestamp DESC 정렬)
String sql = "SELECT DISTINCT ON (mmsi) mmsi, imo, name, callsign, vessel_type, extra_info, " +
"lat, lon, heading, sog, cog, rot, length, width, draught, " +
"destination, eta, status, message_timestamp, signal_kind_code, class_type " +
"FROM signal.t_ais_position " +
"WHERE mmsi IN (" + placeholders + ") " +
"AND message_timestamp >= ? " +
"ORDER BY mmsi, message_timestamp DESC";
Object[] params = new Object[mmsiList.size() + 1];
for (int j = 0; j < mmsiList.size(); j++) {
params[j] = mmsiList.get(j);
}
params[mmsiList.size()] = since;
return queryJdbcTemplate.query(sql, (rs, rowNum) -> mapRow(rs), params);
}
private AisTargetEntity mapRow(ResultSet rs) throws SQLException {
return AisTargetEntity.builder()
.mmsi(rs.getString("mmsi"))
.imo(rs.getObject("imo") != null ? rs.getLong("imo") : null)
.name(rs.getString("name"))
.callsign(rs.getString("callsign"))
.vesselType(rs.getString("vessel_type"))
.extraInfo(rs.getString("extra_info"))
.lat(rs.getObject("lat") != null ? rs.getDouble("lat") : null)
.lon(rs.getObject("lon") != null ? rs.getDouble("lon") : null)
.heading(rs.getObject("heading") != null ? rs.getDouble("heading") : null)
.sog(rs.getObject("sog") != null ? rs.getDouble("sog") : null)
.cog(rs.getObject("cog") != null ? rs.getDouble("cog") : null)
.rot(rs.getObject("rot") != null ? rs.getInt("rot") : null)
.length(rs.getObject("length") != null ? rs.getInt("length") : null)
.width(rs.getObject("width") != null ? rs.getInt("width") : null)
.draught(rs.getObject("draught") != null ? rs.getDouble("draught") : null)
.destination(rs.getString("destination"))
.eta(rs.getObject("eta") != null ? rs.getObject("eta", OffsetDateTime.class) : null)
.status(rs.getString("status"))
.messageTimestamp(rs.getObject("message_timestamp") != null
? rs.getObject("message_timestamp", OffsetDateTime.class) : null)
.signalKindCode(rs.getString("signal_kind_code"))
.classType(rs.getString("class_type"))
.build();
}
}


@ -0,0 +1,61 @@
package gc.mda.signal_batch.batch.reader;
import jakarta.annotation.PostConstruct;
import lombok.Getter;
import lombok.Setter;
import lombok.extern.slf4j.Slf4j;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.core.io.DefaultResourceLoader;
import org.springframework.core.io.Resource;
import org.springframework.stereotype.Component;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.Set;
import java.util.stream.Collectors;
/**
* 중국 허가선박(ChnPrmShip) 설정
*
* 대상 MMSI 목록을 리소스 파일에서 로딩하여 Set으로 보관한다.
* MMSI는 String 타입 (문자가 혼합된 장비 식별자 지원)
*/
@Slf4j
@Getter
@Setter
@Component
@ConfigurationProperties(prefix = "app.chnprmship")
public class ChnPrmShipProperties {
private String mmsiResourcePath = "classpath:chnprmship-mmsi.txt";
private int ttlDays = 2;
private int maxSize = 2000;
private boolean warmupEnabled = true;
private int warmupDays = 2;
private Set<String> mmsiSet = Collections.emptySet();
@PostConstruct
public void init() {
try {
Resource resource = new DefaultResourceLoader().getResource(mmsiResourcePath);
try (BufferedReader reader = new BufferedReader(
new InputStreamReader(resource.getInputStream(), StandardCharsets.UTF_8))) {
mmsiSet = reader.lines()
.map(String::trim)
.filter(line -> !line.isEmpty() && !line.startsWith("#"))
.collect(Collectors.toUnmodifiableSet());
}
log.info("ChnPrmShip MMSI 로딩 완료 - {}건 (경로: {})", mmsiSet.size(), mmsiResourcePath);
} catch (Exception e) {
log.warn("ChnPrmShip MMSI 로딩 실패 - 경로: {}, 오류: {} (비활성화됨)", mmsiResourcePath, e.getMessage());
mmsiSet = Collections.emptySet();
}
}
public boolean isTarget(String mmsi) {
return mmsi != null && mmsiSet.contains(mmsi);
}
}
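ChnPrmShipProperties의 바인딩 프리픽스(`app.chnprmship`)를 기준으로 한 설정 예시다. 아래 값은 코드의 필드 기본값을 그대로 옮긴 것일 뿐, 실제 운영 설정이 아닌 가정이다:

```yaml
# app.chnprmship 바인딩 예시 — 값은 코드의 기본값을 그대로 옮긴 가정
app:
  chnprmship:
    mmsi-resource-path: classpath:chnprmship-mmsi.txt
    ttl-days: 2
    max-size: 2000
    warmup-enabled: true
    warmup-days: 2
```

리소스 파일 형식은 init()의 파싱 로직 기준으로 한 줄에 MMSI 하나이며, `#`로 시작하는 줄과 빈 줄은 무시된다.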


@ -1,54 +0,0 @@
package gc.mda.signal_batch.batch.reader;
import gc.mda.signal_batch.global.util.VesselDataHolder;
import gc.mda.signal_batch.domain.vessel.model.VesselData;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.annotation.AfterStep;
import org.springframework.batch.core.annotation.BeforeStep;
import org.springframework.batch.item.ItemReader;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.stereotype.Component;
import java.util.Iterator;
import java.util.List;
@Component
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
@RequiredArgsConstructor
@Slf4j
public class InMemoryVesselDataReader implements ItemReader<VesselData> {
private final VesselDataHolder dataHolder;
private Iterator<VesselData> iterator;
private boolean initialized = false;
@BeforeStep
public void beforeStep(StepExecution stepExecution) {
List<VesselData> data = dataHolder.getData();
this.iterator = data.iterator();
this.initialized = true;
log.info("Initialized reader with {} items for step: {}",
data.size(), stepExecution.getStepName());
}
@Override
public VesselData read() {
if (!initialized) {
throw new IllegalStateException("Reader not initialized");
}
if (iterator.hasNext()) {
return iterator.next();
}
return null;
}
@AfterStep
public void afterStep(StepExecution stepExecution) {
iterator = null;
initialized = false;
}
}


@ -1,73 +0,0 @@
package gc.mda.signal_batch.batch.reader;
import gc.mda.signal_batch.global.util.VesselTrackDataHolder;
import gc.mda.signal_batch.domain.vessel.model.VesselData;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.item.ItemReader;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import java.util.*;
import java.util.stream.Collectors;
@Slf4j
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
@RequiredArgsConstructor
public class InMemoryVesselTrackDataReader implements ItemReader<List<VesselData>> {
private final VesselTrackDataHolder dataHolder;
private final int chunkSize;
private Iterator<Map.Entry<String, List<VesselData>>> groupIterator;
private List<List<VesselData>> currentChunk;
private Iterator<List<VesselData>> chunkIterator;
private boolean initialized = false;
public void initialize() {
// 선박별로 그룹화 (sig_src_cd + target_id)
Map<String, List<VesselData>> groupedData = dataHolder.getAllVesselData().stream()
.collect(Collectors.groupingBy(VesselData::getVesselKey));
// 그룹 내에서 시간순 정렬
groupedData.forEach((key, dataList) ->
dataList.sort(Comparator.comparing(VesselData::getMessageTime)));
groupIterator = groupedData.entrySet().iterator();
currentChunk = new ArrayList<>();
log.info("Initialized track reader with {} vessel groups", groupedData.size());
}
@Override
public List<VesselData> read() {
if (!initialized) {
initialize();
initialized = true;
}
// 현재 청크에서 데이터 반환
if (chunkIterator != null && chunkIterator.hasNext()) {
return chunkIterator.next();
}
// 새로운 청크 생성
currentChunk.clear();
int count = 0;
while (groupIterator.hasNext() && count < chunkSize) {
Map.Entry<String, List<VesselData>> entry = groupIterator.next();
currentChunk.add(entry.getValue());
count++;
}
if (currentChunk.isEmpty()) {
return null; // 더 이상 데이터 없음
}
chunkIterator = currentChunk.iterator();
return chunkIterator.next();
}
}


@ -1,181 +0,0 @@
package gc.mda.signal_batch.batch.reader;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.core.partition.support.Partitioner;
import org.springframework.batch.item.ExecutionContext;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
@Slf4j
@Component
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
@RequiredArgsConstructor
public class PartitionedReader {
@Qualifier("collectJdbcTemplate")
private final JdbcTemplate collectJdbcTemplate;
@StepScope
public Partitioner dayPartitioner(@Value("#{jobParameters['processingDate']}") LocalDate processingDate) {
return gridSize -> {
Map<String, ExecutionContext> partitions = new HashMap<>();
// 파티션 존재 확인
String partitionName = generatePartitionName(processingDate);
if (checkPartitionExists(partitionName)) {
// 시간대별로 파티션 생성 (gridSize 고려)
int hoursPerPartition = 24 / Math.min(gridSize, 24);
int actualPartitions = Math.min(gridSize, 24);
for (int i = 0; i < actualPartitions; i++) {
ExecutionContext context = new ExecutionContext();
int startHour = i * hoursPerPartition;
int endHour = (i == actualPartitions - 1) ? 24 : (i + 1) * hoursPerPartition;
context.put("partition", partitionName);
context.put("startTime", processingDate.atTime(startHour, 0));
context.put("endTime", processingDate.atTime(endHour, 0));
context.put("partitionIndex", i);
partitions.put("partition-" + i, context);
}
log.info("Created {} partitions for table {}", partitions.size(), partitionName);
} else {
// 파티션이 없는 경우 처리
log.warn("Partition {} does not exist. Creating fallback partition.", partitionName);
// 동적으로 파티션 생성 시도
if (createMissingPartition(processingDate)) {
// 재귀 호출로 다시 파티셔닝
return dayPartitioner(processingDate).partition(gridSize);
}
// 실패 시 단일 파티션으로 처리
ExecutionContext context = new ExecutionContext();
context.put("partition", ""); // 전체 테이블에서 날짜 조건으로 읽기
context.put("startTime", processingDate.atStartOfDay());
context.put("endTime", processingDate.plusDays(1).atStartOfDay());
context.put("partitionIndex", 0);
partitions.put("partition-fallback", context);
}
return partitions;
};
}
/**
* 시간 범위 기반 파티셔너
*/
@StepScope
public Partitioner rangePartitioner(
@Value("#{jobParameters['startTime']}") LocalDateTime startTime,
@Value("#{jobParameters['endTime']}") LocalDateTime endTime,
@Value("#{jobParameters['partitionCount']}") Integer partitionCount) {
return gridSize -> {
Map<String, ExecutionContext> partitions = new HashMap<>();
// 날짜별로 그룹화
Map<LocalDate, List<LocalDateTime>> dateGroups = groupByDate(startTime, endTime);
int partitionIndex = 0;
for (Map.Entry<LocalDate, List<LocalDateTime>> entry : dateGroups.entrySet()) {
LocalDate date = entry.getKey();
String partitionName = findPartitionForDate(date);
// 날짜에 대해 시간 범위 분할
LocalDateTime dayStart = entry.getValue().get(0);
LocalDateTime dayEnd = entry.getValue().get(1);
long totalMinutes = java.time.Duration.between(dayStart, dayEnd).toMinutes();
int subPartitions = Math.max(1, (int)(totalMinutes / 60)); // 시간 단위로 분할
for (int i = 0; i < subPartitions; i++) {
ExecutionContext context = new ExecutionContext();
LocalDateTime partStart = dayStart.plusHours(i);
LocalDateTime partEnd = (i == subPartitions - 1) ? dayEnd : dayStart.plusHours(i + 1);
context.put("startTime", partStart);
context.put("endTime", partEnd);
context.put("partition", partitionName != null ? partitionName : "");
context.put("partitionIndex", partitionIndex++);
partitions.put("range-partition-" + partitionIndex, context);
}
}
log.info("Created {} range partitions for period {} to {}",
partitions.size(), startTime, endTime);
return partitions;
};
}
private String generatePartitionName(LocalDate date) {
// YYMMDD 형식으로 변경
return "sig_test_" + date.format(DateTimeFormatter.ofPattern("yyMMdd"));
}
private boolean checkPartitionExists(String partitionName) {
String sql = "SELECT EXISTS (SELECT 1 FROM pg_tables WHERE schemaname = 'signal' AND tablename = ?)";
return Boolean.TRUE.equals(collectJdbcTemplate.queryForObject(sql, Boolean.class, partitionName));
}
private String findPartitionForDate(LocalDate date) {
String partitionName = generatePartitionName(date);
return checkPartitionExists(partitionName) ? partitionName : null;
}
private boolean createMissingPartition(LocalDate date) {
try {
String partitionName = generatePartitionName(date);
String sql = String.format("""
CREATE TABLE IF NOT EXISTS signal.%s PARTITION OF signal.sig_test
FOR VALUES FROM ('%s') TO ('%s')
""", partitionName, date, date.plusDays(1));
collectJdbcTemplate.execute(sql);
log.info("Successfully created missing partition: {}", partitionName);
return true;
} catch (Exception e) {
log.error("Failed to create missing partition for date: {}", date, e);
return false;
}
}
private Map<LocalDate, List<LocalDateTime>> groupByDate(LocalDateTime start, LocalDateTime end) {
Map<LocalDate, List<LocalDateTime>> groups = new HashMap<>();
LocalDate currentDate = start.toLocalDate();
while (!currentDate.isAfter(end.toLocalDate())) {
LocalDateTime dayStart = currentDate.equals(start.toLocalDate()) ?
start : currentDate.atStartOfDay();
LocalDateTime dayEnd = currentDate.equals(end.toLocalDate()) ?
end : currentDate.plusDays(1).atStartOfDay();
groups.put(currentDate, List.of(dayStart, dayEnd));
currentDate = currentDate.plusDays(1);
}
return groups;
}
}


@ -1,408 +0,0 @@
package gc.mda.signal_batch.batch.reader;
import gc.mda.signal_batch.domain.vessel.model.VesselData;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.item.database.JdbcCursorItemReader;
import org.springframework.batch.item.database.JdbcPagingItemReader;
import org.springframework.batch.item.database.Order;
import org.springframework.batch.item.database.support.PostgresPagingQueryProvider;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.RowMapper;
import org.springframework.stereotype.Component;
import javax.sql.DataSource;
import jakarta.annotation.PostConstruct;
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.HashMap;
import java.util.Map;
@Slf4j
@Component
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
public class VesselDataReader {
private final DataSource collectDataSource;
private final JdbcTemplate collectJdbcTemplate;
@Value("${vessel.filter.zero-coordinates.enabled:false}")
private boolean filterZeroCoordinates;
private static final DateTimeFormatter PARTITION_FORMATTER = DateTimeFormatter.ofPattern("yyMMdd");
public VesselDataReader(
@Qualifier("collectDataSource") DataSource collectDataSource,
@Qualifier("collectJdbcTemplate") JdbcTemplate collectJdbcTemplate) {
this.collectDataSource = collectDataSource;
this.collectJdbcTemplate = collectJdbcTemplate;
}
@PostConstruct
public void init() {
logDataSourceInfo();
log.info("Zero coordinates filter enabled: {}", filterZeroCoordinates);
}
/**
* 0 근처 좌표 필터링 조건 생성
*/
private String getZeroCoordinatesFilter() {
if (filterZeroCoordinates) {
return "AND NOT (lat BETWEEN -1 AND 1 AND lon BETWEEN -1 AND 1) ";
}
return "";
}
/**
* 최신 위치만 가져오는 최적화된 Reader
* DISTINCT ON을 사용하여 선박의 최신 위치만 조회
*/
public JdbcCursorItemReader<VesselData> vesselLatestPositionReader(
LocalDateTime startTime,
LocalDateTime endTime,
String partition) {
log.info("Creating optimized latest position reader from {} to {}", startTime, endTime);
JdbcCursorItemReader<VesselData> reader = new JdbcCursorItemReader<VesselData>() {
@Override
protected void openCursor(Connection con) {
try {
// search_path 설정
try (var stmt = con.createStatement()) {
stmt.execute("SET search_path TO signal, public");
}
} catch (Exception e) {
log.error("Error setting search_path in cursor", e);
throw new RuntimeException("Failed to set search_path", e);
}
super.openCursor(con);
}
};
reader.setDataSource(collectDataSource);
reader.setName("vesselLatestPositionReader");
// 성능 최적화 설정
reader.setFetchSize(10000); // 줄임 (최신 위치만 가져오므로)
reader.setMaxRows(0);
reader.setQueryTimeout(300);
reader.setVerifyCursorPosition(false);
reader.setUseSharedExtendedConnection(false);
reader.setSaveState(false);
String tableName = determineTableName(partition, startTime);
log.info("Using table: {}", tableName);
// 최신 위치만 가져오는 SQL - DISTINCT ON 사용
String sql = String.format("""
SELECT DISTINCT ON (sig_src_cd, target_id)
message_time, real_time, sig_src_cd, target_id,
lat, lon, sog, cog, heading, ship_nm, ship_ty, rot, posacc,
sensor_id, base_st_id, mode, gps_sttus, battery_sttus,
vts_cd, mmsi, vpass_id, ship_no
FROM signal.%s
WHERE message_time >= ? AND message_time < ?
AND sig_src_cd != '000005'
AND lat BETWEEN -90 AND 90
AND lon BETWEEN -180 AND 180
%s
ORDER BY sig_src_cd, target_id, message_time DESC
""", tableName, getZeroCoordinatesFilter());
reader.setSql(sql);
reader.setPreparedStatementSetter(ps -> {
ps.setObject(1, Timestamp.valueOf(startTime));
ps.setObject(2, Timestamp.valueOf(endTime));
});
reader.setRowMapper(new OptimizedVesselDataRowMapper());
// 예상 데이터 건수 로그
try {
Integer expectedCount = collectJdbcTemplate.queryForObject(
"""
SELECT COUNT(*) FROM (
SELECT DISTINCT ON (sig_src_cd, target_id) 1
FROM signal.%s
WHERE message_time >= ? AND message_time < ?
AND sig_src_cd != '000005'
) t
""".formatted(tableName),
Integer.class,
startTime, endTime
);
log.info("Expected record count (latest positions only): {}", expectedCount);
} catch (Exception e) {
log.warn("Could not get expected count: {}", e.getMessage());
}
return reader;
}
/**
* 기존 Cursor Reader (전체 데이터) - 타일 집계 등에 필요한 경우
*/
public JdbcCursorItemReader<VesselData> vesselDataCursorReader(
LocalDateTime startTime,
LocalDateTime endTime,
String partition) {
log.info("Creating cursor reader for partition: {} from {} to {}",
partition, startTime, endTime);
JdbcCursorItemReader<VesselData> reader = new JdbcCursorItemReader<VesselData>() {
@Override
protected void openCursor(Connection con) {
try {
try (var stmt = con.createStatement()) {
stmt.execute("SET search_path TO signal, public");
}
} catch (Exception e) {
log.error("Error setting search_path in cursor", e);
throw new RuntimeException("Failed to set search_path", e);
}
super.openCursor(con);
}
};
reader.setDataSource(collectDataSource);
reader.setName("vesselDataCursorReader");
reader.setFetchSize(50000);
reader.setMaxRows(0);
reader.setQueryTimeout(1800);
reader.setVerifyCursorPosition(false);
reader.setUseSharedExtendedConnection(false);
reader.setSaveState(false);
String tableName = determineTableName(partition, startTime);
log.info("Determined table name: {} for startTime: {}", tableName, startTime);
// 전체 데이터 조회 SQL (타일 집계용)
StringBuilder sql = new StringBuilder();
sql.append("SELECT /*+ PARALLEL(8) */ ");
sql.append("message_time, real_time, sig_src_cd, target_id, ");
sql.append("lat, lon, sog, cog, heading, ship_nm, ship_ty, rot, posacc, ");
sql.append("sensor_id, base_st_id, mode, gps_sttus, battery_sttus, ");
sql.append("vts_cd, mmsi, vpass_id, ship_no ");
sql.append("FROM signal.").append(tableName).append(" ");
sql.append("WHERE message_time >= ? AND message_time < ? AND sig_src_cd != '000005' ");
sql.append(getZeroCoordinatesFilter());
sql.append("ORDER BY message_time, sig_src_cd, target_id");
reader.setSql(sql.toString());
reader.setPreparedStatementSetter(ps -> {
ps.setTimestamp(1, Timestamp.valueOf(startTime));
ps.setTimestamp(2, Timestamp.valueOf(endTime));
});
reader.setRowMapper(new OptimizedVesselDataRowMapper());
return reader;
}
/**
* 기존 Paging Reader (작은 데이터셋용)
*/
public JdbcPagingItemReader<VesselData> vesselDataPagingReader(
LocalDateTime startTime,
LocalDateTime endTime,
String partition) {
JdbcPagingItemReader<VesselData> reader = new JdbcPagingItemReader<>();
reader.setDataSource(collectDataSource);
reader.setPageSize(10000);
reader.setFetchSize(10000);
reader.setRowMapper(new OptimizedVesselDataRowMapper());
String tableName = determineTableName(partition, startTime);
PostgresPagingQueryProvider queryProvider = new PostgresPagingQueryProvider();
queryProvider.setSelectClause("SELECT message_time, real_time, sig_src_cd, target_id, " +
"lat, lon, sog, cog, heading, ship_nm, ship_ty, rot, posacc, " +
"sensor_id, base_st_id, mode, gps_sttus, battery_sttus, " +
"vts_cd, mmsi, vpass_id, ship_no ");
queryProvider.setFromClause("FROM signal." + tableName);
String whereClause = "WHERE message_time >= :startTime AND message_time < :endTime and sig_src_cd != '000005' "
+ getZeroCoordinatesFilter();
queryProvider.setWhereClause(whereClause);
Map<String, Order> sortKeys = new HashMap<>();
sortKeys.put("message_time", Order.ASCENDING);
sortKeys.put("sig_src_cd", Order.ASCENDING);
sortKeys.put("target_id", Order.ASCENDING);
queryProvider.setSortKeys(sortKeys);
reader.setQueryProvider(queryProvider);
Map<String, Object> parameterValues = new HashMap<>();
parameterValues.put("startTime", startTime);
parameterValues.put("endTime", endTime);
reader.setParameterValues(parameterValues);
try {
reader.afterPropertiesSet();
} catch (Exception e) {
log.error("Failed to initialize JdbcPagingItemReader", e);
throw new RuntimeException("Reader initialization failed", e);
}
return reader;
}
/**
* 파티션 테이블 이름 결정
*/
private String determineTableName(String partition, LocalDateTime startTime) {
if (partition != null && !partition.isEmpty()) {
log.debug("Using specified partition: {}", partition);
return partition;
}
LocalDateTime targetTime = startTime != null ? startTime : LocalDateTime.now();
String partitionSuffix = targetTime.format(PARTITION_FORMATTER);
String tableName = "sig_test_" + partitionSuffix;
try {
Boolean exists = collectJdbcTemplate.queryForObject(
"SELECT EXISTS (SELECT 1 FROM pg_tables WHERE schemaname = 'signal' AND tablename = ?)",
Boolean.class,
tableName
);
if (Boolean.TRUE.equals(exists)) {
log.info("Auto-selected partition table: {}", tableName);
return tableName;
} else {
log.warn("Partition table {} does not exist, using sig_test", tableName);
return "sig_test";
}
} catch (Exception e) {
log.error("Error checking partition table existence", e);
return "sig_test";
}
}
/**
* 최적화된 RowMapper
*/
public static class OptimizedVesselDataRowMapper implements RowMapper<VesselData> {
@Override
public VesselData mapRow(ResultSet rs, int rowNum) throws SQLException {
VesselData data = new VesselData();
Timestamp messageTime = rs.getTimestamp(1);
if (messageTime != null) {
data.setMessageTime(messageTime.toLocalDateTime());
}
Timestamp realTime = rs.getTimestamp(2);
if (realTime != null) {
data.setRealTime(realTime.toLocalDateTime());
}
data.setSigSrcCd(rs.getString(3));
data.setTargetId(rs.getString(4));
data.setLat(rs.getDouble(5));
data.setLon(rs.getDouble(6));
data.setSog(rs.getBigDecimal(7));
data.setCog(rs.getBigDecimal(8));
data.setHeading(getIntegerFromNumeric(rs, 9));
data.setShipNm(rs.getString(10));
data.setShipTy(rs.getString(11));
data.setRot(getIntegerFromNumeric(rs, 12));
data.setPosacc(getIntegerFromNumeric(rs, 13));
data.setSensorId(rs.getString(14));
data.setBaseStId(rs.getString(15));
data.setMode(getIntegerFromNumeric(rs, 16));
data.setGpsSttus(getIntegerFromNumeric(rs, 17));
data.setBatterySttus(getIntegerFromNumeric(rs, 18));
data.setVtsCd(rs.getString(19));
data.setMmsi(rs.getString(20));
data.setVpassId(rs.getString(21));
data.setShipNo(rs.getString(22));
return data;
}
private Integer getIntegerFromNumeric(ResultSet rs, int columnIndex) throws SQLException {
Object value = rs.getObject(columnIndex);
if (value == null || rs.wasNull()) {
return null;
}
if (value instanceof java.math.BigDecimal) {
return ((java.math.BigDecimal) value).intValue();
} else if (value instanceof Integer) {
return (Integer) value;
} else if (value instanceof Number) {
return ((Number) value).intValue();
} else if (value instanceof String) {
try {
return Integer.parseInt((String) value);
} catch (NumberFormatException e) {
return null;
}
}
return null;
}
}
private void logDataSourceInfo() {
try {
String info = getDataSourceInfo(collectDataSource);
log.info("VesselDataReader initialized with DataSource: {}", info);
} catch (Exception e) {
log.error("Failed to get DataSource info", e);
}
}
private String getDataSourceInfo(DataSource dataSource) {
try (Connection conn = dataSource.getConnection()) {
DatabaseMetaData meta = conn.getMetaData();
String url = meta.getURL();
String user = meta.getUserName();
String db = conn.getCatalog();
String schema = conn.getSchema();
return String.format("URL=%s, User=%s, DB=%s, Schema=%s", url, user, db, schema);
} catch (Exception e) {
return "Unknown (" + e.getMessage() + ")";
}
}
@SuppressWarnings("unused")
private void testConnection(String tableName) {
try {
try (Connection conn = collectDataSource.getConnection()) {
try (var stmt = conn.createStatement()) {
stmt.execute("SET search_path TO signal, public");
}
String testSql = "SELECT COUNT(*) FROM signal." + tableName + " LIMIT 1";
try (var stmt = conn.createStatement();
var rs = stmt.executeQuery(testSql)) {
if (rs.next()) {
log.info("Direct connection test successful, count: {}", rs.getInt(1));
}
}
}
} catch (Exception e) {
log.error("Connection test failed", e);
}
}
}


@ -87,11 +87,11 @@ public class AbnormalTrackWriter implements ItemWriter<AbnormalDetectionResult>
String sql = String.format("""
INSERT INTO signal.t_abnormal_tracks (
sig_src_cd, target_id, time_bucket, %s,
mmsi, time_bucket, %s,
abnormal_type, abnormal_reason, distance_nm, avg_speed,
max_speed, point_count, source_table
) VALUES (?, ?, ?, public.ST_GeomFromText(?::text, 4326), ?, ?::jsonb, ?, ?, ?, ?, ?)
ON CONFLICT (sig_src_cd, target_id, time_bucket, source_table)
) VALUES (?, ?, public.ST_GeomFromText(?::text, 4326), ?, ?::jsonb, ?, ?, ?, ?, ?)
ON CONFLICT (mmsi, time_bucket, source_table)
DO UPDATE SET
%s = EXCLUDED.%s,
abnormal_type = EXCLUDED.abnormal_type,
@ -137,8 +137,7 @@ public class AbnormalTrackWriter implements ItemWriter<AbnormalDetectionResult>
}
batchArgs.add(new Object[] {
track.getSigSrcCd(),
track.getTargetId(),
track.getMmsi(),
Timestamp.valueOf(track.getTimeBucket()),
geomWkt,
mainAbnormalType,


@ -0,0 +1,58 @@
package gc.mda.signal_batch.batch.writer;
import gc.mda.signal_batch.batch.reader.AisTargetCacheManager;
import gc.mda.signal_batch.batch.reader.ChnPrmShipCacheManager;
import gc.mda.signal_batch.domain.vessel.model.AisTargetEntity;
import gc.mda.signal_batch.global.util.SignalKindCode;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.item.Chunk;
import org.springframework.batch.item.ItemWriter;
import org.springframework.stereotype.Component;
import java.util.List;
/**
* AIS Target 캐시 Writer
*
* 처리 순서:
* 1. SignalKindCode 치환 (vesselType + extraInfo 기반 MDA 범례코드 분류)
* 2. AisTargetCacheManager에 일괄 저장 (최신 위치)
* 3. AisTargetCacheManager 트랙 버퍼에 누적 (5분 집계 LineStringM 생성용)
* 4. ChnPrmShipCacheManager에 대상 MMSI만 필터 저장
*
* DB 저장은 Phase 3의 AisPositionSyncStep에서 5분 집계 Job에 편승하여 수행.
*/
@Slf4j
@Component
@RequiredArgsConstructor
public class AisTargetCacheWriter implements ItemWriter<AisTargetEntity> {
private final AisTargetCacheManager cacheManager;
private final ChnPrmShipCacheManager chnPrmShipCacheManager;
@Override
public void write(Chunk<? extends AisTargetEntity> chunk) {
List<? extends AisTargetEntity> items = chunk.getItems();
log.debug("AIS Target 캐시 업데이트 시작: {} 건", items.size());
// 1. SignalKindCode 치환
items.forEach(item -> {
SignalKindCode kindCode = SignalKindCode.resolve(item.getVesselType(), item.getExtraInfo());
item.setSignalKindCode(kindCode.getCode());
});
// 2. 메인 캐시 업데이트 (최신 위치, t_ais_position 동기화용)
@SuppressWarnings("unchecked")
List<AisTargetEntity> entityList = (List<AisTargetEntity>) items;
cacheManager.putAll(entityList);
// 3. 트랙 버퍼에 누적 (5분 집계 LineStringM 생성용)
cacheManager.appendAllForTrack(entityList);
log.debug("AIS Target 캐시 업데이트 완료: {} 건 (캐시 크기: {})",
items.size(), cacheManager.size());
// 4. ChnPrmShip 전용 캐시 업데이트
chnPrmShipCacheManager.putIfTarget(entityList);
}
}


@ -1,702 +0,0 @@
package gc.mda.signal_batch.batch.writer;
import com.google.common.util.concurrent.ThreadFactoryBuilder;
import gc.mda.signal_batch.domain.gis.model.TileStatistics;
import gc.mda.signal_batch.batch.processor.AreaStatisticsProcessor;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;
import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule;
import com.google.common.collect.Lists;
import jakarta.annotation.PostConstruct;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.postgresql.copy.CopyManager;
import org.postgresql.core.BaseConnection;
import org.springframework.batch.item.Chunk;
import org.springframework.batch.item.ItemWriter;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;
import org.springframework.util.StopWatch;
import javax.sql.DataSource;
import java.io.*;
import java.sql.Connection;
import java.sql.Timestamp;
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.*;
import java.util.stream.Collectors;
@Slf4j
@Component
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
public class OptimizedBulkInsertWriter implements DisposableBean {
private final DataSource queryDataSource;
private final JdbcTemplate queryJdbcTemplate;
public OptimizedBulkInsertWriter(
@Qualifier("queryDataSource") DataSource queryDataSource,
@Qualifier("queryJdbcTemplate") JdbcTemplate queryJdbcTemplate) {
this.queryDataSource = queryDataSource;
this.queryJdbcTemplate = queryJdbcTemplate;
System.out.println("========================================");
System.out.println("!!! OptimizedBulkInsertWriter initialized !!!");
System.out.println("queryDataSource: " + queryDataSource);
System.out.println("queryJdbcTemplate DataSource: " + queryJdbcTemplate.getDataSource());
System.out.println("========================================");
}
private final ObjectMapper objectMapper = createObjectMapper();
private static ObjectMapper createObjectMapper() {
ObjectMapper mapper = new ObjectMapper();
mapper.registerModule(new JavaTimeModule());
mapper.disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS);
mapper.setDateFormat(new java.text.SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"));
mapper.setTimeZone(java.util.TimeZone.getTimeZone("Asia/Seoul"));
return mapper;
}
@Value("${vessel.batch.bulk-insert.batch-size:50000}")
private int batchSize;
@Value("${vessel.batch.bulk-insert.parallel-threads:4}")
private int parallelThreads;
@Value("${vessel.batch.bulk-insert.use-binary-copy:false}")
private boolean useBinaryCopy;
private volatile ExecutorService executorService;
private static final DateTimeFormatter TIMESTAMP_FORMATTER =
DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
@PostConstruct
public void init() {
initializeExecutorService();
}
/**
* Initialize or re-initialize the ExecutorService
*/
private synchronized void initializeExecutorService() {
if (executorService == null || executorService.isShutdown() || executorService.isTerminated()) {
if (executorService != null && !executorService.isShutdown()) {
executorService.shutdown();
}
int threadCount = Math.max(8, Runtime.getRuntime().availableProcessors() * 2);
executorService = Executors.newFixedThreadPool(threadCount,
new ThreadFactoryBuilder()
.setNameFormat("bulk-insert-worker-%d")
.setDaemon(true) // daemon threads are cleaned up automatically on JVM shutdown
.build());
log.info("ExecutorService initialized with {} threads", threadCount);
}
}
/**
* Check ExecutorService health and re-initialize if needed
*/
private ExecutorService getHealthyExecutorService() {
if (executorService == null || executorService.isShutdown() || executorService.isTerminated()) {
log.warn("ExecutorService is not healthy, reinitializing...");
initializeExecutorService();
}
return executorService;
}
/**
* TileStatistics Bulk Writer
*/
public ItemWriter<List<TileStatistics>> tileStatisticsBulkWriter() {
return new ItemWriter<List<TileStatistics>>() {
@Override
public void write(Chunk<? extends List<TileStatistics>> chunk) throws Exception {
List<TileStatistics> allStats = chunk.getItems().stream()
.flatMap(List::stream)
.collect(Collectors.toList());
if (allStats.isEmpty()) {
return;
}
StopWatch stopWatch = new StopWatch();
stopWatch.start();
try {
// Group by partition date
Map<LocalDate, List<TileStatistics>> partitionedData =
allStats.stream()
.collect(Collectors.groupingBy(
stat -> stat.getTimeBucket().toLocalDate()
));
// Process batches in parallel
List<CompletableFuture<BulkInsertResult>> futures = new ArrayList<>();
for (Map.Entry<LocalDate, List<TileStatistics>> entry : partitionedData.entrySet()) {
LocalDate date = entry.getKey();
List<TileStatistics> data = entry.getValue();
// Split into batch-sized slices
Lists.partition(data, batchSize).forEach(batch -> {
try {
ExecutorService healthyExecutor = getHealthyExecutorService();
CompletableFuture<BulkInsertResult> future = CompletableFuture.supplyAsync(() ->
insertTileStatisticsBatch(date, batch), healthyExecutor
);
futures.add(future);
} catch (RejectedExecutionException e) {
log.warn("RejectedExecutionException caught, falling back to synchronous processing");
BulkInsertResult result = insertTileStatisticsBatch(date, batch);
futures.add(CompletableFuture.completedFuture(result));
}
});
}
// Wait for all tasks to complete
CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
// Aggregate results
long totalInserted = futures.stream()
.map(CompletableFuture::join)
.mapToLong(result -> result.rowsInserted)
.sum();
stopWatch.stop();
log.info("Bulk inserted {} tile statistics in {} ms",
totalInserted, stopWatch.getTotalTimeMillis());
} catch (Exception e) {
// Unwrap the actual cause from CompletionException
Throwable cause = e;
if (e instanceof CompletionException && e.getCause() != null) {
cause = e.getCause();
if (cause instanceof RuntimeException && cause.getCause() != null) {
cause = cause.getCause();
}
}
// Duplicate-key errors are expected here
if (cause.getMessage() != null && cause.getMessage().contains("중복된 키")) {
log.debug("Duplicate key errors detected during bulk insert, using fallback UPSERT");
} else {
log.error("Bulk insert failed, falling back to batch insert", e);
}
// Retry in a new transaction
try {
fallbackBatchInsert(allStats);
} catch (Exception fallbackEx) {
log.error("Fallback insert also failed", fallbackEx);
throw fallbackEx;
}
}
}
};
}
/**
* Process a single batch
*/
private BulkInsertResult insertTileStatisticsBatch(LocalDate date,
List<TileStatistics> batch) {
String tableName = "t_tile_summary_" + date.format(DateTimeFormatter.BASIC_ISO_DATE);
// Check that the partition exists
if (!checkTableExists(tableName)) {
tableName = "t_tile_summary"; // fall back to the base table
}
try (Connection conn = queryDataSource.getConnection()) {
BaseConnection baseConn = conn.unwrap(BaseConnection.class);
CopyManager copyManager = new CopyManager(baseConn);
if (useBinaryCopy) {
return binaryCopyInsert(copyManager, tableName, batch);
} else {
return textCopyInsert(copyManager, tableName, batch);
}
} catch (Exception e) {
if (e.getMessage() != null && e.getMessage().contains("duplicate key")) {
// Duplicate keys are expected, so log at DEBUG level
log.debug("Duplicate entries detected for table {} - switching to UPSERT mode", tableName);
// Run the UPSERT in a new transaction
try {
return upsertBatch(tableName, batch);
} catch (Exception upsertEx) {
log.error("UPSERT also failed for table {}", tableName, upsertEx);
throw new RuntimeException("Both COPY and UPSERT failed", upsertEx);
}
}
log.error("Failed to insert batch for table {}", tableName, e);
throw new RuntimeException("Batch insert failed", e);
}
}
/**
* Text-format COPY
*/
private BulkInsertResult textCopyInsert(CopyManager copyManager, String tableName,
List<TileStatistics> batch) throws Exception {
String copySql = String.format("""
COPY signal.%s (
tile_id, tile_level, time_bucket, vessel_count,
unique_vessels, total_points, avg_sog, max_sog,
vessel_density, created_at
) FROM STDIN
""", tableName);
try (PipedOutputStream pos = new PipedOutputStream();
PipedInputStream pis = new PipedInputStream(pos, 1024 * 1024); // 1MB buffer
PrintWriter writer = new PrintWriter(new BufferedWriter(
new OutputStreamWriter(pos, "UTF-8"), 65536))) { // 64KB buffer
// Write data asynchronously
CompletableFuture<Void> writerFuture = CompletableFuture.runAsync(() -> {
try {
for (TileStatistics stat : batch) {
writer.println(formatCsvLine(stat));
}
} finally {
writer.close();
}
});
// Run COPY
long rowsInserted = copyManager.copyIn(copySql, pis);
// Wait for the writer to finish
writerFuture.join();
return new BulkInsertResult(rowsInserted, null);
}
}
/**
* Binary-format COPY (faster than text)
*/
private BulkInsertResult binaryCopyInsert(CopyManager copyManager, String tableName,
List<TileStatistics> batch) throws Exception {
String copySql = String.format("""
COPY signal.%s (
tile_id, tile_level, time_bucket, vessel_count,
unique_vessels, total_points, avg_sog, max_sog,
vessel_density, created_at
) FROM STDIN WITH (FORMAT BINARY)
""", tableName);
try (ByteArrayOutputStream baos = new ByteArrayOutputStream()) {
// PostgreSQL binary COPY header
writeBinaryHeader(baos);
// Write rows
for (TileStatistics stat : batch) {
writeBinaryRow(baos, stat);
}
// Trailer
writeBinaryTrailer(baos);
// Run COPY
try (ByteArrayInputStream bais = new ByteArrayInputStream(baos.toByteArray())) {
long rowsInserted = copyManager.copyIn(copySql, bais);
return new BulkInsertResult(rowsInserted, null);
}
}
}
/**
* Format one line for COPY text input
*/
private String formatCsvLine(TileStatistics stat) {
String json = convertToJson(stat.getUniqueVessels());
// In TEXT format only backslashes, tabs, and newlines need escaping
String escapedJson = json.replace("\\", "\\\\")
.replace("\t", "\\t")
.replace("\n", "\\n")
.replace("\r", "\\r");
return String.format("%s\t%d\t%s\t%d\t%s\t%d\t%s\t%s\t%s\t%s",
stat.getTileId(),
stat.getTileLevel(),
stat.getTimeBucket().format(TIMESTAMP_FORMATTER),
stat.getVesselCount(),
escapedJson,
stat.getTotalPoints(),
stat.getAvgSog() != null ? stat.getAvgSog().toString() : "\\N",
stat.getMaxSog() != null ? stat.getMaxSog().toString() : "\\N",
stat.getVesselDensity() != null ? stat.getVesselDensity().toString() : "\\N",
LocalDateTime.now().format(TIMESTAMP_FORMATTER)
);
}
/**
* Escape CSV special characters
*/
@SuppressWarnings("unused")
private String escapeCsv(String value) {
if (value == null) return "NULL";
return value.replace("\\", "\\\\")
.replace("|", "\\|")
.replace("\n", "\\n")
.replace("\r", "\\r")
.replace("\"", "\\\"");
}
/**
* Escape JSON
*/
@SuppressWarnings("unused")
private String escapeJson(String json) {
if (json == null) return "NULL";
return json.replace("\\", "\\\\")
.replace("|", "\\|")
.replace("\n", "\\n")
.replace("\r", "\\r");
}
/**
* Convert an object to JSON
*/
private String convertToJson(Object obj) {
try {
if (obj == null) return "{}";
// Use the class-level objectMapper
String json = objectMapper.writeValueAsString(obj);
// Log the generated JSON for verification
if (log.isDebugEnabled()) {
log.debug("Generated JSON: {}", json);
}
return json;
} catch (Exception e) {
log.error("Error converting to JSON: {}", obj, e);
return "{}";
}
}
/**
* UPSERT batch processing (used on duplicate-key conflicts)
*/
private BulkInsertResult upsertBatch(String tableName, List<TileStatistics> batch) {
// Always include tile_level as well
String sql = String.format("""
INSERT INTO signal.%s (
tile_id, tile_level, time_bucket, vessel_count,
unique_vessels, total_points, avg_sog, max_sog,
vessel_density, created_at
) VALUES (?, ?, ?, ?, ?::jsonb, ?, ?, ?, ?, ?)
ON CONFLICT (tile_id, time_bucket, tile_level) DO UPDATE SET
vessel_count = EXCLUDED.vessel_count,
unique_vessels = EXCLUDED.unique_vessels,
total_points = EXCLUDED.total_points,
avg_sog = EXCLUDED.avg_sog,
max_sog = EXCLUDED.max_sog,
vessel_density = EXCLUDED.vessel_density,
created_at = EXCLUDED.created_at
""", tableName);
long totalUpdated = 0;
// Split into batch-sized slices
for (List<TileStatistics> partition : Lists.partition(batch, 1000)) {
List<Object[]> args = partition.stream()
.map(stat -> new Object[] {
stat.getTileId(),
stat.getTileLevel(),
Timestamp.valueOf(stat.getTimeBucket()),
stat.getVesselCount(),
convertToJson(stat.getUniqueVessels()),
stat.getTotalPoints(),
stat.getAvgSog(),
stat.getMaxSog(),
stat.getVesselDensity(),
Timestamp.valueOf(LocalDateTime.now())
})
.collect(Collectors.toList());
int[] results = queryJdbcTemplate.batchUpdate(sql, args);
for (int result : results) {
totalUpdated += result;
}
}
log.info("Upserted {} records in table {}", totalUpdated, tableName);
return new BulkInsertResult(totalUpdated, null);
}
/**
* Fallback batch insert
*/
private void fallbackBatchInsert(List<TileStatistics> stats) {
String sql = """
INSERT INTO signal.t_tile_summary (
tile_id, tile_level, time_bucket, vessel_count,
unique_vessels, total_points, avg_sog, max_sog,
vessel_density, created_at
) VALUES (?, ?, ?, ?, ?::jsonb, ?, ?, ?, ?, ?)
ON CONFLICT (tile_id, time_bucket, tile_level) DO UPDATE SET
vessel_count = EXCLUDED.vessel_count,
unique_vessels = EXCLUDED.unique_vessels,
total_points = EXCLUDED.total_points,
avg_sog = EXCLUDED.avg_sog,
max_sog = EXCLUDED.max_sog,
vessel_density = EXCLUDED.vessel_density,
created_at = EXCLUDED.created_at
""";
// Split into batch-sized slices
Lists.partition(stats, 1000).forEach(batch -> {
List<Object[]> args = batch.stream()
.map(stat -> new Object[] {
stat.getTileId(),
stat.getTileLevel(),
Timestamp.valueOf(stat.getTimeBucket()),
stat.getVesselCount(),
convertToJson(stat.getUniqueVessels()),
stat.getTotalPoints(),
stat.getAvgSog(),
stat.getMaxSog(),
stat.getVesselDensity(),
Timestamp.valueOf(LocalDateTime.now())
})
.collect(Collectors.toList());
queryJdbcTemplate.batchUpdate(sql, args);
});
}
/**
* AreaStatistics Bulk Writer
*/
public ItemWriter<List<AreaStatisticsProcessor.AreaStatistics>>
areaStatisticsBulkWriter() {
return new ItemWriter<List<AreaStatisticsProcessor.AreaStatistics>>() {
@Override
public void write(Chunk<? extends List<AreaStatisticsProcessor.AreaStatistics>> chunk)
throws Exception {
List<AreaStatisticsProcessor.AreaStatistics> allStats =
chunk.getItems().stream()
.flatMap(List::stream)
.collect(Collectors.toList());
if (allStats.isEmpty()) {
return;
}
// 배치 크기로 분할하여 병렬 처리
Lists.partition(allStats, batchSize)
.parallelStream()
.forEach(batch -> insertAreaStatisticsBatch(batch));
}
};
}
private void insertAreaStatisticsBatch(
List<AreaStatisticsProcessor.AreaStatistics> batch) {
try (Connection conn = queryDataSource.getConnection()) {
BaseConnection baseConn = conn.unwrap(BaseConnection.class);
CopyManager copyManager = new CopyManager(baseConn);
String copySql = """
COPY signal.t_area_statistics (
area_id, time_bucket, vessel_count,
in_count, out_count, transit_vessels,
stationary_vessels, avg_sog, created_at
) FROM STDIN WITH (FORMAT CSV, DELIMITER '|', NULL 'NULL')
""";
StringWriter writer = new StringWriter();
for (var stat : batch) {
writer.write(String.format("%s|%s|%d|%d|%d|%s|%s|%s|%s\n",
stat.getAreaId(),
stat.getTimeBucket().format(TIMESTAMP_FORMATTER),
stat.getVesselCount(),
stat.getInCount(),
stat.getOutCount(),
escapeJson(convertToJson(stat.getTransitVessels())),
escapeJson(convertToJson(stat.getStationaryVessels())),
stat.getAvgSog() != null ? stat.getAvgSog().toString() : "NULL",
LocalDateTime.now().format(TIMESTAMP_FORMATTER)
));
}
long rowsInserted = copyManager.copyIn(copySql, new StringReader(writer.toString()));
log.debug("Inserted {} area statistics", rowsInserted);
} catch (Exception e) {
log.error("Failed to bulk insert area statistics", e);
// Fallback 처리
}
}
/**
* Check whether a table exists
*/
private boolean checkTableExists(String tableName) {
String sql = "SELECT EXISTS (SELECT 1 FROM pg_tables WHERE schemaname = 'signal' AND tablename = ?)";
return Boolean.TRUE.equals(queryJdbcTemplate.queryForObject(sql, Boolean.class, tableName));
}
/**
* Binary-format helper methods
*/
private void writeBinaryHeader(ByteArrayOutputStream baos) throws IOException {
// PostgreSQL binary COPY header
baos.write("PGCOPY\n\377\r\n\0".getBytes("UTF-8"));
// Flags
writeInt32(baos, 0);
// Header extension length
writeInt32(baos, 0);
}
private void writeBinaryTrailer(ByteArrayOutputStream baos) throws IOException {
// -1 marks EOF
writeInt16(baos, -1);
}
private void writeBinaryRow(ByteArrayOutputStream baos,
TileStatistics stat) throws IOException {
// Field count
writeInt16(baos, 10);
// Write each field
writeString(baos, stat.getTileId());
writeInt32(baos, stat.getTileLevel());
writeTimestamp(baos, stat.getTimeBucket());
writeInt32(baos, stat.getVesselCount());
writeString(baos, convertToJson(stat.getUniqueVessels()));
writeInt64(baos, stat.getTotalPoints());
writeBigDecimal(baos, stat.getAvgSog());
writeBigDecimal(baos, stat.getMaxSog());
writeBigDecimal(baos, stat.getVesselDensity());
writeTimestamp(baos, LocalDateTime.now());
}
private void writeInt16(ByteArrayOutputStream baos, int value) throws IOException {
baos.write((value >> 8) & 0xFF);
baos.write(value & 0xFF);
}
private void writeInt32(ByteArrayOutputStream baos, int value) throws IOException {
baos.write((value >> 24) & 0xFF);
baos.write((value >> 16) & 0xFF);
baos.write((value >> 8) & 0xFF);
baos.write(value & 0xFF);
}
private void writeInt64(ByteArrayOutputStream baos, long value) throws IOException {
for (int i = 56; i >= 0; i -= 8) {
baos.write((int)(value >> i) & 0xFF);
}
}
private void writeString(ByteArrayOutputStream baos, String value) throws IOException {
if (value == null) {
writeInt32(baos, -1); // NULL
} else {
byte[] bytes = value.getBytes("UTF-8");
writeInt32(baos, bytes.length);
baos.write(bytes);
}
}
private void writeTimestamp(ByteArrayOutputStream baos, LocalDateTime value) throws IOException {
if (value == null) {
writeInt32(baos, -1); // NULL
} else {
// Binary COPY expects microseconds since the PostgreSQL epoch (2000-01-01 UTC), not the Unix epoch
long unixMicros = value.atZone(java.time.ZoneId.systemDefault())
.toInstant().toEpochMilli() * 1000;
writeInt32(baos, 8); // length
writeInt64(baos, unixMicros - 946_684_800_000_000L);
}
}
private void writeBigDecimal(ByteArrayOutputStream baos, java.math.BigDecimal value)
throws IOException {
if (value == null) {
writeInt32(baos, -1); // NULL
} else {
writeString(baos, value.toString());
}
}
/**
* Result holder
*/
private static class BulkInsertResult {
final long rowsInserted;
@SuppressWarnings("unused")
final String error;
BulkInsertResult(long rowsInserted, String error) {
this.rowsInserted = rowsInserted;
this.error = error;
}
}
/**
* Release resources
*/
public void shutdown() {
if (executorService != null && !executorService.isShutdown()) {
executorService.shutdown();
try {
if (!executorService.awaitTermination(60, TimeUnit.SECONDS)) {
executorService.shutdownNow();
}
} catch (InterruptedException e) {
executorService.shutdownNow();
}
}
}
@Override
public void destroy() throws Exception {
log.info("Shutting down OptimizedBulkInsertWriter ExecutorService");
if (executorService != null && !executorService.isShutdown()) {
executorService.shutdown();
try {
if (!executorService.awaitTermination(30, TimeUnit.SECONDS)) {
log.warn("ExecutorService did not terminate gracefully, forcing shutdown");
executorService.shutdownNow();
}
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
executorService.shutdownNow();
}
}
}
}
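One subtlety the binary-COPY path above depends on: PostgreSQL's `COPY ... WITH (FORMAT BINARY)` encodes a timestamp field as a big-endian int64 of microseconds since the PostgreSQL epoch (2000-01-01 00:00:00 UTC), not the Unix epoch. A minimal conversion sketch (class name is illustrative):

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public final class PgBinaryTimestamp {
    // PostgreSQL epoch: 2000-01-01T00:00:00Z (946,684,800 s after the Unix epoch)
    private static final Instant PG_EPOCH = Instant.parse("2000-01-01T00:00:00Z");

    // Microseconds since the PostgreSQL epoch, as expected by COPY ... WITH (FORMAT BINARY)
    public static long toPgMicros(Instant instant) {
        return ChronoUnit.MICROS.between(PG_EPOCH, instant);
    }
}
```

Feeding Unix-epoch microseconds into a binary timestamp field shifts every value by thirty years, so the offset is worth keeping in a named constant or helper like this.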

파일 보기

@ -1,271 +0,0 @@
package gc.mda.signal_batch.batch.writer;
import gc.mda.signal_batch.domain.vessel.model.VesselLatestPosition;
import gc.mda.signal_batch.batch.processor.AreaStatisticsProcessor.AreaStatistics;
import gc.mda.signal_batch.global.util.ConcurrentUpdateManager;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.item.Chunk;
import org.springframework.batch.item.ItemWriter;
import org.springframework.batch.item.database.JdbcBatchItemWriter;
import org.springframework.batch.item.database.BeanPropertyItemSqlParameterSourceProvider;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import javax.sql.DataSource;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.*;
@Slf4j
@Configuration
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
public class UpsertWriter {
private final DataSource queryDataSource;
private final ConcurrentUpdateManager concurrentUpdateManager;
public UpsertWriter(
@Qualifier("queryDataSource") DataSource queryDataSource,
ConcurrentUpdateManager concurrentUpdateManager) {
this.queryDataSource = queryDataSource;
this.concurrentUpdateManager = concurrentUpdateManager;
System.out.println("========================================");
System.out.println("!!! UpsertWriter initialized !!!");
System.out.println("queryDataSource: " + queryDataSource);
System.out.println("========================================");
}
@Value("${vessel.batch.writer.use-advisory-lock:false}")
private boolean useAdvisoryLock;
@Value("${vessel.batch.writer.parallel-threads:4}")
private int parallelThreads;
private static final ExecutorService executorService = new ThreadPoolExecutor(
4, 8,
60L, TimeUnit.SECONDS,
new LinkedBlockingQueue<>(100),
new ThreadPoolExecutor.CallerRunsPolicy()
);
// Register a shutdown hook
static {
Runtime.getRuntime().addShutdownHook(new Thread(() -> {
log.info("Shutting down executor service...");
executorService.shutdown();
try {
if (!executorService.awaitTermination(60, TimeUnit.SECONDS)) {
executorService.shutdownNow();
}
} catch (InterruptedException e) {
executorService.shutdownNow();
}
}));
}
private final ObjectMapper objectMapper = new ObjectMapper()
.registerModule(new JavaTimeModule());
/**
* Latest-position writer - uses an advisory lock
*/
@Bean
public ItemWriter<VesselLatestPosition> latestPositionWriter() {
if (useAdvisoryLock) {
return new ItemWriter<VesselLatestPosition>() {
@Override
public void write(Chunk<? extends VesselLatestPosition> chunk) throws Exception {
List<VesselLatestPosition> items = new ArrayList<>(chunk.getItems());
// Split for parallel processing
int batchSize = Math.max(1, items.size() / parallelThreads);
List<CompletableFuture<Void>> futures = new ArrayList<>();
for (int i = 0; i < items.size(); i += batchSize) {
int endIndex = Math.min(i + batchSize, items.size());
List<VesselLatestPosition> batch = items.subList(i, endIndex);
CompletableFuture<Void> future = CompletableFuture.runAsync(() -> {
for (VesselLatestPosition position : batch) {
try {
concurrentUpdateManager.updateLatestPositionWithLock(position);
} catch (Exception e) {
log.error("Failed to update position: {}", position.getTargetId(), e);
}
}
}, executorService);
futures.add(future);
}
// Wait for all tasks to complete
CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]))
.get(5, TimeUnit.MINUTES);
log.debug("Updated {} vessel positions", items.size());
}
};
} else {
// Default path (batch update)
return defaultLatestPositionWriter();
}
}
/**
* Default batch writer
*/
private JdbcBatchItemWriter<VesselLatestPosition> defaultLatestPositionWriter() {
return customLatestPositionWriter();
}
/**
* Custom writer - treats UPDATE affecting 0 rows as normal
*/
private JdbcBatchItemWriter<VesselLatestPosition> customLatestPositionWriter() {
String sql = """
INSERT INTO signal.t_vessel_latest_position (
sig_src_cd, target_id, lat, lon, geom,
sog, cog, heading, ship_nm, ship_ty,
last_update, update_count, created_at
) VALUES (
:sigSrcCd, :targetId, :lat, :lon,
public.ST_SetSRID(public.ST_MakePoint(:lon, :lat), 4326),
:sog, :cog, :heading, :shipNm, :shipTy,
:lastUpdate, 1, CURRENT_TIMESTAMP
)
ON CONFLICT (sig_src_cd, target_id) DO UPDATE SET
lat = EXCLUDED.lat,
lon = EXCLUDED.lon,
geom = EXCLUDED.geom,
sog = EXCLUDED.sog,
cog = EXCLUDED.cog,
heading = EXCLUDED.heading,
ship_nm = COALESCE(EXCLUDED.ship_nm, t_vessel_latest_position.ship_nm),
ship_ty = COALESCE(EXCLUDED.ship_ty, t_vessel_latest_position.ship_ty),
last_update = EXCLUDED.last_update,
update_count = t_vessel_latest_position.update_count + 1
WHERE EXCLUDED.last_update > t_vessel_latest_position.last_update
""";
JdbcBatchItemWriter<VesselLatestPosition> writer = new JdbcBatchItemWriter<VesselLatestPosition>() {
@Override
public void write(Chunk<? extends VesselLatestPosition> chunk) throws Exception {
// Disable assertUpdates so that UPDATE affecting 0 rows is allowed
this.setAssertUpdates(false);
super.write(chunk);
}
};
writer.setDataSource(queryDataSource);
writer.setSql(sql);
writer.setItemSqlParameterSourceProvider(new BeanPropertyItemSqlParameterSourceProvider<>());
writer.afterPropertiesSet();
return writer;
}
/**
* Area statistics writer
*/
@Bean
public ItemWriter<List<AreaStatistics>> areaStatisticsWriter() {
return new ItemWriter<List<AreaStatistics>>() {
@Override
public void write(Chunk<? extends List<AreaStatistics>> chunk) throws Exception {
// Use a Map to de-duplicate
Map<String, AreaStatistics> uniqueStats = new HashMap<>();
for (List<AreaStatistics> batch : chunk.getItems()) {
for (AreaStatistics stat : batch) {
String key = stat.getAreaId() + "_" + stat.getTimeBucket();
// On duplicates, the later record wins
uniqueStats.put(key, stat);
}
}
List<AreaStatistics> allStats = new ArrayList<>(uniqueStats.values());
if (allStats.isEmpty()) {
return;
}
// Split the batch into smaller units
int batchSize = 500;
JdbcTemplate jdbcTemplate = new JdbcTemplate(queryDataSource);
jdbcTemplate.setQueryTimeout(60); // 60-second timeout
for (int i = 0; i < allStats.size(); i += batchSize) {
int endIndex = Math.min(i + batchSize, allStats.size());
List<AreaStatistics> subBatch = allStats.subList(i, endIndex);
String sql = """
INSERT INTO signal.t_area_statistics (
area_id, time_bucket, vessel_count, in_count, out_count,
transit_vessels, stationary_vessels, avg_sog, created_at
) VALUES (
?, ?, ?, ?, ?,
?::jsonb, ?::jsonb, ?, CURRENT_TIMESTAMP
)
ON CONFLICT (area_id, time_bucket) DO UPDATE SET
vessel_count = EXCLUDED.vessel_count,
in_count = EXCLUDED.in_count,
out_count = EXCLUDED.out_count,
transit_vessels = EXCLUDED.transit_vessels,
stationary_vessels = EXCLUDED.stationary_vessels,
avg_sog = EXCLUDED.avg_sog
""";
List<Object[]> batchArgs = new ArrayList<>();
for (AreaStatistics stats : subBatch) {
batchArgs.add(new Object[]{
stats.getAreaId(),
java.sql.Timestamp.valueOf(stats.getTimeBucket()),
stats.getVesselCount(),
stats.getInCount(),
stats.getOutCount(),
objectMapper.writeValueAsString(stats.getTransitVessels()),
objectMapper.writeValueAsString(stats.getStationaryVessels()),
stats.getAvgSog()
});
}
try {
jdbcTemplate.batchUpdate(sql, batchArgs);
log.debug("Updated {} area statistics records", subBatch.size());
} catch (Exception e) {
log.error("Failed to update batch of {} area statistics", subBatch.size(), e);
throw e;
}
}
log.info("Total updated {} area statistics records", allStats.size());
}
};
}
/**
* Release resources
*/
public void shutdown() {
executorService.shutdown();
try {
if (!executorService.awaitTermination(60, TimeUnit.SECONDS)) {
executorService.shutdownNow();
}
} catch (InterruptedException e) {
executorService.shutdownNow();
}
}
}
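The area-statistics writer above de-duplicates records on the composite key `(areaId, timeBucket)` before batching, with the last record winning. That pattern is easy to isolate; the record type below is a hypothetical simplification of `AreaStatistics`:

```java
import java.util.*;

public class AreaStatsDedup {
    public record AreaStat(String areaId, String timeBucket, int vesselCount) {}

    // Last write wins per (areaId, timeBucket), mirroring the Map-based dedup in the writer
    public static List<AreaStat> dedup(List<AreaStat> stats) {
        Map<String, AreaStat> unique = new LinkedHashMap<>();
        for (AreaStat s : stats) {
            unique.put(s.areaId() + "_" + s.timeBucket(), s);
        }
        return new ArrayList<>(unique.values());
    }
}
```

Deduplicating up front keeps the subsequent `ON CONFLICT ... DO UPDATE` from touching the same row twice within one batch, which PostgreSQL rejects.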

파일 보기

@ -108,7 +108,7 @@ public class VesselTrackBulkWriter implements ItemWriter<List<VesselTrack>> {
}
}
// Bulk upsert using a temp table + MERGE pattern
// Bulk insert using a temp table + COPY pattern
private void bulkInsertTracks(List<VesselTrack> tracks, String tableName) throws Exception {
try (Connection conn = queryDataSource.getConnection()) {
conn.setAutoCommit(false);
@ -122,8 +122,7 @@ public class VesselTrackBulkWriter implements ItemWriter<List<VesselTrack>> {
try (var stmt = conn.createStatement()) {
stmt.execute(String.format("""
CREATE TEMP TABLE IF NOT EXISTS %s (
sig_src_cd VARCHAR(10),
target_id VARCHAR(30),
mmsi VARCHAR(20),
time_bucket TIMESTAMP,
track_geom GEOMETRY,
distance_nm NUMERIC,
@ -142,7 +141,7 @@ public class VesselTrackBulkWriter implements ItemWriter<List<VesselTrack>> {
// 2. Bulk-insert into the temp table via COPY
String copySql = String.format("""
COPY %s (
sig_src_cd, target_id, time_bucket, track_geom,
mmsi, time_bucket, track_geom,
distance_nm, avg_speed, max_speed, point_count,
start_position, end_position
) FROM STDIN
@ -156,37 +155,29 @@ public class VesselTrackBulkWriter implements ItemWriter<List<VesselTrack>> {
long rowsCopied = copyManager.copyIn(copySql, new StringReader(writer.toString()));
// 3. UPSERT from the temp table into the final table
String upsertSql = String.format("""
// 3. INSERT from the temp table into the final table (ignoring duplicates)
String insertSql = String.format("""
INSERT INTO %s (
sig_src_cd, target_id, time_bucket, track_geom,
mmsi, time_bucket, track_geom,
distance_nm, avg_speed, max_speed, point_count,
start_position, end_position
)
SELECT
sig_src_cd, target_id, time_bucket, track_geom,
mmsi, time_bucket, track_geom,
distance_nm, avg_speed, max_speed, point_count,
start_position, end_position
FROM %s
ON CONFLICT (sig_src_cd, target_id, time_bucket)
DO UPDATE SET
track_geom = EXCLUDED.track_geom,
distance_nm = EXCLUDED.distance_nm,
avg_speed = EXCLUDED.avg_speed,
max_speed = EXCLUDED.max_speed,
point_count = EXCLUDED.point_count,
start_position = EXCLUDED.start_position,
end_position = EXCLUDED.end_position
ON CONFLICT (mmsi, time_bucket) DO NOTHING
""", tableName, tempTableName);
int rowsUpserted;
int rowsInserted;
try (var stmt = conn.createStatement()) {
rowsUpserted = stmt.executeUpdate(upsertSql);
rowsInserted = stmt.executeUpdate(insertSql);
}
conn.commit();
log.info("Bulk upserted {} vessel tracks to {} (copied: {}, upserted: {})",
tracks.size(), tableName, rowsCopied, rowsUpserted);
log.info("Bulk inserted {} vessel tracks to {} (copied: {}, inserted: {})",
tracks.size(), tableName, rowsCopied, rowsInserted);
} catch (Exception e) {
conn.rollback();
@ -198,11 +189,10 @@ public class VesselTrackBulkWriter implements ItemWriter<List<VesselTrack>> {
private String formatTrackLine(VesselTrack track) {
StringBuilder sb = new StringBuilder();
sb.append(track.getSigSrcCd()).append('\t');
sb.append(track.getTargetId()).append('\t');
sb.append(track.getMmsi()).append('\t');
sb.append(Timestamp.valueOf(track.getTimeBucket())).append('\t');
// use track_geom
// track_geom
if (track.getTrackGeom() != null && !track.getTrackGeom().isEmpty()) {
sb.append(track.getTrackGeom());
} else {
@ -275,26 +265,18 @@ public class VesselTrackBulkWriter implements ItemWriter<List<VesselTrack>> {
private void fallbackInsert(List<VesselTrack> tracks, String tableName) {
String sql = String.format("""
INSERT INTO %s (
sig_src_cd, target_id, time_bucket, track_geom,
mmsi, time_bucket, track_geom,
distance_nm, avg_speed, max_speed, point_count,
start_position, end_position
) VALUES (?, ?, ?, public.ST_GeomFromText(?), ?, ?, ?, ?, ?::jsonb, ?::jsonb)
ON CONFLICT (sig_src_cd, target_id, time_bucket)
DO UPDATE SET
track_geom = EXCLUDED.track_geom,
distance_nm = EXCLUDED.distance_nm,
avg_speed = EXCLUDED.avg_speed,
max_speed = EXCLUDED.max_speed,
point_count = EXCLUDED.point_count,
start_position = EXCLUDED.start_position,
end_position = EXCLUDED.end_position
) VALUES (?, ?, public.ST_GeomFromText(?), ?, ?, ?, ?, ?::jsonb, ?::jsonb)
ON CONFLICT (mmsi, time_bucket) DO NOTHING
""", tableName);
int inserted = 0;
for (VesselTrack track : tracks) {
try {
queryJdbcTemplate.update(sql,
track.getSigSrcCd(),
track.getTargetId(),
track.getMmsi(),
Timestamp.valueOf(track.getTimeBucket()),
track.getTrackGeom(),
track.getDistanceNm(),
@ -304,12 +286,11 @@ public class VesselTrackBulkWriter implements ItemWriter<List<VesselTrack>> {
track.getStartPosition() != null ? formatPositionJson(track.getStartPosition()) : null,
track.getEndPosition() != null ? formatPositionJson(track.getEndPosition()) : null
);
log.debug("Upserted track for vessel: {} to {}",
track.getSigSrcCd() + "_" + track.getTargetId(), tableName);
inserted++;
} catch (Exception e) {
log.error("Failed to upsert track for vessel: {} to {}",
track.getSigSrcCd() + "_" + track.getTargetId(), tableName, e);
log.error("Failed to insert track for vessel: {} to {}", track.getMmsi(), tableName, e);
}
}
log.info("Fallback inserted {} / {} vessel tracks to {}", inserted, tracks.size(), tableName);
}
}
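Both the track writer above and the tile writer feed `COPY ... FROM STDIN` in the default text format, which requires escaping backslashes, tabs, and newlines and uses `\N` for SQL NULL. A self-contained sketch of that field formatting (class name illustrative):

```java
public final class CopyTextFormat {
    // Escape a single field for PostgreSQL COPY text format; \N marks SQL NULL
    public static String field(String value) {
        if (value == null) return "\\N";
        return value.replace("\\", "\\\\")
                    .replace("\t", "\\t")
                    .replace("\n", "\\n")
                    .replace("\r", "\\r");
    }

    // Join fields into one tab-delimited COPY line
    public static String line(String... fields) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < fields.length; i++) {
            if (i > 0) sb.append('\t');
            sb.append(field(fields[i]));
        }
        return sb.toString();
    }
}
```

Escaping the backslash first matters: doing it after the tab/newline replacements would double-escape the sequences just produced.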

파일 보기

@ -9,8 +9,6 @@ import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.web.bind.annotation.*;
import javax.sql.DataSource;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.time.LocalDateTime;
import java.time.ZoneId;
@ -32,8 +30,7 @@ public class DebugTimeController {
@GetMapping("/time-analysis")
@Operation(summary = "시간 데이터 분석", description = "특정 선박의 항적 데이터에서 시간 정보(time_bucket, Unix timestamp)를 상세 분석합니다. DB 서버 시간, 최근 데이터, 시간 차이 분석을 포함합니다")
public Map<String, Object> analyzeTimeData(
@Parameter(description = "신호 소스 코드 (기본: 000001)") @RequestParam(defaultValue = "000001") String sigSrcCd,
@Parameter(description = "선박 ID (기본: 440331240)") @RequestParam(defaultValue = "440331240") String targetId,
@Parameter(description = "MMSI (기본: 440331240)") @RequestParam(defaultValue = "440331240") String mmsi,
@Parameter(description = "시작 시간 (형식: yyyy-MM-ddTHH:mm:ss)") @RequestParam(defaultValue = "2025-08-26T08:02:59") String startTime,
@Parameter(description = "종료 시간 (형식: yyyy-MM-ddTHH:mm:ss)") @RequestParam(defaultValue = "2025-08-27T08:02:59") String endTime) {
@ -44,8 +41,7 @@ public class DebugTimeController {
LocalDateTime end = LocalDateTime.parse(endTime);
result.put("requestInfo", Map.of(
"sigSrcCd", sigSrcCd,
"targetId", targetId,
"mmsi", mmsi,
"startTime", startTime,
"endTime", endTime,
"startTimestamp", start.atZone(ZoneId.of("Asia/Seoul")).toEpochSecond(),
@ -73,7 +69,7 @@ public class DebugTimeController {
avg_speed,
point_count
FROM signal.t_vessel_tracks_5min
WHERE sig_src_cd = ? AND target_id = ?
WHERE mmsi = ?
AND time_bucket BETWEEN ? AND ?
ORDER BY time_bucket
LIMIT 10
@ -115,7 +111,7 @@ public class DebugTimeController {
return row;
},
sigSrcCd, targetId, Timestamp.valueOf(start), Timestamp.valueOf(end)
mmsi, Timestamp.valueOf(start), Timestamp.valueOf(end)
);
result.put("queryResults", dataRows);
@ -127,7 +123,7 @@ public class DebugTimeController {
EXTRACT(epoch FROM time_bucket) as time_bucket_unix,
substring(public.ST_AsText(track_geom), 1, 100) as track_sample
FROM signal.t_vessel_tracks_5min
WHERE sig_src_cd = ? AND target_id = ?
WHERE mmsi = ?
ORDER BY time_bucket DESC
LIMIT 5
""";
@ -140,7 +136,7 @@ public class DebugTimeController {
row.put("track_sample", rs.getString("track_sample"));
return row;
},
sigSrcCd, targetId
mmsi
);
result.put("recentData", recentRows);

View File

@ -2,9 +2,9 @@ package gc.mda.signal_batch.domain.gis.cache;
import lombok.extern.slf4j.Slf4j;
import org.locationtech.jts.geom.Coordinate;
import org.locationtech.jts.geom.Geometry;
import org.locationtech.jts.geom.GeometryFactory;
import org.locationtech.jts.geom.Point;
import org.locationtech.jts.geom.Polygon;
import org.locationtech.jts.io.WKTReader;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.jdbc.core.JdbcTemplate;
@ -22,8 +22,8 @@ import java.util.stream.Collectors;
public class AreaBoundaryCache {
private final DataSource queryDataSource;
private final Map<String, Polygon> areaPolygons = new ConcurrentHashMap<>();
private final Map<Integer, Polygon> haeguPolygons = new ConcurrentHashMap<>();
private final Map<String, Geometry> areaPolygons = new ConcurrentHashMap<>();
private final Map<Integer, Geometry> haeguPolygons = new ConcurrentHashMap<>();
private final GeometryFactory geometryFactory = new GeometryFactory();
private final WKTReader wktReader = new WKTReader(geometryFactory);
@ -52,8 +52,8 @@ public class AreaBoundaryCache {
String areaId = (String) area.get("area_id");
String wkt = (String) area.get("wkt");
try {
Polygon polygon = (Polygon) wktReader.read(wkt);
areaPolygons.put(areaId, polygon);
Geometry geom = wktReader.read(wkt);
areaPolygons.put(areaId, geom);
} catch (Exception e) {
log.warn("Failed to parse geometry for area {}: {}", areaId, e.getMessage());
}
@ -80,8 +80,8 @@ public class AreaBoundaryCache {
Integer haeguNo = (Integer) haegu.get("haegu_no");
String wkt = (String) haegu.get("wkt");
try {
Polygon polygon = (Polygon) wktReader.read(wkt);
haeguPolygons.put(haeguNo, polygon);
Geometry geom = wktReader.read(wkt);
haeguPolygons.put(haeguNo, geom);
} catch (Exception e) {
log.warn("Failed to parse geometry for haegu {}: {}", haeguNo, e.getMessage());
}
@ -115,20 +115,20 @@ public class AreaBoundaryCache {
// Check whether a point falls inside a specific area
public boolean isPointInArea(double lat, double lon, String areaId) {
Polygon polygon = areaPolygons.get(areaId);
if (polygon == null) return false;
Geometry geom = areaPolygons.get(areaId);
if (geom == null) return false;
Point point = geometryFactory.createPoint(new Coordinate(lon, lat));
return polygon.contains(point);
return geom.contains(point);
}
// Check whether a point falls inside a specific haegu
public boolean isPointInHaegu(double lat, double lon, Integer haeguNo) {
Polygon polygon = haeguPolygons.get(haeguNo);
if (polygon == null) return false;
Geometry geom = haeguPolygons.get(haeguNo);
if (geom == null) return false;
Point point = geometryFactory.createPoint(new Coordinate(lon, lat));
return polygon.contains(point);
return geom.contains(point);
}
// Refresh the cache on job execution

View File
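The change above widens the cache value type from `Polygon` to `Geometry` so that MULTIPOLYGON areas no longer blow up the `(Polygon) wktReader.read(wkt)` cast. A stdlib-only sketch of why the supertype works (JTS types are mocked here as a tiny hypothetical hierarchy; in JTS itself, `MultiPolygon` is a `Geometry` but not a `Polygon`):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class GeometryCacheSketch {
    // Stand-ins for the JTS hierarchy: a MultiPolygon IS a Geometry but NOT a Polygon,
    // so a Map<String, Polygon> cache fails with ClassCastException on MULTIPOLYGON rows.
    interface Geometry { boolean contains(double lon, double lat); }

    record Polygon(double minLon, double minLat, double maxLon, double maxLat) implements Geometry {
        @Override public boolean contains(double lon, double lat) {
            return lon >= minLon && lon <= maxLon && lat >= minLat && lat <= maxLat;
        }
    }

    record MultiPolygon(List<Polygon> parts) implements Geometry {
        @Override public boolean contains(double lon, double lat) {
            return parts.stream().anyMatch(p -> p.contains(lon, lat)); // inside any part
        }
    }

    // Caching against the supertype accepts both POLYGON and MULTIPOLYGON areas.
    static final Map<String, Geometry> areaGeoms = new ConcurrentHashMap<>();

    public static void main(String[] args) {
        areaGeoms.put("A1", new MultiPolygon(List.of(
                new Polygon(126.0, 34.0, 127.0, 35.0),
                new Polygon(128.0, 36.0, 129.0, 37.0))));
        Geometry geom = areaGeoms.get("A1");
        System.out.println(geom.contains(128.5, 36.5)); // point in the second part
    }
}
```

Point-in-area checks only need `Geometry.contains`, which every subtype implements, so nothing downstream required the narrower `Polygon` type.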

@ -130,7 +130,7 @@ public class AreaSearchController {
**접촉 판정 조건:**
- 선박 모두 폴리곤 **내부** 있을 때만 접촉으로 간주
- 대상: sigSrcCd 필터 (기본 "000001") 선박끼리만 비교
- 대상: AIS 수집 선박끼리만 비교
- 접촉 구간의 **평균 거리** <= maxContactDistanceMeters
- 접촉 지속 시간 >= minContactDurationMinutes

View File

@ -47,10 +47,6 @@ public class VesselContactRequest {
@Schema(description = "최대 접촉 판정 거리 (미터, 50~5000)", example = "1000", requiredMode = Schema.RequiredMode.REQUIRED)
private Double maxContactDistanceMeters;
@Schema(description = "대상 선박 신호소스 코드 (기본: 000001)", example = "000001", defaultValue = "000001")
@Builder.Default
private String sigSrcCd = "000001";
@Data
@Builder
@NoArgsConstructor

View File

@ -79,7 +79,7 @@ public class VesselContactResponse {
@Schema(description = "접촉 선박 개별 정보")
public static class VesselContactInfo {
@Schema(description = "선박 고유 ID (sigSrcCd_targetId)", example = "000001_440113620")
@Schema(description = "선박 고유 ID (MMSI)", example = "440113620")
private String vesselId;
@Schema(description = "선박명", example = "SAM SUNG 2HO")
@ -94,9 +94,6 @@ public class VesselContactResponse {
@Schema(description = "국적 MID 코드 (MMSI 앞 3자리)", example = "440")
private String nationalCode;
@Schema(description = "통합선박 ID", example = "440113620___440113620_")
private String integrationTargetId;
// Polygon dwell info
@Schema(description = "폴리곤 내 첫 시각 (Unix 초)", example = "1738360000")
private Long insidePolygonStartTs;
@ -145,7 +142,7 @@ public class VesselContactResponse {
@Schema(description = "접촉에 관련된 고유 선박 수", example = "5")
private Integer totalVesselsInvolved;
@Schema(description = "sigSrcCd 필터 후 폴리곤 내 전체 선박 수", example = "42")
@Schema(description = "폴리곤 내 전체 선박 수", example = "42")
private Integer totalVesselsInPolygon;
@Schema(description = "처리 소요 시간 (ms)", example = "2340")

View File
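With the identifier change above, the national code is derived directly from the MMSI: its first three digits are the ITU Maritime Identification Digits (MID), e.g. 440 for South Korea. A small sketch of the length-guarded extraction used across the refactor (class and method names are illustrative):

```java
public class MidUtil {
    /** First 3 digits of the MMSI (ITU MID), or null when absent or too short. */
    static String nationalCode(String mmsi) {
        return (mmsi != null && mmsi.length() >= 3) ? mmsi.substring(0, 3) : null;
    }

    public static void main(String[] args) {
        System.out.println(nationalCode("440113620")); // 440
        System.out.println(nationalCode("44"));        // null
    }
}
```

The guard matters because MMSI now arrives as a free-form VARCHAR rather than a validated composite key.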

@ -262,13 +262,10 @@ public class AreaSearchService {
merged.put(entry.getKey(), CompactVesselTrack.builder()
.vesselId(first.getVesselId())
.sigSrcCd(first.getSigSrcCd())
.targetId(first.getTargetId())
.nationalCode(first.getNationalCode())
.shipName(first.getShipName())
.shipType(first.getShipType())
.shipKindCode(first.getShipKindCode())
.integrationTargetId(first.getIntegrationTargetId())
.geometry(geo)
.timestamps(ts)
.speeds(sp)

View File
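The merge step above folds multiple track segments of one vessel into a single `CompactVesselTrack`, concatenating their point arrays in time order. A stdlib sketch of that group-and-concatenate shape (the `Segment` record is a hypothetical simplification of the real DTO):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class TrackMergeSketch {
    // Hypothetical simplified segment: one vessel key plus its ordered points.
    record Segment(String mmsi, List<String> timestamps) {}

    /** Groups segments by vessel and concatenates their point lists in input order. */
    static Map<String, List<String>> mergeByVessel(List<Segment> segments) {
        return segments.stream().collect(Collectors.groupingBy(
                Segment::mmsi,
                LinkedHashMap::new,
                Collectors.flatMapping(s -> s.timestamps().stream(), Collectors.toList())));
    }

    public static void main(String[] args) {
        var merged = mergeByVessel(List.of(
                new Segment("440113620", List.of("t1", "t2")),
                new Segment("440113620", List.of("t3"))));
        System.out.println(merged); // {440113620=[t1, t2, t3]}
    }
}
```

Because segments are queried `ORDER BY time_bucket`, simple concatenation preserves chronological order without a re-sort per vessel.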

@ -5,11 +5,7 @@ import gc.mda.signal_batch.domain.vessel.dto.TrackResponse;
import gc.mda.signal_batch.domain.vessel.dto.VesselStatsResponse;
import gc.mda.signal_batch.domain.vessel.dto.VesselTracksRequest;
import gc.mda.signal_batch.domain.vessel.dto.CompactVesselTrack;
import gc.mda.signal_batch.domain.vessel.dto.IntegrationVessel;
import gc.mda.signal_batch.domain.vessel.service.IntegrationVesselService;
import gc.mda.signal_batch.global.util.IntegrationSignalConstants;
import gc.mda.signal_batch.global.util.NationalCodeUtil;
import gc.mda.signal_batch.global.util.ShipKindCodeConverter;
import gc.mda.signal_batch.global.util.SignalKindCode;
import gc.mda.signal_batch.global.util.TrackSimplificationUtils;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Qualifier;
@ -23,7 +19,6 @@ import java.sql.Timestamp;
import java.time.Duration;
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
@ -38,12 +33,9 @@ import java.util.stream.Collectors;
public class GisService {
private final DataSource queryDataSource;
private final IntegrationVesselService integrationVesselService;
public GisService(@Qualifier("queryDataSource") DataSource queryDataSource,
IntegrationVesselService integrationVesselService) {
public GisService(@Qualifier("queryDataSource") DataSource queryDataSource) {
this.queryDataSource = queryDataSource;
this.integrationVesselService = integrationVesselService;
}
public List<GisBoundaryResponse> getHaeguBoundaries() {
@ -71,7 +63,7 @@ public class GisService {
String sql = """
SELECT haegu_no,
COUNT(DISTINCT CONCAT(sig_src_cd, '_', target_id)) as vessel_count,
COUNT(DISTINCT mmsi) as vessel_count,
COALESCE(SUM(distance_nm), 0) as total_distance,
COALESCE(AVG(avg_speed), 0) as avg_speed,
COUNT(*) as active_tracks
@ -124,7 +116,7 @@ public class GisService {
String sql = """
SELECT area_id,
COUNT(DISTINCT CONCAT(sig_src_cd, '_', target_id)) as vessel_count,
COUNT(DISTINCT mmsi) as vessel_count,
COALESCE(SUM(distance_nm), 0) as total_distance,
COALESCE(AVG(avg_speed), 0) as avg_speed,
COUNT(*) as active_tracks
@ -156,70 +148,60 @@ public class GisService {
LocalDateTime now = LocalDateTime.now();
LocalDateTime startTime = now.minusMinutes(minutes);
// For ranges over one hour, combine multiple tables
if (minutes > 60) {
// Top of the current hour
LocalDateTime currentHour = now.withMinute(0).withSecond(0).withNano(0);
if (minutes <= 1440) { // 24 hours or less
// 1. Fetch past data from the hourly table
if (minutes <= 1440) {
String hourlySql = """
SELECT DISTINCT t.sig_src_cd, t.target_id, t.time_bucket,
SELECT DISTINCT t.mmsi, t.time_bucket,
public.ST_AsText(t.track_geom) as track_geom,
t.distance_nm, t.avg_speed, t.max_speed, t.point_count
FROM signal.t_vessel_tracks_hourly t
WHERE EXISTS (
SELECT 1 FROM signal.t_grid_vessel_tracks g
WHERE g.sig_src_cd = t.sig_src_cd
AND g.target_id = t.target_id
WHERE g.mmsi = t.mmsi
AND g.haegu_no = %d
AND g.time_bucket >= '%s'
)
AND t.time_bucket >= '%s'
AND t.time_bucket < '%s'
ORDER BY t.sig_src_cd, t.target_id, t.time_bucket
ORDER BY t.mmsi, t.time_bucket
""".formatted(haeguNo, startTime, startTime, currentHour);
allTracks.addAll(jdbcTemplate.query(hourlySql, this::mapTrackResponse));
} else {
// Use the daily table (to be implemented)
}
// 2. Fetch recent, not-yet-aggregated data from the 5min table
String recentSql = """
SELECT DISTINCT t.sig_src_cd, t.target_id, t.time_bucket,
SELECT DISTINCT t.mmsi, t.time_bucket,
public.ST_AsText(t.track_geom) as track_geom,
t.distance_nm, t.avg_speed, t.max_speed, t.point_count
FROM signal.t_vessel_tracks_5min t
WHERE EXISTS (
SELECT 1 FROM signal.t_grid_vessel_tracks g
WHERE g.sig_src_cd = t.sig_src_cd
AND g.target_id = t.target_id
WHERE g.mmsi = t.mmsi
AND g.haegu_no = %d
AND g.time_bucket >= '%s'
)
AND t.time_bucket >= '%s'
ORDER BY t.sig_src_cd, t.target_id, t.time_bucket
ORDER BY t.mmsi, t.time_bucket
""".formatted(haeguNo, currentHour, currentHour);
allTracks.addAll(jdbcTemplate.query(recentSql, this::mapTrackResponse));
} else {
// For one hour or less, use only the 5-minute table
String sql = """
SELECT DISTINCT t.sig_src_cd, t.target_id, t.time_bucket,
SELECT DISTINCT t.mmsi, t.time_bucket,
public.ST_AsText(t.track_geom) as track_geom,
t.distance_nm, t.avg_speed, t.max_speed, t.point_count
FROM signal.t_vessel_tracks_5min t
WHERE EXISTS (
SELECT 1 FROM signal.t_grid_vessel_tracks g
WHERE g.sig_src_cd = t.sig_src_cd
AND g.target_id = t.target_id
WHERE g.mmsi = t.mmsi
AND g.haegu_no = %d
AND g.time_bucket >= NOW() - INTERVAL '%d minutes'
)
AND t.time_bucket >= NOW() - INTERVAL '%d minutes'
ORDER BY t.sig_src_cd, t.target_id, t.time_bucket
ORDER BY t.mmsi, t.time_bucket
""".formatted(haeguNo, minutes, minutes);
allTracks = jdbcTemplate.query(sql, this::mapTrackResponse);
@ -233,8 +215,7 @@ public class GisService {
private TrackResponse mapTrackResponse(ResultSet rs, int rowNum) throws SQLException {
return TrackResponse.builder()
.sigSrcCd(rs.getString("sig_src_cd"))
.targetId(rs.getString("target_id"))
.mmsi(rs.getString("mmsi"))
.timeBucket(rs.getObject("time_bucket", LocalDateTime.class))
.trackGeom(rs.getString("track_geom"))
.distanceNm(rs.getBigDecimal("distance_nm"))
@ -251,70 +232,60 @@ public class GisService {
LocalDateTime now = LocalDateTime.now();
LocalDateTime startTime = now.minusMinutes(minutes);
// For ranges over one hour, combine multiple tables
if (minutes > 60) {
// Top of the current hour
LocalDateTime currentHour = now.withMinute(0).withSecond(0).withNano(0);
if (minutes <= 1440) { // 24 hours or less
// 1. Fetch past data from the hourly table
if (minutes <= 1440) {
String hourlySql = """
SELECT DISTINCT t.sig_src_cd, t.target_id, t.time_bucket,
SELECT DISTINCT t.mmsi, t.time_bucket,
public.ST_AsText(t.track_geom) as track_geom,
t.distance_nm, t.avg_speed, t.max_speed, t.point_count
FROM signal.t_vessel_tracks_hourly t
WHERE EXISTS (
SELECT 1 FROM signal.t_area_vessel_tracks a
WHERE a.sig_src_cd = t.sig_src_cd
AND a.target_id = t.target_id
WHERE a.mmsi = t.mmsi
AND a.area_id = '%s'
AND a.time_bucket >= '%s'
)
AND t.time_bucket >= '%s'
AND t.time_bucket < '%s'
ORDER BY t.sig_src_cd, t.target_id, t.time_bucket
ORDER BY t.mmsi, t.time_bucket
""".formatted(areaId, startTime, startTime, currentHour);
allTracks.addAll(jdbcTemplate.query(hourlySql, this::mapTrackResponse));
} else {
// Use the daily table (to be implemented)
}
// 2. Fetch recent, not-yet-aggregated data from the 5min table
String recentSql = """
SELECT DISTINCT t.sig_src_cd, t.target_id, t.time_bucket,
SELECT DISTINCT t.mmsi, t.time_bucket,
public.ST_AsText(t.track_geom) as track_geom,
t.distance_nm, t.avg_speed, t.max_speed, t.point_count
FROM signal.t_vessel_tracks_5min t
WHERE EXISTS (
SELECT 1 FROM signal.t_area_vessel_tracks a
WHERE a.sig_src_cd = t.sig_src_cd
AND a.target_id = t.target_id
WHERE a.mmsi = t.mmsi
AND a.area_id = '%s'
AND a.time_bucket >= '%s'
)
AND t.time_bucket >= '%s'
ORDER BY t.sig_src_cd, t.target_id, t.time_bucket
ORDER BY t.mmsi, t.time_bucket
""".formatted(areaId, currentHour, currentHour);
allTracks.addAll(jdbcTemplate.query(recentSql, this::mapTrackResponse));
} else {
// For one hour or less, use only the 5-minute table
String sql = """
SELECT DISTINCT t.sig_src_cd, t.target_id, t.time_bucket,
SELECT DISTINCT t.mmsi, t.time_bucket,
public.ST_AsText(t.track_geom) as track_geom,
t.distance_nm, t.avg_speed, t.max_speed, t.point_count
FROM signal.t_vessel_tracks_5min t
WHERE EXISTS (
SELECT 1 FROM signal.t_area_vessel_tracks a
WHERE a.sig_src_cd = t.sig_src_cd
AND a.target_id = t.target_id
WHERE a.mmsi = t.mmsi
AND a.area_id = '%s'
AND a.time_bucket >= NOW() - INTERVAL '%d minutes'
)
AND t.time_bucket >= NOW() - INTERVAL '%d minutes'
ORDER BY t.sig_src_cd, t.target_id, t.time_bucket
ORDER BY t.mmsi, t.time_bucket
""".formatted(areaId, minutes, minutes);
allTracks = jdbcTemplate.query(sql, this::mapTrackResponse);
@ -328,12 +299,6 @@ public class GisService {
/**
* Per-vessel track lookup (hierarchical fallback query + simplification)
*
* Query strategy:
* 1. Query higher-level tables first, in order (daily → hourly → 5min)
* 2. Detect missing ranges in each table
* 3. Backfill missing ranges from the lower-level table, simplified to the higher level
* 4. Sort the whole result chronologically
*/
public List<CompactVesselTrack> getVesselTracks(VesselTracksRequest request) {
JdbcTemplate jdbcTemplate = new JdbcTemplate(queryDataSource);
@ -342,33 +307,26 @@ public class GisService {
LocalDateTime startTime = request.getStartTime();
LocalDateTime endTime = request.getEndTime();
for (VesselTracksRequest.VesselIdentifier vessel : request.getVessels()) {
for (String mmsi : request.getVessels()) {
List<TrackResponse> tracks = queryVesselTracksWithFallback(
jdbcTemplate, vessel.getSigSrcCd(), vessel.getTargetId(), startTime, endTime);
jdbcTemplate, mmsi, startTime, endTime);
// Sort all tracks by time_bucket to ensure proper ordering
tracks.sort((t1, t2) -> t1.getTimeBucket().compareTo(t2.getTimeBucket()));
if (!tracks.isEmpty()) {
CompactVesselTrack compactTrack = buildCompactVesselTrack(vessel, tracks);
CompactVesselTrack compactTrack = buildCompactVesselTrack(mmsi, tracks);
results.add(compactTrack);
}
}
// Apply integrated-vessel filtering (when isIntegration = "1" and the feature is enabled)
if ("1".equals(request.getIsIntegration()) && integrationVesselService.isEnabled()) {
results = filterByIntegration(results);
}
return results;
}
/**
* Hierarchical fallback query logic
* Ranges missing from a higher-level table are backfilled from the lower-level table
*/
private List<TrackResponse> queryVesselTracksWithFallback(
JdbcTemplate jdbcTemplate, String sigSrcCd, String targetId,
JdbcTemplate jdbcTemplate, String mmsi,
LocalDateTime startTime, LocalDateTime endTime) {
List<TrackResponse> allTracks = new ArrayList<>();
@ -376,7 +334,6 @@ public class GisService {
long hours = duration.toHours();
LocalDateTime now = LocalDateTime.now();
// Safety margin for batch completion (the hourly batch starts at minute 10 and takes ~5 minutes)
LocalDateTime safeHourlyBoundary = now.withMinute(0).withSecond(0).withNano(0);
if (now.getMinute() < 15) {
safeHourlyBoundary = safeHourlyBoundary.minusHours(1);
@ -393,15 +350,15 @@ public class GisService {
if (!dailyEnd.isBefore(dailyStart)) {
List<TrackResponse> dailyTracks = queryDailyTracks(
jdbcTemplate, sigSrcCd, targetId, dailyStart, dailyEnd);
jdbcTemplate, mmsi, dailyStart, dailyEnd);
for (TrackResponse track : dailyTracks) {
coveredDays.add(track.getTimeBucket().toLocalDate());
}
allTracks.addAll(dailyTracks);
log.debug("[FALLBACK] Daily: {} days covered for {}_{}",
coveredDays.size(), sigSrcCd, targetId);
log.debug("[FALLBACK] Daily: {} days covered for {}",
coveredDays.size(), mmsi);
}
}
@ -418,9 +375,8 @@ public class GisService {
LocalDateTime dayStart = missingDay.atStartOfDay();
LocalDateTime dayEnd = missingDay.plusDays(1).atStartOfDay();
// Backfill from hourly (simplified to daily resolution)
List<TrackResponse> fallbackTracks = queryHourlyTracks(
jdbcTemplate, sigSrcCd, targetId, dayStart, dayEnd);
jdbcTemplate, mmsi, dayStart, dayEnd);
for (TrackResponse track : fallbackTracks) {
track.setTrackGeom(TrackSimplificationUtils.simplifyDailyTrack(track.getTrackGeom()));
@ -435,21 +391,21 @@ public class GisService {
// === Step 3: query the hourly table ===
Set<LocalDateTime> coveredHours = new HashSet<>();
LocalDateTime hourlyStart = hours >= 24
? safeDailyBoundary.plusDays(1) // from the day after the daily boundary
? safeDailyBoundary.plusDays(1)
: startTime.withMinute(0).withSecond(0).withNano(0);
LocalDateTime hourlyEnd = endTime.isBefore(safeHourlyBoundary) ? endTime : safeHourlyBoundary;
if (hours > 1 && hourlyStart.isBefore(hourlyEnd)) {
List<TrackResponse> hourlyTracks = queryHourlyTracks(
jdbcTemplate, sigSrcCd, targetId, hourlyStart, hourlyEnd);
jdbcTemplate, mmsi, hourlyStart, hourlyEnd);
for (TrackResponse track : hourlyTracks) {
coveredHours.add(track.getTimeBucket().withMinute(0).withSecond(0).withNano(0));
}
allTracks.addAll(hourlyTracks);
log.debug("[FALLBACK] Hourly: {} hours covered for {}_{}",
coveredHours.size(), sigSrcCd, targetId);
log.debug("[FALLBACK] Hourly: {} hours covered for {}",
coveredHours.size(), mmsi);
}
// === Step 4: backfill missing hourly ranges from 5min ===
@ -460,9 +416,8 @@ public class GisService {
LocalDateTime hourStart = missingHour;
LocalDateTime hourEnd = missingHour.plusHours(1);
// Backfill from 5min (simplified to hourly resolution)
List<TrackResponse> fallbackTracks = query5minTracks(
jdbcTemplate, sigSrcCd, targetId, hourStart, hourEnd);
jdbcTemplate, mmsi, hourStart, hourEnd);
for (TrackResponse track : fallbackTracks) {
track.setTrackGeom(TrackSimplificationUtils.simplifyHourlyTrack(track.getTrackGeom()));
@ -478,11 +433,11 @@ public class GisService {
LocalDateTime fiveMinStart = safeHourlyBoundary.isAfter(startTime) ? safeHourlyBoundary : startTime;
if (endTime.isAfter(fiveMinStart)) {
List<TrackResponse> fiveMinTracks = query5minTracks(
jdbcTemplate, sigSrcCd, targetId, fiveMinStart, endTime);
jdbcTemplate, mmsi, fiveMinStart, endTime);
allTracks.addAll(fiveMinTracks);
log.debug("[FALLBACK] 5min: {} segments for {}_{} ({} ~ {})",
fiveMinTracks.size(), sigSrcCd, targetId, fiveMinStart, endTime);
log.debug("[FALLBACK] 5min: {} segments for {} ({} ~ {})",
fiveMinTracks.size(), mmsi, fiveMinStart, endTime);
}
return allTracks;
@ -492,22 +447,22 @@ public class GisService {
* Query the daily table
*/
private List<TrackResponse> queryDailyTracks(
JdbcTemplate jdbcTemplate, String sigSrcCd, String targetId,
JdbcTemplate jdbcTemplate, String mmsi,
LocalDate startDate, LocalDate endDate) {
String sql = """
SELECT sig_src_cd, target_id,
SELECT mmsi,
time_bucket::timestamp as time_bucket,
public.ST_AsText(track_geom) as track_geom,
distance_nm, avg_speed, max_speed, point_count
FROM signal.t_vessel_tracks_daily
WHERE sig_src_cd = ? AND target_id = ?
WHERE mmsi = ?
AND time_bucket BETWEEN ?::date AND ?::date
ORDER BY time_bucket
""";
return jdbcTemplate.query(sql, this::mapTrackResponse,
sigSrcCd, targetId,
mmsi,
java.sql.Date.valueOf(startDate), java.sql.Date.valueOf(endDate));
}
@ -515,21 +470,21 @@ public class GisService {
* Query the hourly table
*/
private List<TrackResponse> queryHourlyTracks(
JdbcTemplate jdbcTemplate, String sigSrcCd, String targetId,
JdbcTemplate jdbcTemplate, String mmsi,
LocalDateTime startTime, LocalDateTime endTime) {
String sql = """
SELECT sig_src_cd, target_id, time_bucket,
SELECT mmsi, time_bucket,
public.ST_AsText(track_geom) as track_geom,
distance_nm, avg_speed, max_speed, point_count
FROM signal.t_vessel_tracks_hourly
WHERE sig_src_cd = ? AND target_id = ?
WHERE mmsi = ?
AND time_bucket >= ? AND time_bucket < ?
ORDER BY time_bucket
""";
return jdbcTemplate.query(sql, this::mapTrackResponse,
sigSrcCd, targetId,
mmsi,
Timestamp.valueOf(startTime), Timestamp.valueOf(endTime));
}
@ -537,21 +492,21 @@ public class GisService {
* Query the 5min table
*/
private List<TrackResponse> query5minTracks(
JdbcTemplate jdbcTemplate, String sigSrcCd, String targetId,
JdbcTemplate jdbcTemplate, String mmsi,
LocalDateTime startTime, LocalDateTime endTime) {
String sql = """
SELECT sig_src_cd, target_id, time_bucket,
SELECT mmsi, time_bucket,
public.ST_AsText(track_geom) as track_geom,
distance_nm, avg_speed, max_speed, point_count
FROM signal.t_vessel_tracks_5min
WHERE sig_src_cd = ? AND target_id = ?
WHERE mmsi = ?
AND time_bucket >= ? AND time_bucket < ?
ORDER BY time_bucket
""";
return jdbcTemplate.query(sql, this::mapTrackResponse,
sigSrcCd, targetId,
mmsi,
Timestamp.valueOf(startTime), Timestamp.valueOf(endTime));
}
@ -591,108 +546,21 @@ public class GisService {
return missingHours;
}
/**
* Integrated-vessel-based filtering (for the REST API)
*/
private List<CompactVesselTrack> filterByIntegration(List<CompactVesselTrack> tracks) {
if (tracks == null || tracks.isEmpty()) {
return tracks;
}
// 1. Look up integrated-vessel info for all tracks (from cache)
Map<String, IntegrationVessel> vesselIntegrations = new HashMap<>();
for (CompactVesselTrack track : tracks) {
String key = track.getSigSrcCd() + "_" + track.getTargetId();
if (!vesselIntegrations.containsKey(key)) {
IntegrationVessel integration = integrationVesselService.findByVessel(
track.getSigSrcCd(), track.getTargetId()
);
vesselIntegrations.put(key, integration);
}
}
// 2. Group by integrated vessel
Map<Long, List<CompactVesselTrack>> groupedByIntegration = new HashMap<>();
Map<Long, IntegrationVessel> integrationMap = new HashMap<>();
long tempSeq = -1;
for (CompactVesselTrack track : tracks) {
String key = track.getSigSrcCd() + "_" + track.getTargetId();
IntegrationVessel integration = vesselIntegrations.get(key);
Long seq;
if (integration != null) {
seq = integration.getIntgrSeq();
integrationMap.putIfAbsent(seq, integration);
} else {
seq = tempSeq--;
}
groupedByIntegration.computeIfAbsent(seq, k -> new ArrayList<>()).add(track);
}
// 3. Select only the highest-priority signal from each group
List<CompactVesselTrack> result = new ArrayList<>();
for (Map.Entry<Long, List<CompactVesselTrack>> entry : groupedByIntegration.entrySet()) {
Long seq = entry.getKey();
List<CompactVesselTrack> groupTracks = entry.getValue();
if (seq < 0) {
// Standalone vessel with no integration info
CompactVesselTrack firstTrack = groupTracks.get(0);
String soloIntegrationId = IntegrationSignalConstants.generateSoloIntegrationId(
firstTrack.getSigSrcCd(),
firstTrack.getTargetId()
);
groupTracks.forEach(t -> t.setIntegrationTargetId(soloIntegrationId));
result.addAll(groupTracks);
} else {
// Integrated vessel exists: select the highest-priority signal
IntegrationVessel integration = integrationMap.get(seq);
java.util.Set<String> existingSigSrcCds = groupTracks.stream()
.map(CompactVesselTrack::getSigSrcCd)
.collect(java.util.stream.Collectors.toSet());
String selectedSigSrcCd = integrationVesselService.selectHighestPriorityFromExisting(existingSigSrcCds);
List<CompactVesselTrack> selectedTracks = groupTracks.stream()
.filter(t -> t.getSigSrcCd().equals(selectedSigSrcCd))
.collect(java.util.stream.Collectors.toList());
String integrationId = integration.generateIntegrationId();
selectedTracks.forEach(t -> t.setIntegrationTargetId(integrationId));
result.addAll(selectedTracks);
}
}
log.info("[INTEGRATION_FILTER] REST API - Filtered {} tracks to {} tracks", tracks.size(), result.size());
return result;
}
private CompactVesselTrack buildCompactVesselTrack(
VesselTracksRequest.VesselIdentifier vessel,
String mmsi,
List<TrackResponse> tracks) {
String vesselId = vessel.getSigSrcCd() + "_" + vessel.getTargetId();
List<double[]> geometry = new ArrayList<>();
List<String> timestamps = new ArrayList<>();
List<Double> speeds = new ArrayList<>();
double totalDistance = 0;
double maxSpeed = 0;
int totalPoints = 0;
// WKTReader reader = new WKTReader();
for (TrackResponse track : tracks) {
if (track.getTrackGeom() != null && !track.getTrackGeom().isEmpty()) {
try {
// Parse LineStringM
String wkt = track.getTrackGeom();
if (wkt.startsWith("LINESTRING M")) {
// Extract coordinate data from WKT
String coordsPart = wkt.substring("LINESTRING M(".length() + 1, wkt.length() - 1);
String[] points = coordsPart.split(",");
@ -701,12 +569,11 @@ public class GisService {
if (parts.length >= 3) {
double lon = Double.parseDouble(parts[0]);
double lat = Double.parseDouble(parts[1]);
String timestamp = parts[2]; // Unix timestamp as string
String timestamp = parts[2];
geometry.add(new double[]{lon, lat});
timestamps.add(timestamp);
// Add SOG value if available (could be from track data)
if (track.getAvgSpeed() != null) {
speeds.add(track.getAvgSpeed().doubleValue());
} else {
@ -726,35 +593,23 @@ public class GisService {
if (track.getMaxSpeed() != null && track.getMaxSpeed().doubleValue() > maxSpeed) {
maxSpeed = track.getMaxSpeed().doubleValue();
}
if (track.getPointCount() != null) {
totalPoints += track.getPointCount();
}
}
// Calculate average speed
double avgSpeed = speeds.stream()
.filter(s -> s > 0)
.mapToDouble(Double::doubleValue)
.average()
.orElse(0.0);
// Get vessel info
Map<String, String> vesselInfo = getVesselInfo(vessel.getSigSrcCd(), vessel.getTargetId());
Map<String, String> vesselInfo = getVesselInfo(mmsi);
String shipName = vesselInfo.get("ship_name");
String shipType = vesselInfo.get("ship_type");
// Calculate nationalCode (same as WebSocket)
String nationalCode = NationalCodeUtil.calculateNationalCode(
vessel.getSigSrcCd(), vessel.getTargetId());
// Calculate shipKindCode (same as WebSocket - using name pattern matching for buoy/net detection)
String shipKindCode = ShipKindCodeConverter.getShipKindCodeWithNamePattern(
vessel.getSigSrcCd(), shipType, shipName, vessel.getTargetId());
String nationalCode = (mmsi != null && mmsi.length() >= 3) ? mmsi.substring(0, 3) : null;
String shipKindCode = SignalKindCode.resolve(shipType, null).getCode();
return CompactVesselTrack.builder()
.vesselId(vesselId)
.sigSrcCd(vessel.getSigSrcCd())
.targetId(vessel.getTargetId())
.vesselId(mmsi)
.nationalCode(nationalCode)
.geometry(geometry)
.timestamps(timestamps)
@ -769,17 +624,17 @@ public class GisService {
.build();
}
private Map<String, String> getVesselInfo(String sigSrcCd, String targetId) {
private Map<String, String> getVesselInfo(String mmsi) {
JdbcTemplate jdbcTemplate = new JdbcTemplate(queryDataSource);
try {
String sql = """
SELECT ship_nm as ship_name, ship_ty as ship_type
FROM signal.t_vessel_latest_position
WHERE sig_src_cd = ? AND target_id = ?
SELECT ship_nm as ship_name, vessel_type as ship_type
FROM signal.t_ais_position
WHERE mmsi = ?
LIMIT 1
""";
return jdbcTemplate.queryForMap(sql, sigSrcCd, targetId)
return jdbcTemplate.queryForMap(sql, mmsi)
.entrySet().stream()
.collect(Collectors.toMap(
Map.Entry::getKey,

View File
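The hierarchical fallback in `GisService` hinges on "safe boundaries": an hour's aggregate is only trusted once the hourly batch (starting at minute 10, roughly 5 minutes of runtime) has had time to finish, so before minute 15 the lookup falls back one extra hour. A stdlib sketch of that boundary computation under those scheduling assumptions:

```java
import java.time.LocalDateTime;

public class SafeBoundarySketch {
    /**
     * Latest hour whose hourly aggregate can be trusted. Before minute 15 the
     * current hour's batch may still be running, so step back one more hour.
     */
    static LocalDateTime safeHourlyBoundary(LocalDateTime now) {
        LocalDateTime boundary = now.withMinute(0).withSecond(0).withNano(0);
        return now.getMinute() < 15 ? boundary.minusHours(1) : boundary;
    }

    public static void main(String[] args) {
        System.out.println(safeHourlyBoundary(LocalDateTime.of(2026, 2, 19, 9, 5)));  // 2026-02-19T08:00
        System.out.println(safeHourlyBoundary(LocalDateTime.of(2026, 2, 19, 9, 40))); // 2026-02-19T09:00
    }
}
```

Everything after this boundary is served from the 5min table, which is why the 5min query starts at `max(safeHourlyBoundary, startTime)`.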

@ -3,10 +3,7 @@ package gc.mda.signal_batch.domain.gis.service;
import gc.mda.signal_batch.domain.vessel.dto.CompactVesselTrack;
import gc.mda.signal_batch.domain.vessel.dto.TrackResponse;
import gc.mda.signal_batch.domain.vessel.dto.VesselTracksRequest;
import gc.mda.signal_batch.domain.vessel.dto.IntegrationVessel;
import gc.mda.signal_batch.domain.vessel.service.IntegrationVesselService;
import gc.mda.signal_batch.global.exception.QueryTimeoutException;
import gc.mda.signal_batch.global.util.IntegrationSignalConstants;
import gc.mda.signal_batch.global.util.TrackConverter;
import gc.mda.signal_batch.global.websocket.service.ActiveQueryManager;
import gc.mda.signal_batch.global.websocket.service.CacheTrackSimplifier;
@ -29,7 +26,6 @@ import java.util.stream.Collectors;
* GIS service V2 - CompactVesselTrack-based responses
* Provides the same response structure as the WebSocket API
*
* Phase: REST V2 cache + load control + response size limits
* - Semaphore-based concurrency control (shared ActiveQueryManager)
* - POST /vessels: cache-first lookup via DailyTrackCacheManager
* - Two-stage simplification pipeline (standard simplification + enforced point budget)
@ -39,7 +35,6 @@ import java.util.stream.Collectors;
public class GisServiceV2 {
private final DataSource queryDataSource;
private final IntegrationVesselService integrationVesselService;
private final ActiveQueryManager activeQueryManager;
private final DailyTrackCacheManager dailyTrackCacheManager;
private final CacheTrackSimplifier cacheTrackSimplifier;
@ -56,13 +51,11 @@ public class GisServiceV2 {
private static final long VESSEL_CACHE_TTL = 3600_000; // 1 hour
public GisServiceV2(@Qualifier("queryDataSource") DataSource queryDataSource,
IntegrationVesselService integrationVesselService,
ActiveQueryManager activeQueryManager,
DailyTrackCacheManager dailyTrackCacheManager,
CacheTrackSimplifier cacheTrackSimplifier,
GisService gisService) {
this.queryDataSource = queryDataSource;
this.integrationVesselService = integrationVesselService;
this.activeQueryManager = activeQueryManager;
this.dailyTrackCacheManager = dailyTrackCacheManager;
this.cacheTrackSimplifier = cacheTrackSimplifier;
@ -71,7 +64,6 @@ public class GisServiceV2 {
/**
* Per-haegu vessel track lookup (V2 - returns CompactVesselTrack)
* Applies semaphore load control + the simplification pipeline
*/
public List<CompactVesselTrack> getHaeguTracks(Integer haeguNo, int minutes, boolean filterByIntegration) {
String queryId = "rest-haegu-" + haeguNo + "-" + UUID.randomUUID().toString().substring(0, 8);
@ -91,69 +83,61 @@ public class GisServiceV2 {
if (minutes <= 1440) {
String hourlySql = """
SELECT DISTINCT t.sig_src_cd, t.target_id, t.time_bucket,
SELECT DISTINCT t.mmsi, t.time_bucket,
public.ST_AsText(t.track_geom) as track_geom,
t.distance_nm, t.avg_speed, t.max_speed, t.point_count
FROM signal.t_vessel_tracks_hourly t
WHERE EXISTS (
SELECT 1 FROM signal.t_grid_vessel_tracks g
WHERE g.sig_src_cd = t.sig_src_cd
AND g.target_id = t.target_id
WHERE g.mmsi = t.mmsi
AND g.haegu_no = %d
AND g.time_bucket >= '%s'
)
AND t.time_bucket >= '%s'
AND t.time_bucket < '%s'
ORDER BY t.sig_src_cd, t.target_id, t.time_bucket
ORDER BY t.mmsi, t.time_bucket
""".formatted(haeguNo, startTime, startTime, currentHour);
rawTracks.addAll(jdbcTemplate.query(hourlySql, this::mapTrackResponse));
}
String recentSql = """
SELECT DISTINCT t.sig_src_cd, t.target_id, t.time_bucket,
SELECT DISTINCT t.mmsi, t.time_bucket,
public.ST_AsText(t.track_geom) as track_geom,
t.distance_nm, t.avg_speed, t.max_speed, t.point_count
FROM signal.t_vessel_tracks_5min t
WHERE EXISTS (
SELECT 1 FROM signal.t_grid_vessel_tracks g
WHERE g.sig_src_cd = t.sig_src_cd
AND g.target_id = t.target_id
WHERE g.mmsi = t.mmsi
AND g.haegu_no = %d
AND g.time_bucket >= '%s'
)
AND t.time_bucket >= '%s'
ORDER BY t.sig_src_cd, t.target_id, t.time_bucket
ORDER BY t.mmsi, t.time_bucket
""".formatted(haeguNo, startTime, currentHour);
rawTracks.addAll(jdbcTemplate.query(recentSql, this::mapTrackResponse));
} else {
String sql = """
SELECT DISTINCT t.sig_src_cd, t.target_id, t.time_bucket,
SELECT DISTINCT t.mmsi, t.time_bucket,
public.ST_AsText(t.track_geom) as track_geom,
t.distance_nm, t.avg_speed, t.max_speed, t.point_count
FROM signal.t_vessel_tracks_5min t
WHERE EXISTS (
SELECT 1 FROM signal.t_grid_vessel_tracks g
WHERE g.sig_src_cd = t.sig_src_cd
AND g.target_id = t.target_id
WHERE g.mmsi = t.mmsi
AND g.haegu_no = %d
AND g.time_bucket >= NOW() - INTERVAL '%d minutes'
)
AND t.time_bucket >= NOW() - INTERVAL '%d minutes'
ORDER BY t.sig_src_cd, t.target_id, t.time_bucket
ORDER BY t.mmsi, t.time_bucket
""".formatted(haeguNo, minutes, minutes);
rawTracks = jdbcTemplate.query(sql, this::mapTrackResponse);
}
List<CompactVesselTrack> result = TrackConverter.convert(rawTracks, this::getVesselInfo);
if (filterByIntegration && integrationVesselService.isEnabled()) {
result = filterByIntegration(result);
}
result = applySimplificationPipeline(result);
log.debug("V2 API: Fetched {} compact tracks for haegu {} in last {} minutes",
@@ -173,7 +157,6 @@ public class GisServiceV2 {
/**
 * Area-based vessel track query (V2 - returns CompactVesselTrack)
 * Applies Semaphore load control + simplification pipeline
*/
public List<CompactVesselTrack> getAreaTracks(String areaId, int minutes, boolean filterByIntegration) {
String queryId = "rest-area-" + areaId + "-" + UUID.randomUUID().toString().substring(0, 8);
@@ -193,69 +176,61 @@ public class GisServiceV2 {
if (minutes <= 1440) {
String hourlySql = """
-SELECT DISTINCT t.sig_src_cd, t.target_id, t.time_bucket,
+SELECT DISTINCT t.mmsi, t.time_bucket,
public.ST_AsText(t.track_geom) as track_geom,
t.distance_nm, t.avg_speed, t.max_speed, t.point_count
FROM signal.t_vessel_tracks_hourly t
WHERE EXISTS (
SELECT 1 FROM signal.t_area_vessel_tracks a
-WHERE a.sig_src_cd = t.sig_src_cd
-AND a.target_id = t.target_id
+WHERE a.mmsi = t.mmsi
AND a.area_id = '%s'
AND a.time_bucket >= '%s'
)
AND t.time_bucket >= '%s'
AND t.time_bucket < '%s'
-ORDER BY t.sig_src_cd, t.target_id, t.time_bucket
+ORDER BY t.mmsi, t.time_bucket
""".formatted(areaId, startTime, startTime, currentHour);
rawTracks.addAll(jdbcTemplate.query(hourlySql, this::mapTrackResponse));
}
String recentSql = """
-SELECT DISTINCT t.sig_src_cd, t.target_id, t.time_bucket,
+SELECT DISTINCT t.mmsi, t.time_bucket,
public.ST_AsText(t.track_geom) as track_geom,
t.distance_nm, t.avg_speed, t.max_speed, t.point_count
FROM signal.t_vessel_tracks_5min t
WHERE EXISTS (
SELECT 1 FROM signal.t_area_vessel_tracks a
-WHERE a.sig_src_cd = t.sig_src_cd
-AND a.target_id = t.target_id
+WHERE a.mmsi = t.mmsi
AND a.area_id = '%s'
AND a.time_bucket >= '%s'
)
AND t.time_bucket >= '%s'
-ORDER BY t.sig_src_cd, t.target_id, t.time_bucket
+ORDER BY t.mmsi, t.time_bucket
""".formatted(areaId, startTime, currentHour);
rawTracks.addAll(jdbcTemplate.query(recentSql, this::mapTrackResponse));
} else {
String sql = """
-SELECT DISTINCT t.sig_src_cd, t.target_id, t.time_bucket,
+SELECT DISTINCT t.mmsi, t.time_bucket,
public.ST_AsText(t.track_geom) as track_geom,
t.distance_nm, t.avg_speed, t.max_speed, t.point_count
FROM signal.t_vessel_tracks_5min t
WHERE EXISTS (
SELECT 1 FROM signal.t_area_vessel_tracks a
-WHERE a.sig_src_cd = t.sig_src_cd
-AND a.target_id = t.target_id
+WHERE a.mmsi = t.mmsi
AND a.area_id = '%s'
AND a.time_bucket >= NOW() - INTERVAL '%d minutes'
)
AND t.time_bucket >= NOW() - INTERVAL '%d minutes'
-ORDER BY t.sig_src_cd, t.target_id, t.time_bucket
+ORDER BY t.mmsi, t.time_bucket
""".formatted(areaId, minutes, minutes);
rawTracks = jdbcTemplate.query(sql, this::mapTrackResponse);
}
List<CompactVesselTrack> result = TrackConverter.convert(rawTracks, this::getVesselInfo);
if (filterByIntegration && integrationVesselService.isEnabled()) {
result = filterByIntegration(result);
}
result = applySimplificationPipeline(result);
log.debug("V2 API: Fetched {} compact tracks for area {} in last {} minutes",
@@ -275,7 +250,6 @@ public class GisServiceV2 {
/**
 * Per-vessel track query V2 (cache + Semaphore + simplification)
 * Cache-first lookup using DailyTrackCacheManager
*/
public List<CompactVesselTrack> getVesselTracksV2(VesselTracksRequest request) {
String queryId = "rest-vessels-" + UUID.randomUUID().toString().substring(0, 8);
@@ -292,7 +266,6 @@ public class GisServiceV2 {
result = queryWithCache(request);
} else {
// Cache disabled or not ready: delegate to the existing GisService
result = gisService.getVesselTracks(request);
}
@@ -306,7 +279,6 @@ public class GisServiceV2 {
} finally {
if (slotAcquired) {
activeQueryManager.releaseQuerySlot(queryId);
// Early reclamation of humongous regions (in G1GC, 8MB+ objects are reclaimed only by Mixed GC)
if (activeQueryManager.isHeapPressureHigh()) {
System.gc();
}
@@ -316,10 +288,6 @@ public class GisServiceV2 {
// Cache lookup logic
/**
 * Cache-first query using splitQueryRange
 * Checks cache presence in reverse from D-1, queries cache and DB separately, then merges
*/
private List<CompactVesselTrack> queryWithCache(VesselTracksRequest request) {
LocalDateTime startTime = request.getStartTime();
LocalDateTime endTime = request.getEndTime();
@@ -329,24 +297,20 @@ public class GisServiceV2 {
List<CompactVesselTrack> allTracks = new ArrayList<>();
// Build the set of requested vessel IDs
-Set<String> requestedVesselKeys = request.getVessels().stream()
-        .map(v -> v.getSigSrcCd() + "_" + v.getTargetId())
-        .collect(Collectors.toSet());
+Set<String> requestedMmsis = new HashSet<>(request.getVessels());
// 1. Query from cache (cached dates)
if (split.hasCachedData()) {
List<CompactVesselTrack> cachedTracks =
dailyTrackCacheManager.getCachedTracksMultipleDays(split.getCachedDates());
// Filter to requested vessels + defensive copy (protects cache originals, since simplify mutates in place)
int totalCachedCount = cachedTracks.size();
List<CompactVesselTrack> filteredCached = cachedTracks.stream()
-.filter(t -> requestedVesselKeys.contains(t.getSigSrcCd() + "_" + t.getTargetId()))
+.filter(t -> requestedMmsis.contains(t.getVesselId()))
.map(t -> t.toBuilder().build())
.collect(Collectors.toList());
-cachedTracks.clear(); // release memory immediately: cached reference list
+cachedTracks.clear();
allTracks.addAll(filteredCached);
log.debug("[CacheQuery] cached {} days -> {} tracks (filtered from {})",
@@ -360,7 +324,6 @@ public class GisServiceV2 {
.startTime(dbRange.getStart())
.endTime(dbRange.getEnd())
.vessels(request.getVessels())
-.isIntegration(request.getIsIntegration())
.build();
List<CompactVesselTrack> dbTracks = gisService.getVesselTracks(dbRequest);
allTracks.addAll(dbTracks);
@@ -376,7 +339,6 @@ public class GisServiceV2 {
.startTime(today.getStart())
.endTime(today.getEnd())
.vessels(request.getVessels())
-.isIntegration(request.getIsIntegration())
.build();
List<CompactVesselTrack> todayTracks = gisService.getVesselTracks(todayRequest);
allTracks.addAll(todayTracks);
@@ -386,22 +348,13 @@ public class GisServiceV2 {
// 4. Merge same-vessel tracks (cache + DB results)
List<CompactVesselTrack> merged = mergeTracksByVessel(allTracks);
-allTracks.clear(); // release memory immediately: source list after merge
-// 5. Integration-vessel filtering (applied when isIntegration is null or "1"; skipped when "0")
-String isInteg = request.getIsIntegration();
-if (!"0".equals(isInteg) && integrationVesselService.isEnabled()) {
-    merged = filterByIntegration(merged);
-}
+allTracks.clear();
return merged;
}
// Semaphore slot acquisition
/**
 * REST V2 slot acquisition: immediate attempt, then blocking wait, then timeout exception
*/
private boolean acquireSlotWithWait(String queryId) {
if (activeQueryManager.tryAcquireQuerySlotImmediate(queryId)) {
return true;
@@ -422,20 +375,12 @@ public class GisServiceV2 {
// Simplification pipeline
/**
 * Two-stage simplification pipeline
 * [Stage 1] Standard simplification (DP + distance/time)
 * [Stage 2] Point-budget enforcement (uniform Nth-point sampling when the total point cap is exceeded)
*/
private List<CompactVesselTrack> applySimplificationPipeline(List<CompactVesselTrack> tracks) {
if (tracks == null || tracks.isEmpty()) {
return tracks;
}
// Stage 1: standard simplification
tracks = cacheTrackSimplifier.simplify(tracks, CacheTrackSimplifier.SimplificationConfig.builder().build());
// Stage 2: enforce point budget
tracks = cacheTrackSimplifier.enforcePointBudget(tracks, maxTotalPoints);
return tracks;
@@ -443,19 +388,14 @@ public class GisServiceV2 {
// Per-vessel track merging
/**
 * Merges tracks of the same vessel (vesselId)
 * The cache and DB may both contain data for the same vessel, so geometry/timestamps/speeds are concatenated
*/
private List<CompactVesselTrack> mergeTracksByVessel(List<CompactVesselTrack> tracks) {
if (tracks == null || tracks.size() <= 1) {
return tracks != null ? tracks : Collections.emptyList();
}
Map<String, List<CompactVesselTrack>> grouped = tracks.stream()
-.collect(Collectors.groupingBy(t -> t.getSigSrcCd() + "_" + t.getTargetId()));
+.collect(Collectors.groupingBy(CompactVesselTrack::getVesselId));
// No merge needed (every vessel appears exactly once)
if (grouped.values().stream().allMatch(list -> list.size() == 1)) {
return tracks;
}
@@ -470,7 +410,6 @@ public class GisServiceV2 {
continue;
}
// Merge using the first track as the base
CompactVesselTrack base = vesselTracks.get(0);
List<double[]> allGeometry = new ArrayList<>(base.getGeometry() != null ? base.getGeometry() : Collections.emptyList());
List<String> allTimestamps = new ArrayList<>(base.getTimestamps() != null ? base.getTimestamps() : Collections.emptyList());
@@ -491,13 +430,10 @@ public class GisServiceV2 {
CompactVesselTrack mergedTrack = CompactVesselTrack.builder()
.vesselId(base.getVesselId())
.sigSrcCd(base.getSigSrcCd())
-.targetId(base.getTargetId())
.nationalCode(base.getNationalCode())
.shipName(base.getShipName())
.shipType(base.getShipType())
.shipKindCode(base.getShipKindCode())
-.integrationTargetId(base.getIntegrationTargetId())
.geometry(allGeometry)
.timestamps(allTimestamps)
.speeds(allSpeeds)
@@ -514,12 +450,11 @@ public class GisServiceV2 {
return merged;
}
-// Existing utility methods (unchanged)
+// Utility methods
private TrackResponse mapTrackResponse(ResultSet rs, int rowNum) throws SQLException {
return TrackResponse.builder()
-.sigSrcCd(rs.getString("sig_src_cd"))
-.targetId(rs.getString("target_id"))
+.mmsi(rs.getString("mmsi"))
.timeBucket(rs.getObject("time_bucket", LocalDateTime.class))
.trackGeom(rs.getString("track_geom"))
.distanceNm(rs.getBigDecimal("distance_nm"))
@@ -529,10 +464,8 @@ public class GisServiceV2 {
.build();
}
-private TrackConverter.VesselInfo getVesselInfo(String sigSrcCd, String targetId) {
-    String cacheKey = sigSrcCd + "_" + targetId;
-    VesselInfoCache cached = vesselInfoCache.get(cacheKey);
+private TrackConverter.VesselInfo getVesselInfo(String mmsi) {
+    VesselInfoCache cached = vesselInfoCache.get(mmsi);
if (cached != null && !cached.isExpired()) {
return new TrackConverter.VesselInfo(cached.shipName, cached.shipType);
}
@@ -540,17 +473,17 @@ public class GisServiceV2 {
JdbcTemplate jdbcTemplate = new JdbcTemplate(queryDataSource);
try {
String sql = """
-SELECT ship_nm, ship_ty
-FROM signal.t_vessel_latest_position
-WHERE sig_src_cd = ? AND target_id = ?
+SELECT ship_nm, vessel_type
+FROM signal.t_ais_position
+WHERE mmsi = ?
LIMIT 1
""";
-Map<String, Object> result = jdbcTemplate.queryForMap(sql, sigSrcCd, targetId);
+Map<String, Object> result = jdbcTemplate.queryForMap(sql, mmsi);
String shipName = result.get("ship_nm") != null ? result.get("ship_nm").toString() : "-";
-String shipType = result.get("ship_ty") != null ? result.get("ship_ty").toString() : "-";
+String shipType = result.get("vessel_type") != null ? result.get("vessel_type").toString() : "-";
-vesselInfoCache.put(cacheKey, new VesselInfoCache(shipName, shipType));
+vesselInfoCache.put(mmsi, new VesselInfoCache(shipName, shipType));
return new TrackConverter.VesselInfo(shipName, shipType);
} catch (Exception e) {
@@ -558,79 +491,6 @@ public class GisServiceV2 {
}
}
private List<CompactVesselTrack> filterByIntegration(List<CompactVesselTrack> tracks) {
if (tracks == null || tracks.isEmpty()) {
return tracks;
}
Map<String, IntegrationVessel> vesselIntegrations = new HashMap<>();
for (CompactVesselTrack track : tracks) {
String key = track.getSigSrcCd() + "_" + track.getTargetId();
if (!vesselIntegrations.containsKey(key)) {
IntegrationVessel integration = integrationVesselService.findByVessel(
track.getSigSrcCd(), track.getTargetId()
);
vesselIntegrations.put(key, integration);
}
}
Map<Long, List<CompactVesselTrack>> groupedByIntegration = new HashMap<>();
Map<Long, IntegrationVessel> integrationMap = new HashMap<>();
long tempSeq = -1;
for (CompactVesselTrack track : tracks) {
String key = track.getSigSrcCd() + "_" + track.getTargetId();
IntegrationVessel integration = vesselIntegrations.get(key);
Long seq;
if (integration != null) {
seq = integration.getIntgrSeq();
integrationMap.putIfAbsent(seq, integration);
} else {
seq = tempSeq--;
}
groupedByIntegration.computeIfAbsent(seq, k -> new ArrayList<>()).add(track);
}
List<CompactVesselTrack> result = new ArrayList<>();
for (Map.Entry<Long, List<CompactVesselTrack>> entry : groupedByIntegration.entrySet()) {
Long seq = entry.getKey();
List<CompactVesselTrack> groupTracks = entry.getValue();
if (seq < 0) {
CompactVesselTrack firstTrack = groupTracks.get(0);
String soloIntegrationId = IntegrationSignalConstants.generateSoloIntegrationId(
firstTrack.getSigSrcCd(),
firstTrack.getTargetId()
);
groupTracks.forEach(t -> t.setIntegrationTargetId(soloIntegrationId));
result.addAll(groupTracks);
} else {
IntegrationVessel integration = integrationMap.get(seq);
Set<String> existingSigSrcCds = groupTracks.stream()
.map(CompactVesselTrack::getSigSrcCd)
.collect(Collectors.toSet());
String selectedSigSrcCd = integrationVesselService.selectHighestPriorityFromExisting(existingSigSrcCds);
List<CompactVesselTrack> selectedTracks = groupTracks.stream()
.filter(t -> t.getSigSrcCd().equals(selectedSigSrcCd))
.collect(Collectors.toList());
String integrationId = integration.generateIntegrationId();
selectedTracks.forEach(t -> t.setIntegrationTargetId(integrationId));
result.addAll(selectedTracks);
}
}
log.info("[INTEGRATION_FILTER] V2 API - Filtered {} tracks to {} tracks", tracks.size(), result.size());
return result;
}
private static class VesselInfoCache {
String shipName;
String shipType;
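The `VesselInfoCache` above memoizes ship name/type per `mmsi` with expiry, so stale entries are re-read from `signal.t_ais_position` on the next lookup. A minimal, self-contained sketch of that pattern (class names, the TTL value, and the `"UNKNOWN"` fallback are hypothetical, not the actual implementation; the real service runs a DB query on a miss):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class VesselInfoCacheSketch {
    static final long TTL_MILLIS = 10 * 60 * 1000; // assumed 10-minute TTL

    // Cached vessel info with creation time for expiry checks
    record Entry(String shipName, String shipType, long createdAt) {
        boolean isExpired(long now) {
            return now - createdAt > TTL_MILLIS;
        }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();

    public String lookupShipName(String mmsi, long now) {
        Entry e = cache.get(mmsi);
        if (e != null && !e.isExpired(now)) {
            return e.shipName(); // cache hit, still fresh
        }
        // cache miss or expired: in the real service this is where t_ais_position is queried
        Entry fresh = new Entry("UNKNOWN", "-", now);
        cache.put(mmsi, fresh);
        return fresh.shipName();
    }

    public static void main(String[] args) {
        VesselInfoCacheSketch c = new VesselInfoCacheSketch();
        c.cache.put("440123456", new Entry("EVER GIVEN", "Cargo", 0L));
        System.out.println(c.lookupShipName("440123456", 1_000L));            // fresh entry
        System.out.println(c.lookupShipName("440123456", TTL_MILLIS + 1_000L)); // expired entry
    }
}
```

A `ConcurrentHashMap` keyed directly by `mmsi` also removes the string-concatenation key (`sigSrcCd + "_" + targetId`) that the old code built on every lookup.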


@@ -54,18 +54,8 @@ public class VesselContactService {
return buildEmptyResponse(request, targetDates, startMs);
}
-// 3. sigSrcCd filter
-String targetSigSrcCd = request.getSigSrcCd();
-Map<String, CompactVesselTrack> filtered = new HashMap<>();
-for (Map.Entry<String, CompactVesselTrack> entry : mergedTracks.entrySet()) {
-    if (targetSigSrcCd.equals(entry.getValue().getSigSrcCd())) {
-        filtered.put(entry.getKey(), entry.getValue());
-    }
-}
-if (filtered.isEmpty()) {
-    return buildEmptyResponse(request, targetDates, startMs);
-}
+// 3. Use the merged tracks directly (single collection source, so no filter is needed)
+Map<String, CompactVesselTrack> filtered = mergedTracks;
// 4. JTS Polygon + PreparedGeometry
VesselContactRequest.SearchPolygon poly = request.getPolygon();
@@ -94,8 +84,8 @@ public class VesselContactService {
}
int totalVesselsInPolygon = insidePositions.size();
-log.info("Vessel contact: sigSrcCd={}, filtered={}, insidePolygon={}, dates={}",
-        targetSigSrcCd, filtered.size(), totalVesselsInPolygon, targetDates.size());
+log.info("Vessel contact: filtered={}, insidePolygon={}, dates={}",
+        filtered.size(), totalVesselsInPolygon, targetDates.size());
// 6. Pre-filter by time-range overlap + per-vessel-pair contact detection
List<String> vesselIds = new ArrayList<>(insidePositions.keySet());
@@ -336,7 +326,6 @@ public class VesselContactService {
.shipType(track.getShipType())
.shipKindCode(track.getShipKindCode())
.nationalCode(track.getNationalCode())
-.integrationTargetId(track.getIntegrationTargetId())
.insidePolygonStartTs(startTs)
.insidePolygonEndTs(endTs)
.insidePolygonDurationMinutes(durationMin)
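Step 6 above pre-filters vessel pairs by time-range overlap before the expensive per-pair contact check. A minimal sketch of that pre-filter idea (all names here are hypothetical; in the real service, only the surviving pairs go on to the spatial distance test):

```java
import java.util.List;

public class OverlapPrefilterSketch {
    // Inside-polygon interval for one vessel, in epoch millis
    record Interval(long startTs, long endTs) {
        // Closed intervals overlap iff each starts before the other ends
        boolean overlaps(Interval other) {
            return startTs <= other.endTs && other.startTs <= endTs;
        }
    }

    static int countCandidatePairs(List<Interval> intervals) {
        int candidates = 0;
        for (int i = 0; i < intervals.size(); i++) {
            for (int j = i + 1; j < intervals.size(); j++) {
                if (intervals.get(i).overlaps(intervals.get(j))) {
                    candidates++; // only these pairs proceed to the contact check
                }
            }
        }
        return candidates;
    }

    public static void main(String[] args) {
        List<Interval> iv = List.of(
                new Interval(0, 100),
                new Interval(50, 150),   // overlaps the first
                new Interval(200, 300)); // overlaps neither
        System.out.println(countCandidatePairs(iv));
    }
}
```

Two vessels that were never inside the polygon at the same time cannot have been in contact, so the O(n²) pair loop can skip the geometry work for those pairs entirely.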


@@ -154,8 +154,7 @@ public class SequentialPassageController {
String sql = String.format("""
WITH vessel_zones AS (
SELECT
-sig_src_cd,
-target_id,
+mmsi,
COUNT(DISTINCT %s) as zone_count,
array_agg(DISTINCT %s ORDER BY %s) as visited_zones,
MIN(time_bucket) as first_seen,
@ -165,7 +164,7 @@ public class SequentialPassageController {
FROM signal.%s
WHERE time_bucket BETWEEN ? AND ?
AND %s = ANY(?)
-GROUP BY sig_src_cd, target_id
+GROUP BY mmsi
HAVING COUNT(DISTINCT %s) = ?
)
SELECT * FROM vessel_zones
@@ -208,8 +207,7 @@ public class SequentialPassageController {
private SequentialPassageResponse.VesselPassage buildVesselPassage(
Map<String, Object> row, SequentialPassageRequest request) {
-String sigSrcCd = (String) row.get("sig_src_cd");
-String targetId = (String) row.get("target_id");
+String mmsi = (String) row.get("mmsi");
// Build per-zone passage info
List<SequentialPassageResponse.ZonePassage> zonePassages = new ArrayList<>();
@@ -233,11 +231,10 @@ public class SequentialPassageController {
}
// Look up vessel info (cache may be leveraged)
-SequentialPassageResponse.VesselInfo vesselInfo = getVesselInfo(sigSrcCd, targetId);
+SequentialPassageResponse.VesselInfo vesselInfo = getVesselInfo(mmsi);
return SequentialPassageResponse.VesselPassage.builder()
-.sigSrcCd(sigSrcCd)
-.targetId(targetId)
+.mmsi(mmsi)
.vesselInfo(vesselInfo)
.zonePassages(zonePassages)
.build();
@@ -251,18 +248,18 @@ public class SequentialPassageController {
}
}
-private SequentialPassageResponse.VesselInfo getVesselInfo(String sigSrcCd, String targetId) {
+private SequentialPassageResponse.VesselInfo getVesselInfo(String mmsi) {
JdbcTemplate jdbcTemplate = new JdbcTemplate(queryDataSource);
String sql = """
-SELECT ship_nm as ship_name, ship_ty as ship_type
-FROM signal.t_vessel_latest_position
-WHERE sig_src_cd = ? AND target_id = ?
+SELECT ship_nm as ship_name, vessel_type as ship_type
+FROM signal.t_ais_position
+WHERE mmsi = ?
LIMIT 1
""";
try {
-Map<String, Object> result = jdbcTemplate.queryForMap(sql, sigSrcCd, targetId);
+Map<String, Object> result = jdbcTemplate.queryForMap(sql, mmsi);
return SequentialPassageResponse.VesselInfo.builder()
.shipName(result.get("ship_name") != null ? (String) result.get("ship_name") : null)
.shipType(result.get("ship_type") != null ? (String) result.get("ship_type") : null)
@@ -283,7 +280,7 @@ public class SequentialPassageController {
String sql = """
SELECT
-COUNT(DISTINCT CONCAT(sig_src_cd, '_', target_id)) as unique_vessels,
+COUNT(DISTINCT mmsi) as unique_vessels,
COUNT(*) as total_passages,
SUM(distance_nm) as total_distance,
AVG(avg_speed) as avg_speed,
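The `vessel_zones` CTE above groups sightings by `mmsi` and keeps only vessels whose `COUNT(DISTINCT zone)` equals the number of requested zones. The same idea in plain Java (hypothetical names and in-memory data standing in for the SQL query):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class SequentialPassageSketch {
    // Returns the mmsis of vessels seen in every requested zone
    static List<String> vesselsVisitingAllZones(Map<String, List<String>> sightingsByMmsi,
                                                Set<String> requestedZones) {
        List<String> result = new ArrayList<>();
        for (var e : sightingsByMmsi.entrySet()) {
            // Deduplicate zones, analogous to COUNT(DISTINCT zone)
            Set<String> distinctZones = new HashSet<>(e.getValue());
            distinctZones.retainAll(requestedZones);
            if (distinctZones.size() == requestedZones.size()) { // HAVING COUNT(DISTINCT zone) = ?
                result.add(e.getKey());
            }
        }
        Collections.sort(result);
        return result;
    }

    public static void main(String[] args) {
        Map<String, List<String>> sightings = Map.of(
                "440111222", List.of("A", "B", "A"), // visited both zones
                "440333444", List.of("A"));          // visited only one
        System.out.println(vesselsVisitingAllZones(sightings, Set.of("A", "B")));
    }
}
```

Note this only checks coverage of the zones, not visit order; ordered passage would additionally compare `array_agg(... ORDER BY ...)` against the requested sequence, as the SQL does with `visited_zones`.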

Some files were not shown because too many files have changed in this diff Show More