Merge pull request 'release: switch to SNP API + in-memory cache optimization + unified multi-tier cache lookup' (#7) from develop into main

Reviewed-on: #7
This commit is contained in:
htlee 2026-02-19 14:26:30 +09:00
Commit b9ace1681c
206 changed files with 15,684 additions and 8,042 deletions

@@ -0,0 +1,70 @@
# /analyze-batch - Batch Job Analysis
Analyzes and diagnoses Spring Batch job code.
## Analysis Targets
### 1. Job Configuration
Check the following files:
- `src/main/java/**/config/` - batch configuration
- `src/main/java/**/job/` - Job definitions
- Job, Step, Reader, Processor, Writer composition
### 2. Scheduling Setup
- Usage of the @Scheduled annotation
- Quartz or other scheduler configuration
- Cron expression analysis
### 3. Data Processing Patterns
- ItemReader implementations (DB, file, API, etc.)
- ItemProcessor logic
- ItemWriter implementations (bulk insert, file output, etc.)
- Chunk size configuration
### 4. Error Handling
- Skip policy
- Retry policy
- Listener implementations (JobExecutionListener, StepExecutionListener)
### 5. Performance Analysis
- Chunk size adequacy
- Parallel processing setup (partitioning, multi-threading)
- Connection pool configuration
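For reference, a minimal sketch of the Job/Step/chunk wiring this analysis inspects (Spring Batch 5 style, matching Spring Boot 3.x; the job name, item types, chunk size, and skip policy below are illustrative assumptions, not this project's actual configuration):
```java
import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.job.builder.JobBuilder;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.step.builder.StepBuilder;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.ItemWriter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
class SampleJobConfig {

    // A Job is a sequence of Steps; the single Step here is chunk-oriented.
    @Bean
    Job sampleJob(JobRepository jobRepository, Step sampleStep) {
        return new JobBuilder("sampleJob", jobRepository)
                .start(sampleStep)
                .build();
    }

    // Reads items, transforms them, and writes them in transactional chunks.
    @Bean
    Step sampleStep(JobRepository jobRepository,
                    PlatformTransactionManager txManager,
                    ItemReader<String> reader,
                    ItemProcessor<String, String> processor,
                    ItemWriter<String> writer) {
        return new StepBuilder("sampleStep", jobRepository)
                .<String, String>chunk(1000, txManager) // chunk size: a key tuning knob
                .reader(reader)
                .processor(processor)
                .writer(writer)
                .faultTolerant()                        // skip/retry policies attach here
                .skipLimit(100)
                .skip(IllegalArgumentException.class)
                .build();
    }
}
```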
## Output Format
```markdown
## Batch Job Analysis Results
### Job List
| Job Name | Steps | Schedule | Description |
|----------|-------|----------|-------------|
| xxxJob | 3 | 0 0 * * * | ... |
### Data Flow
```
[Reader] → [Processor] → [Writer]
↓ ↓ ↓
[Data source] [Transform logic] [Destination]
```
### Error Handling Settings
- Skip policy: [settings]
- Retry policy: [settings]
### Performance Settings
- Chunk size: [value]
- Parallel processing: [enabled or not]
### Improvement Suggestions
1. [Suggestion 1]
2. [Suggestion 2]
```
## Arguments
`$ARGUMENTS`: a specific Job name or keyword
Examples:
- `/analyze-batch` - analyze everything
- `/analyze-batch signal` - analyze only signal-related batch jobs

@@ -0,0 +1,64 @@
# /build-check - Build and Test Check
Checks the build status and test results of the Maven project.
## Tasks
### 1. Compile Check
```bash
mvn clean compile -DskipTests
```
- Check for compile errors
- Check for dependency problems
### 2. Run Tests (optional)
```bash
mvn test
```
- Unit test results
- Analysis of failing tests
### 3. Package Build (optional)
```bash
mvn package -DskipTests
```
- Verify JAR/WAR creation
- Verify build artifacts
## Output Format
```markdown
## Build Check Results
### Compile
- Status: [success/failure]
- Errors (if any): [error details]
### Tests
- Status: [success/failure/skipped]
- Passed: [N]
- Failed: [N]
- Failing tests (if any):
- [test name]: [failure cause]
### Package
- Status: [success/failure/skipped]
- Artifact: [file path]
### Recommended Actions
1. [Action 1]
2. [Action 2]
```
## Arguments
`$ARGUMENTS`: option
- `compile` - compile only
- `test` - compile + tests
- `package` - full package build
- (none) - compile only (default)
Examples:
- `/build-check` - compile check
- `/build-check test` - include tests
- `/build-check package` - full build

@@ -0,0 +1,66 @@
# /clarify - Requirements Clarification
Generates questions to clarify requirements when a new feature or bug fix is requested.
## When to Use
- When the user's request is ambiguous
- When multiple implementation approaches are possible
- When business requirements need confirmation
## Question Categories
### 1. Feature Scope
- What is the exact scope of this feature?
- Which services/components will use it?
- How does it relate to existing features?
### 2. API Design
- Is a REST API endpoint design needed?
- What are the request/response formats?
- Does it follow existing API patterns?
### 3. Data
- What data is needed?
- What is the data source? (DB, external API, file)
- Is data persistence required?
### 4. Error Handling
- What error cases are expected?
- How should errors be handled? (retry, logging, alerting)
### 5. Performance
- What data volume is expected?
- Is batch processing needed?
- Are there performance requirements?
### 6. Deployment/Environment
- Should this run only in a specific environment (dev/qa/prod)?
- Are per-profile settings needed?
## Output Format
```markdown
## Requirements Clarification Questions
### Feature Scope
1. [Question 1]
2. [Question 2]
### API Design
1. [Question 1]
### Data
1. [Question 1]
...
---
I will draw up an implementation plan based on your answers.
```
## Arguments
`$ARGUMENTS`: a summary of the user's request
Example: `/clarify batch saving of vessel positions`

@@ -0,0 +1,72 @@
# /perf-check - Performance Check Command
Checks for performance issues in the Spring Boot batch application.
## Analysis Areas
### 1. Database Performance
- JPA/MyBatis query analysis
- N+1 problem detection
- Index usage
- Whether batch insert/update is applied
### 2. Memory Management
- Memory usage when processing large data sets
- Stream usage
- Whether paging is applied
### 3. Batch Processing
- Chunk size adequacy
- Parallel processing setup
- Reader/Writer optimization
### 4. Connection Management
- Connection pool settings (HikariCP)
- Transaction scope adequacy
- Potential connection leaks
### 5. External Communication
- HTTP client settings (timeouts, connection pool)
- Retry policy
- Circuit breaker pattern usage
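As a reference point for the batch insert/update check above, a minimal JdbcTemplate batch-insert sketch (the table and column names are illustrative assumptions):
```java
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;
import org.springframework.jdbc.core.BatchPreparedStatementSetter;
import org.springframework.jdbc.core.JdbcTemplate;

class PositionWriter {

    private final JdbcTemplate jdbcTemplate;

    PositionWriter(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // One round trip per batch instead of one per row.
    void bulkInsert(List<double[]> positions) {
        jdbcTemplate.batchUpdate(
                "INSERT INTO vessel_position (lon, lat) VALUES (?, ?)",
                new BatchPreparedStatementSetter() {
                    @Override
                    public void setValues(PreparedStatement ps, int i) throws SQLException {
                        ps.setDouble(1, positions.get(i)[0]);
                        ps.setDouble(2, positions.get(i)[1]);
                    }

                    @Override
                    public int getBatchSize() {
                        return positions.size();
                    }
                });
    }
}
```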
## Output Format
```markdown
## Performance Check Results
### Database
- [ ] N+1 problem: [found or not]
- [ ] Batch processing: [status]
- [ ] Index usage: [status]
### Memory
- [ ] Large data handling: [status]
- [ ] Stream usage: [applied or not]
- [ ] Paging: [applied or not]
### Batch Processing
- [ ] Chunk size: [value and adequacy]
- [ ] Parallel processing: [status]
### Connection Management
- [ ] Pool settings: [status]
- [ ] Transaction scope: [adequacy]
### External Communication
- [ ] Timeout settings: [status]
- [ ] Retry policy: [applied or not]
### Priority Improvements
1. [Item 1] - expected impact: [description]
2. [Item 2] - expected impact: [description]
```
## Arguments
`$ARGUMENTS`: check a specific area only (db, memory, batch, connection, external)
Examples:
- `/perf-check` - check everything
- `/perf-check db` - database only
- `/perf-check batch` - batch processing only

.claude/commands/wrap.md Normal file
@@ -0,0 +1,65 @@
# /wrap - Session Wrap-up Command
Performs the following tasks in parallel at the end of a session.
## Tasks to Run (parallel agents)
### 1. Documentation Update Check
Check whether the following files need updating:
- `CLAUDE.md`: whether new patterns or conventions were discovered
- Whether any significant technical decisions were made this session
### 2. Repeated-Pattern Analysis
Analyze whether any work was done repeatedly this session:
- Similar code patterns written multiple times
- The same command run repeatedly
- Workflows that could be automated
Propose automating any discovered patterns via `/commands`.
### 3. Lessons Learned
Summarize what was learned this session:
- Newly discovered characteristics of the codebase
- Problems solved and how they were solved
- Things to watch out for going forward
### 4. Unfinished Work
If any work was left incomplete, summarize it:
- Items remaining on the TODO list
- Work to continue in the next session
- Blockers or dependency issues
### 5. Code Quality Check
For files modified this session:
- No compile errors (`mvn compile`)
- Tests pass (`mvn test`)
## Output Format
```markdown
## Session Summary
### Completed Work
- [Task 1]
- [Task 2]
### Documentation Updates Needed
- [ ] CLAUDE.md: [what to update]
### Discovered Patterns (automation suggestions)
- [Pattern]: [how to automate]
### Lessons Learned
- [Item 1]
- [Item 2]
### Unfinished Work
- [ ] [Task 1]
- [ ] [Task 2]
### Code Quality
- Compile: [result]
- Test: [result]
```
When running this command, use the Task tool to run multiple agents **in parallel**.

@@ -0,0 +1,73 @@
# Java Code Style Rules
## General
- Use Java 17+ syntax (records, sealed classes, pattern matching, text blocks)
- Indentation: 4 spaces (no tabs)
- Line length: 120 characters max
- End files with a trailing newline
## Class Structure
Member order within a class:
1. static constants (public → private)
2. instance fields (public → private)
3. constructors
4. public methods
5. protected/package-private methods
6. private methods
7. inner classes/enums
## Spring Boot Rules
### Layering
- One-way dependencies: Controller → Service → Repository
- No business logic in controllers (request/response mapping only)
- No circular references between services
- No business logic in repositories
### DTO vs. Entity Separation
- Never expose entities directly in API requests/responses
- Write DTOs as records or immutable classes
- Convert DTO ↔ Entity via mapper classes or factory methods (a sketch follows)
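A minimal sketch of this rule (the `User` entity and its getters are illustrative assumptions):
```java
// Immutable response DTO as a record, built from the entity via a factory method.
public record UserResponse(Long id, String name, String email) {

    // Keeps the Entity -> DTO mapping next to the DTO instead of in the controller.
    public static UserResponse from(User user) {
        return new UserResponse(user.getId(), user.getName(), user.getEmail());
    }
}
```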
### Dependency Injection
- Use constructor injection (field injection with `@Autowired` is forbidden)
- Omit the `@Autowired` annotation when there is a single constructor
- Lombok `@RequiredArgsConstructor` is allowed
### Transactions
- Keep the `@Transactional` scope minimal
- Read-only paths: `@Transactional(readOnly = true)`
- Apply at the service method level (avoid class level); a combined sketch follows
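A minimal service sketch combining the two rules above (the repository, DTOs, and their methods are illustrative assumptions):
```java
import java.util.List;
import lombok.RequiredArgsConstructor;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
@RequiredArgsConstructor // constructor injection via Lombok; no field @Autowired
public class UserService {

    private final UserRepository userRepository;

    // readOnly = true on the query path only, applied at method level.
    @Transactional(readOnly = true)
    public List<UserResponse> getUsers() {
        return userRepository.findAll().stream()
                .map(UserResponse::from)
                .toList();
    }

    @Transactional
    public Long createUser(CreateUserRequest request) {
        return userRepository.save(request.toEntity()).getId();
    }
}
```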
## Lombok Rules
- `@Getter`, `@Setter` allowed (avoid setters on entities)
- `@Builder` allowed
- `@Data` forbidden (use only the annotations you explicitly need)
- `@AllArgsConstructor` must not be used alone (pair it with `@Builder`)
## Logging
- Use the `@Slf4j` (Lombok) logger
- Do not use printf-style formats inside SLF4J `{}` placeholders (`{:.1f}`, `{:d}`, `{%s}`, etc.)
- If numeric formatting is needed, convert with `String.format()` first
```java
// Wrong
log.info("Processing rate: {:.1f}%", rate);
// Right
log.info("Processing rate: {}%", String.format("%.1f", rate));
```
- When logging exceptions, pass the exception object as the last argument (no placeholder needed)
```java
log.error("Processing failed: {}", id, exception);
```
## Exception Handling
- Define custom Exception classes for business errors
- Handle exceptions globally with `@ControllerAdvice` (a sketch follows)
- Include context information in exception messages
- Never swallow exceptions in catch blocks (no `// ignore`)
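A minimal `@ControllerAdvice` sketch of the global-handler rule (the exception type matches the naming rules elsewhere in this repo; the error body shape is an illustrative assumption):
```java
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;

@ControllerAdvice
public class GlobalExceptionHandler {

    // One handler per business exception type; the message carries context.
    @ExceptionHandler(UserNotFoundException.class)
    public ResponseEntity<String> handleUserNotFound(UserNotFoundException e) {
        return ResponseEntity.status(HttpStatus.NOT_FOUND).body(e.getMessage());
    }
}
```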
## Miscellaneous
- Use `Optional` only as a return type (never for fields or parameters)
- Prefer returning an empty collection or `Optional` over `null` (a short sketch follows)
- Use the Stream API (but extract a method once chaining exceeds 3 stages)
- No hard-coded strings/numbers: extract them to constants or configuration
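A short sketch of the null-avoidance rules (the `User.isActive()` accessor is an illustrative assumption):
```java
import java.util.Collections;
import java.util.List;
import java.util.Optional;

class UserQueries {

    // Optional as a return type only: callers must handle absence explicitly.
    Optional<User> findFirstActive(List<User> users) {
        return users.stream().filter(User::isActive).findFirst();
    }

    // Empty collection instead of null for "no results".
    List<User> activeUsers(List<User> users) {
        if (users == null) {
            return Collections.emptyList();
        }
        return users.stream().filter(User::isActive).toList();
    }
}
```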

@@ -0,0 +1,84 @@
# Git Workflow Rules
## Branching Strategy
### Branch Structure
```
main ← deployable stable branch (protected)
└── develop ← development integration branch
├── feature/ISSUE-123-feature-description
├── bugfix/ISSUE-456-bug-description
└── hotfix/ISSUE-789-urgent-fix
```
### Branch Naming
- feature branches: `feature/ISSUE-number-short-description` (e.g. `feature/ISSUE-42-user-login`)
- bugfix branches: `bugfix/ISSUE-number-short-description`
- hotfix branches: `hotfix/ISSUE-number-short-description`
- Without an issue number: `feature/short-description` (e.g. `feature/add-swagger-docs`)
### Branch Rules
- No direct commits/pushes to main or develop
- feature branches fork from develop
- hotfix branches fork from main
- All merges must go through an MR (Merge Request)
## Commit Message Rules
### Conventional Commits Format
```
type(scope): subject

body (optional)

footer (optional)
```
### type (required)
| type | Description |
|------|-------------|
| feat | new feature |
| fix | bug fix |
| docs | documentation change |
| style | code formatting (no functional change) |
| refactor | refactoring (no functional change) |
| test | add/modify tests |
| chore | build or configuration change |
| ci | CI/CD configuration change |
| perf | performance improvement |
### scope (optional)
- A short word describing the area of change
- Korean and English both allowed (e.g. `feat(인증): 로그인 기능`, `fix(auth): token refresh`)
### subject (required)
- Concise description of the change
- Korean and English both allowed
- 72 characters max
- No trailing period
### Examples
```
feat(auth): JWT 기반 로그인 구현
fix(배치): 야간 배치 타임아웃 수정
docs: README에 빌드 방법 추가
refactor(user-service): 중복 로직 추출
test(결제): 환불 로직 단위 테스트 추가
chore: Gradle 의존성 버전 업데이트
```
## MR (Merge Request) Rules
### Creating an MR
- Title: same Conventional Commits format as commit messages
- Body: summary of changes, how to test, related issue numbers
- Labels: attach appropriate labels (feature, bugfix, hotfix, etc.)
### MR Review
- At least one reviewer approval required
- CI checks must pass (where configured)
- Resolve all review comments before merging
### Merging an MR
- Squash merge recommended (clean history)
- Delete the source branch after merging

.claude/rules/naming.md Normal file
@@ -0,0 +1,60 @@
# Java Naming Rules
## Packages
- All lowercase, singular
- Reverse-domain: `com.gcsc.<project>.<module>`
- e.g. `com.gcsc.batch.scheduler`, `com.gcsc.api.auth`
## Classes
- PascalCase
- Nouns or noun phrases
- Role indicated by suffix:
| Layer | Suffix | Example |
|-------|--------|---------|
| Controller | `Controller` | `UserController` |
| Service | `Service` | `UserService` |
| Service impl | `ServiceImpl` | `UserServiceImpl` (only when an interface exists) |
| Repository | `Repository` | `UserRepository` |
| Entity | (none) | `User`, `ShipRoute` |
| Request DTO | `Request` | `CreateUserRequest` |
| Response DTO | `Response` | `UserResponse` |
| Configuration | `Config` | `SecurityConfig` |
| Exception | `Exception` | `UserNotFoundException` |
| Enum | (none) | `UserStatus`, `ShipType` |
| Mapper | `Mapper` | `UserMapper` |
## Methods
- camelCase
- Start with a verb
- CRUD patterns (a sketch follows the table):
| Operation | Controller | Service | Repository |
|-----------|-----------|---------|------------|
| Get (single) | `getUser()` | `getUser()` | `findById()` |
| Get (list) | `getUsers()` | `getUsers()` | `findAll()` |
| Create | `createUser()` | `createUser()` | `save()` |
| Update | `updateUser()` | `updateUser()` | `save()` |
| Delete | `deleteUser()` | `deleteUser()` | `deleteById()` |
| Exists | - | `existsUser()` | `existsById()` |
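A minimal service-layer sketch of the naming above (`UserResponse` and `CreateUserRequest` follow the DTO suffix rules; all signatures are illustrative assumptions):
```java
import java.util.List;

// Hypothetical service interface exercising the CRUD naming table.
public interface UserService {

    UserResponse getUser(Long id);                        // get (single)

    List<UserResponse> getUsers();                        // get (list)

    Long createUser(CreateUserRequest request);           // create

    void updateUser(Long id, CreateUserRequest request);  // update

    void deleteUser(Long id);                             // delete

    boolean existsUser(Long id);                          // exists
}
```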
## Variables
- camelCase
- Meaningful names (no single-letter variables, except loop indexes `i, j, k`)
- booleans: `is`, `has`, `can`, `should` prefixes
- e.g. `isActive`, `hasPermission`, `canDelete`
## Constants
- UPPER_SNAKE_CASE
- e.g. `MAX_RETRY_COUNT`, `DEFAULT_PAGE_SIZE`
## Tests
- Class: `{TargetClass}Test` (e.g. `UserServiceTest`)
- Method: `{method}_{scenario}_{expectedResult}`, or a Korean `@DisplayName`
- e.g. `createUser_withDuplicateEmail_throwsException()`
- e.g. `@DisplayName("중복 이메일로 생성 시 예외 발생")` ("throws on creation with a duplicate email")
## Files/Directories
- Java files: PascalCase (same as the class name)
- Resource files: kebab-case (e.g. `application-local.yml`)
- SQL files: `V{number}__{description}.sql` (Flyway) or kebab-case

@@ -0,0 +1,34 @@
# Team Policy
These rules are mandatory across the organization.
Projects may define additional rules in `.claude/rules/`, but they must not violate this policy.
## Security Policy
### Forbidden Actions
- Reading or printing the contents of `.env`, `.env.*`, or `secrets/` files
- Hard-coding passwords, API keys, tokens, or other secrets in code
- Running `git push --force`, `git reset --hard`, or `git clean -fd`
- Running destructive commands such as `rm -rf /`, `rm -rf ~`, `rm -rf .git`
- Pushing directly to main/develop (merge only via MR)
### Credential Management
- Manage credentials via environment variables or external config files (`.env`, `application-local.yml`)
- Those config files must be listed in `.gitignore`
- Commit example files only (`.env.example`, `application.yml.example`)
## Code Quality Policy
### Required Checks
- The build (compile) must succeed before committing
- Keep lint warnings at zero (also enforced in CI)
- Projects with test code must have passing tests
### Code Review
- At least one review is required before merging to main
- No merging without reviewer approval
## Documentation Policy
- Public APIs (controller endpoints) must have descriptive comments
- Complex business logic must have comments explaining intent
- Keep build/run instructions up to date in README.md

.claude/rules/testing.md Normal file
@@ -0,0 +1,62 @@
# Java Testing Rules
## Test Frameworks
- JUnit 5 + AssertJ
- Mock dependencies with Mockito
- Use Spring Boot Test (`@SpringBootTest`) for integration tests only
## Test Structure
### Unit Tests
- Test Service, Util, and domain logic
- No Spring context loading (`@ExtendWith(MockitoExtension.class)`)
- Mock external dependencies with Mockito
```java
@ExtendWith(MockitoExtension.class)
class UserServiceTest {

    @InjectMocks
    private UserService userService;

    @Mock
    private UserRepository userRepository;

    @Test
    @DisplayName("사용자 생성 시 정상 저장") // "saves normally on user creation"
    void createUser_withValidInput_savesUser() {
        // given
        // when
        // then
    }
}
```
### Integration Tests
- Controller tests: `@WebMvcTest` + `MockMvc`
- Repository tests: `@DataJpaTest`
- Full-flow tests: `@SpringBootTest` (keep to a minimum)
### Test Pattern
- Use the **Given-When-Then** structure (a filled-in sketch follows)
- Mark each section with a comment
- One assertion concern per test (where practical)
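A filled-in Given-When-Then sketch in the structure above (the domain types, their constructors, and the stubbed repository method are illustrative assumptions):
```java
import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.BDDMockito.given;

import java.util.Optional;
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;

@ExtendWith(MockitoExtension.class)
class UserServiceGetTest {

    @InjectMocks
    private UserService userService;

    @Mock
    private UserRepository userRepository;

    @Test
    @DisplayName("존재하는 ID 조회 시 사용자 반환") // "returns the user for an existing id"
    void getUser_withExistingId_returnsUser() {
        // given: the repository knows user 1
        given(userRepository.findById(1L)).willReturn(Optional.of(new User(1L, "hong")));

        // when
        UserResponse response = userService.getUser(1L);

        // then
        assertThat(response.name()).isEqualTo("hong");
    }
}
```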
## Test Naming
- Method names: the `{method}_{scenario}_{expectedResult}` pattern
- `@DisplayName`: describe the test's intent in Korean
## Test Coverage
- Newly written Service classes: tests for core business logic are required
- When modifying existing code: tests for the modified logic are recommended
- Controllers: integration tests for key API endpoints are recommended
## Test Data
- Create test data inside the test method or in `@BeforeEach`
- Extract shared test data into a TestFixture class
- When a real DB is needed, use in-memory H2 or Testcontainers
## Forbidden
- No `@SpringBootTest` in unit tests
- No state sharing between tests
- No `Thread.sleep()`: use `Awaitility` instead (a sketch follows)
- No real external API calls: use WireMock or Mockito
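A minimal Awaitility sketch for the `Thread.sleep()` ban (the `AsyncJob` class being polled is an illustrative assumption):
```java
import static org.assertj.core.api.Assertions.assertThat;
import static org.awaitility.Awaitility.await;

import java.time.Duration;
import org.junit.jupiter.api.Test;

class AsyncJobTest {

    @Test
    void start_eventuallyMarksJobDone() {
        // given: some asynchronous job is kicked off (hypothetical)
        AsyncJob job = new AsyncJob();
        job.start();

        // then: poll for the condition instead of sleeping a fixed amount
        await().atMost(Duration.ofSeconds(5))
               .untilAsserted(() -> assertThat(job.isDone()).isTrue());
    }
}
```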

@@ -0,0 +1,14 @@
#!/bin/bash
# PostToolUse hook: after a Bash tool call, nudge Claude to update docs when a commit is detected.
INPUT=$(cat)
COMMAND=$(echo "$INPUT" | python3 -c "import sys,json;print(json.load(sys.stdin).get('tool_input',{}).get('command',''))" 2>/dev/null || echo "")
if echo "$COMMAND" | grep -qE 'git commit'; then
    cat <<RESP
{
  "hookSpecificOutput": {
    "additionalContext": "A commit was detected. Do the following:\n1. Add the change to docs/CHANGELOG.md\n2. Update the affected parts of memory/project-snapshot.md\n3. Append this change to memory/project-history.md\n4. Update memory/api-types.md if an API interface changed\n5. If the project has lint configured, check the lint results and fix any issues"
  }
}
RESP
else
    echo '{}'
fi

@@ -0,0 +1,23 @@
#!/bin/bash
# SessionStart(compact) hook: after compaction, re-inject a summary from the project's memory files.
INPUT=$(cat)
CWD=$(echo "$INPUT" | python3 -c "import sys,json;print(json.load(sys.stdin).get('cwd',''))" 2>/dev/null || echo "")
if [ -z "$CWD" ]; then
    CWD=$(pwd)
fi
# Claude Code keeps per-project memory under ~/.claude/projects/<path-with-dashes>/
PROJECT_HASH=$(echo "$CWD" | sed 's|/|-|g')
MEMORY_DIR="$HOME/.claude/projects/$PROJECT_HASH/memory"
CONTEXT=""
if [ -f "$MEMORY_DIR/MEMORY.md" ]; then
    # JSON-escape backslashes, quotes, and newlines before embedding in the response.
    SUMMARY=$(head -100 "$MEMORY_DIR/MEMORY.md" | python3 -c "import sys;print(sys.stdin.read().replace('\\\\','\\\\\\\\').replace('\"','\\\\\"').replace('\n','\\\\n'))" 2>/dev/null)
    CONTEXT="The context has been compacted.\\n\\n[Session summary]\\n${SUMMARY}"
fi
if [ -f "$MEMORY_DIR/project-snapshot.md" ]; then
    SNAP=$(head -50 "$MEMORY_DIR/project-snapshot.md" | python3 -c "import sys;print(sys.stdin.read().replace('\\\\','\\\\\\\\').replace('\"','\\\\\"').replace('\n','\\\\n'))" 2>/dev/null)
    CONTEXT="${CONTEXT}\\n\\n[Latest project state]\\n${SNAP}"
fi
if [ -n "$CONTEXT" ]; then
    CONTEXT="${CONTEXT}\\n\\nContinue the work with the above in mind. See the files in the memory/ directory for details."
    echo "{\"hookSpecificOutput\":{\"additionalContext\":\"${CONTEXT}\"}}"
else
    echo "{\"hookSpecificOutput\":{\"additionalContext\":\"The context has been compacted. No memory files were found, so ask the user about the previous work.\"}}"
fi

@@ -0,0 +1,8 @@
#!/bin/bash
# PreCompact hook: only systemMessage is supported here (hookSpecificOutput is not available)
INPUT=$(cat)
cat <<RESP
{
  "systemMessage": "Context compaction is about to start. You MUST do the following:\n\n1. memory/MEMORY.md - refresh the core work state (max 200 lines)\n2. memory/project-snapshot.md - update changed package/type info\n3. memory/project-history.md - append this session's changes\n4. memory/api-types.md - update it if any API interfaces changed\n5. If work is unfinished, record it in TodoWrite and in memory"
}
RESP

.claude/settings.json Normal file
@@ -0,0 +1,84 @@
{
  "$schema": "https://json.schemastore.org/claude-code-settings.json",
  "permissions": {
    "allow": [
      "Bash(./mvnw *)",
      "Bash(mvn *)",
      "Bash(java -version)",
      "Bash(java -jar *)",
      "Bash(git status)",
      "Bash(git diff *)",
      "Bash(git log *)",
      "Bash(git branch *)",
      "Bash(git checkout *)",
      "Bash(git add *)",
      "Bash(git commit *)",
      "Bash(git pull *)",
      "Bash(git fetch *)",
      "Bash(git merge *)",
      "Bash(git stash *)",
      "Bash(git remote *)",
      "Bash(git config *)",
      "Bash(git rev-parse *)",
      "Bash(git show *)",
      "Bash(git tag *)",
      "Bash(curl -s *)",
      "Bash(sdk *)"
    ],
    "deny": [
      "Bash(git push --force*)",
      "Bash(git push -f *)",
      "Bash(git push origin --force*)",
      "Bash(git reset --hard*)",
      "Bash(git clean -fd*)",
      "Bash(git checkout -- .)",
      "Bash(git restore .)",
      "Bash(rm -rf /)",
      "Bash(rm -rf ~)",
      "Bash(rm -rf .git*)",
      "Bash(rm -rf /*)",
      "Read(./**/.env)",
      "Read(./**/.env.*)",
      "Read(./**/secrets/**)",
      "Read(./**/application-local.yml)",
      "Read(./**/application-local.properties)"
    ]
  },
  "hooks": {
    "SessionStart": [
      {
        "matcher": "compact",
        "hooks": [
          {
            "type": "command",
            "command": "bash .claude/scripts/on-post-compact.sh",
            "timeout": 10
          }
        ]
      }
    ],
    "PreCompact": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "bash .claude/scripts/on-pre-compact.sh",
            "timeout": 30
          }
        ]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "bash .claude/scripts/on-commit.sh",
            "timeout": 15
          }
        ]
      }
    ]
  }
}

@@ -0,0 +1,65 @@
---
name: create-mr
description: Creates a Gitea MR (Merge Request) from the current branch
allowed-tools: "Bash, Read, Grep"
argument-hint: "[target-branch: develop|main] (default: develop)"
---
Creates an MR on Gitea based on the current branch's changes.
Target branch: $ARGUMENTS (default: develop)
## Steps
### 1. Pre-checks
- Verify the current branch is not main/develop
- Check for uncommitted changes (warn if any)
- Verify the current branch is pushed to the remote (push if not)
### 2. Analyze Changes
```bash
git log develop..HEAD --oneline
git diff develop..HEAD --stat
```
- Collect the commit list and the changed files
- Write a summary of the main changes
### 3. Compose MR Info
- **Title**: derived from the branch's first commit message or from the branch name
- `feature/ISSUE-42-user-login` → `feat: ISSUE-42 user-login`
- **Body**:
```markdown
## Changes
- (auto-generated from commits)
## Related Issues
- closes #issue-number (extracted from the branch name)
## Testing
- [ ] Build succeeds
- [ ] Existing tests pass
```
### 4. Create the MR via the Gitea API
```bash
# Extract owner/repo from the Gitea remote URL
REMOTE_URL=$(git remote get-url origin)

# Call the Gitea API
curl -X POST "GITEA_URL/api/v1/repos/{owner}/{repo}/pulls" \
    -H "Authorization: token ${GITEA_TOKEN}" \
    -H "Content-Type: application/json" \
    -d '{
        "title": "MR title",
        "body": "MR body",
        "head": "current-branch",
        "base": "target-branch"
    }'
```
### 5. Output
- Print the MR URL
- Prompt to assign reviewers
- Next steps: await review → approval → merge
## Required Environment Variables
- `GITEA_TOKEN`: Gitea API access token (explain how to set it if missing)

@@ -0,0 +1,49 @@
---
name: fix-issue
description: Analyzes a Gitea issue and creates a fix branch
allowed-tools: "Bash, Read, Write, Edit, Glob, Grep"
argument-hint: "<issue-number>"
---
Analyzes Gitea issue #$ARGUMENTS and starts the fix work.
## Steps
### 1. Fetch the Issue
```bash
curl -s "GITEA_URL/api/v1/repos/{owner}/{repo}/issues/$ARGUMENTS" \
    -H "Authorization: token ${GITEA_TOKEN}"
```
- Read the issue title, body, labels, and assignee
- Summarize the issue for the user
### 2. Create a Branch
Choose the branch type from the issue labels:
- `bug` label → `bugfix/ISSUE-number-description`
- otherwise → `feature/ISSUE-number-description`
- urgent → `hotfix/ISSUE-number-description`
```bash
git checkout develop
git pull origin develop
git checkout -b {type}/ISSUE-{number}-{slug}
```
### 3. Analyze the Issue
Based on the issue content:
- Locate the related files (using Grep and Glob)
- Determine the blast radius
- Propose a fix approach
### 4. Present a Fix Plan
Show the user the plan and get approval before proceeding:
- Files to change
- Summary of the changes
- Expected impact
### 5. After the Work
- Summarize the changes
- Point to `/create-mr`
## Required Environment Variables
- `GITEA_TOKEN`: Gitea API access token

@@ -0,0 +1,235 @@
---
name: init-project
description: Initializes a project with the team's standard workflow
allowed-tools: "Bash, Read, Write, Edit, Glob, Grep"
argument-hint: "[project-type: java-maven|java-gradle|react-ts|auto]"
---
Initializes the project according to the team's standard workflow.
Project type: $ARGUMENTS (default: auto, meaning auto-detect)
## Project Type Auto-detection
If $ARGUMENTS is "auto" or empty, detect in this order:
1. `pom.xml` exists → **java-maven**
2. `build.gradle` or `build.gradle.kts` exists → **java-gradle**
3. `package.json` + `tsconfig.json` exist → **react-ts**
4. Detection fails → ask the user to choose a type
## Steps
### 1. Analyze the Project
- Identify build files, config files, and the directory layout
- Detect the frameworks and libraries in use
- Check whether a `.claude/` directory already exists
- Check whether lint tools (eslint, prettier, checkstyle, spotless, etc.) are installed
### 2. Generate CLAUDE.md
Create CLAUDE.md at the project root, including:
- Project overview (name, type, main tech stack)
- Build/run commands (based on the detected build tool)
- Test command
- Lint command (based on the detected tools)
- Summary of the project directory layout
- Pointers to team conventions (`.claude/rules/`)
### 3. Set Up the .claude/ Directory
Skip anything that already exists from the team standard. Otherwise create:
- `.claude/settings.json` with the standard per-type permissions plus a hooks section (see step 4)
- `.claude/rules/` with the team rule files (team-policy, git-workflow, code-style, naming, testing)
- `.claude/skills/` with the team skills (create-mr, fix-issue, sync-team-workflow, init-project)
### 4. Generate Hook Scripts
Create the `.claude/scripts/` directory and the following script files (chmod +x):
- `.claude/scripts/on-pre-compact.sh`:
```bash
#!/bin/bash
# PreCompact hook: only systemMessage is supported here (hookSpecificOutput is not available)
INPUT=$(cat)
cat <<RESP
{
  "systemMessage": "Context compaction is about to start. You MUST do the following:\n\n1. memory/MEMORY.md - refresh the core work state (max 200 lines)\n2. memory/project-snapshot.md - update changed package/type info\n3. memory/project-history.md - append this session's changes\n4. memory/api-types.md - update it if any API interfaces changed\n5. If work is unfinished, record it in TodoWrite and in memory"
}
RESP
```
- `.claude/scripts/on-post-compact.sh`:
```bash
#!/bin/bash
# SessionStart(compact) hook: after compaction, re-inject a summary from the project's memory files.
INPUT=$(cat)
CWD=$(echo "$INPUT" | python3 -c "import sys,json;print(json.load(sys.stdin).get('cwd',''))" 2>/dev/null || echo "")
if [ -z "$CWD" ]; then
    CWD=$(pwd)
fi
# Claude Code keeps per-project memory under ~/.claude/projects/<path-with-dashes>/
PROJECT_HASH=$(echo "$CWD" | sed 's|/|-|g')
MEMORY_DIR="$HOME/.claude/projects/$PROJECT_HASH/memory"
CONTEXT=""
if [ -f "$MEMORY_DIR/MEMORY.md" ]; then
    # JSON-escape backslashes, quotes, and newlines before embedding in the response.
    SUMMARY=$(head -100 "$MEMORY_DIR/MEMORY.md" | python3 -c "import sys;print(sys.stdin.read().replace('\\\\','\\\\\\\\').replace('\"','\\\\\"').replace('\n','\\\\n'))" 2>/dev/null)
    CONTEXT="The context has been compacted.\\n\\n[Session summary]\\n${SUMMARY}"
fi
if [ -f "$MEMORY_DIR/project-snapshot.md" ]; then
    SNAP=$(head -50 "$MEMORY_DIR/project-snapshot.md" | python3 -c "import sys;print(sys.stdin.read().replace('\\\\','\\\\\\\\').replace('\"','\\\\\"').replace('\n','\\\\n'))" 2>/dev/null)
    CONTEXT="${CONTEXT}\\n\\n[Latest project state]\\n${SNAP}"
fi
if [ -n "$CONTEXT" ]; then
    CONTEXT="${CONTEXT}\\n\\nContinue the work with the above in mind. See the files in the memory/ directory for details."
    echo "{\"hookSpecificOutput\":{\"additionalContext\":\"${CONTEXT}\"}}"
else
    echo "{\"hookSpecificOutput\":{\"additionalContext\":\"The context has been compacted. No memory files were found, so ask the user about the previous work.\"}}"
fi
```
- `.claude/scripts/on-commit.sh`:
```bash
#!/bin/bash
# PostToolUse hook: after a Bash tool call, nudge Claude to update docs when a commit is detected.
INPUT=$(cat)
COMMAND=$(echo "$INPUT" | python3 -c "import sys,json;print(json.load(sys.stdin).get('tool_input',{}).get('command',''))" 2>/dev/null || echo "")
if echo "$COMMAND" | grep -qE 'git commit'; then
    cat <<RESP
{
  "hookSpecificOutput": {
    "additionalContext": "A commit was detected. Do the following:\n1. Add the change to docs/CHANGELOG.md\n2. Update the affected parts of memory/project-snapshot.md\n3. Append this change to memory/project-history.md\n4. Update memory/api-types.md if an API interface changed\n5. If the project has lint configured, check the lint results and fix any issues"
  }
}
RESP
else
    echo '{}'
fi
```
If `.claude/settings.json` has no hooks section, add one (merge it into the existing settings.json):
```json
{
  "hooks": {
    "SessionStart": [
      {
        "matcher": "compact",
        "hooks": [
          {
            "type": "command",
            "command": "bash .claude/scripts/on-post-compact.sh",
            "timeout": 10
          }
        ]
      }
    ],
    "PreCompact": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "bash .claude/scripts/on-pre-compact.sh",
            "timeout": 30
          }
        ]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "bash .claude/scripts/on-commit.sh",
            "timeout": 15
          }
        ]
      }
    ]
  }
}
```
### 5. Configure Git Hooks
```bash
git config core.hooksPath .githooks
```
Make the `.githooks/` directory executable:
```bash
chmod +x .githooks/*
```
### 6. Per-type Extras
#### java-maven
- Create `.sdkmanrc` (java=17.0.18-amzn, or the version the project needs)
- Verify the Nexus mirror settings in `.mvn/settings.xml`
- Verify that `mvn compile` succeeds
#### java-gradle
- Create `.sdkmanrc`
- Verify the Nexus settings in `gradle.properties.example`
- Verify that `./gradlew compileJava` succeeds
#### react-ts
- Create `.node-version` (the Node version the project needs)
- Verify the Nexus registry settings in `.npmrc`
- Verify that `npm install && npm run build` succeeds
### 7. Check .gitignore
Verify that the following entries are in .gitignore, adding any that are missing:
```
.claude/settings.local.json
.claude/CLAUDE.local.md
.env
.env.*
*.local
```
### 8. Configure Git exclude
Read `.git/info/exclude` and append at the bottom, preserving the existing content:
```gitignore
# Claude Code workflow (local only)
docs/CHANGELOG.md
*.tmp
```
### 9. Initialize Memory
Locate the project memory directory (usually `~/.claude/projects/<project-hash>/memory/`) and create:
- `memory/MEMORY.md`: core summary based on the project analysis (max 200 lines)
- current status, project overview, tech stack, main package layout, links to details
- `memory/project-snapshot.md`: directory layout, packages, main dependencies, API endpoints
- `memory/project-history.md`: starting with an "initial team workflow setup" entry
- `memory/api-types.md`: summary of the main interface/DTO/Entity types
- `memory/decisions.md`: empty template (# Decision log)
- `memory/debugging.md`: empty template (# Debugging experience & patterns)
### 10. Check Lint Tools
- TypeScript: check whether eslint and prettier are installed; offer to install them if not
- Java: check for checkstyle, spotless, and similar configuration
- Verify the lint command is already recorded in CLAUDE.md
### 11. Generate workflow-version.json
Query the latest team workflow version via the Gitea API:
```bash
curl -sf --max-time 5 "https://gitea.gc-si.dev/gc/template-common/raw/branch/develop/workflow-version.json"
```
Use the returned `version` value on success; fall back to "1.0.0" on failure.
Create `.claude/workflow-version.json`:
```json
{
  "applied_global_version": "<fetched version>",
  "applied_date": "<today's date>",
  "project_type": "<detected type>",
  "gitea_url": "https://gitea.gc-si.dev"
}
```
### 12. Verify and Summarize
- List the created/modified files
- Check `git config core.hooksPath`
- Verify the build command runs
- Verify the hook scripts are executable
- Explain the next steps:
- how to start developing and make the first commit
- general-purpose skills: `/api-registry`, `/changelog`, `/swagger-spec`

@@ -0,0 +1,84 @@
---
name: sync-team-workflow
description: Syncs the team's global workflow into the current project
allowed-tools: "Bash, Read, Write, Edit, Glob, Grep"
---
Applies the latest version of the team's global workflow to the current project.
## Procedure
### 1. Fetch the Global Version
Fetch workflow-version.json from the template-common repo via the Gitea API:
```bash
GITEA_URL=$(python3 -c "import json; print(json.load(open('.claude/workflow-version.json')).get('gitea_url', 'https://gitea.gc-si.dev'))" 2>/dev/null || echo "https://gitea.gc-si.dev")
curl -sf "${GITEA_URL}/gc/template-common/raw/branch/develop/workflow-version.json"
```
### 2. Compare Versions
Compare against the `applied_global_version` field in the local `.claude/workflow-version.json`:
- Versions match → report "already up to date" and stop
- Versions differ → extract and show the unapplied changes
### 3. Detect the Project Type
Auto-detection order:
1. The `project_type` field in `.claude/workflow-version.json`
2. Otherwise: `pom.xml` → java-maven, `build.gradle` → java-gradle, `package.json` → react-ts
### 4. Download and Apply Files
Download the common + type-specific template files via the Gitea API:
#### 4-1. Rule Files (overwrite)
Team rules cannot be modified locally; always replace them with the latest global versions:
```
.claude/rules/team-policy.md
.claude/rules/git-workflow.md
.claude/rules/code-style.md (per type)
.claude/rules/naming.md (per type)
.claude/rules/testing.md (per type)
```
#### 4-2. settings.json (partial update)
- `deny` list: replace with the latest global version
- `allow` list: keep the user's customizations and merge in the global defaults
- `hooks`: replace with the latest global version
#### 4-3. Skill Files (overwrite)
```
.claude/skills/create-mr/SKILL.md
.claude/skills/fix-issue/SKILL.md
.claude/skills/sync-team-workflow/SKILL.md
.claude/skills/init-project/SKILL.md
```
#### 4-4. Git Hooks (overwrite + executable bit)
```bash
chmod +x .githooks/*
```
#### 4-5. Refresh Hook Scripts
Extract the latest scripts from the code blocks in the init-project SKILL.md and overwrite:
```
.claude/scripts/on-pre-compact.sh
.claude/scripts/on-post-compact.sh
.claude/scripts/on-commit.sh
```
Make them executable: `chmod +x .claude/scripts/*.sh`
### 5. Update the Local Version
Update `.claude/workflow-version.json`:
```json
{
  "applied_global_version": "<new version>",
  "applied_date": "<today's date>",
  "project_type": "<detected type>",
  "gitea_url": "https://gitea.gc-si.dev"
}
```
### 6. Report Changes
- Inspect the changes with `git diff`
- List the updated files
- Show the changelog (the `changes` field of the global workflow-version.json)
- Note any follow-up actions (verify the build, update dependencies, etc.)

@@ -0,0 +1,6 @@
{
  "applied_global_version": "1.2.0",
  "applied_date": "2026-02-18",
  "project_type": "java-maven",
  "gitea_url": "https://gitea.gc-si.dev"
}

.editorconfig Normal file
@@ -0,0 +1,33 @@
root = true
[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
[*.{java,kt}]
indent_style = space
indent_size = 4
[*.{js,jsx,ts,tsx,json,yml,yaml,css,scss,html}]
indent_style = space
indent_size = 2
[*.md]
trim_trailing_whitespace = false
[*.{sh,bash}]
indent_style = space
indent_size = 4
[Makefile]
indent_style = tab
[*.{gradle,groovy}]
indent_style = space
indent_size = 4
[*.xml]
indent_style = space
indent_size = 4

.githooks/commit-msg Normal file
@@ -0,0 +1,71 @@
#!/bin/bash
#==============================================================================
# commit-msg hook
# Validates the Conventional Commits format (mixed Korean/English supported)
#==============================================================================
COMMIT_MSG_FILE="$1"
COMMIT_MSG=$(cat "$COMMIT_MSG_FILE")

# Skip validation for merge commits
if echo "$COMMIT_MSG" | head -1 | grep -qE "^Merge "; then
    exit 0
fi

# Skip validation for revert commits
if echo "$COMMIT_MSG" | head -1 | grep -qE "^Revert "; then
    exit 0
fi

# Conventional Commits regex
# type(scope): subject
# - type: feat|fix|docs|style|refactor|test|chore|ci|perf (required)
# - scope: any characters except parentheses; Korean/English/digits/symbols (optional)
# - subject: at least 1 character (length is checked separately, in bytes)
PATTERN='^(feat|fix|docs|style|refactor|test|chore|ci|perf)(\([^)]+\))?: .+$'
MAX_SUBJECT_BYTES=200 # allows UTF-8 Korean (3 bytes/char): 72 chars is up to ~216 bytes

FIRST_LINE=$(head -1 "$COMMIT_MSG_FILE")
if ! echo "$FIRST_LINE" | grep -qE "$PATTERN"; then
    echo ""
    echo "╔══════════════════════════════════════════════════════════════╗"
    echo "║ Commit message does not match the Conventional Commits format ║"
    echo "╚══════════════════════════════════════════════════════════════╝"
    echo ""
    echo " Expected format: type(scope): subject"
    echo ""
    echo " type (required):"
    echo "   feat     - new feature"
    echo "   fix      - bug fix"
    echo "   docs     - documentation change"
    echo "   style    - code formatting"
    echo "   refactor - refactoring"
    echo "   test     - tests"
    echo "   chore    - build/config change"
    echo "   ci       - CI/CD change"
    echo "   perf     - performance improvement"
    echo ""
    echo " scope (optional): Korean or English"
    echo " subject (required): 1-72 chars, Korean or English"
    echo ""
    echo " Examples:"
    echo "   feat(auth): JWT 기반 로그인 구현"
    echo "   fix(배치): 야간 배치 타임아웃 수정"
    echo "   docs: README 업데이트"
    echo "   chore: Gradle 의존성 업데이트"
    echo ""
    echo " Current message: $FIRST_LINE"
    echo ""
    exit 1
fi

# Length check (in bytes, so UTF-8 Korean is allowed)
MSG_LEN=$(echo -n "$FIRST_LINE" | wc -c | tr -d ' ')
if [ "$MSG_LEN" -gt "$MAX_SUBJECT_BYTES" ]; then
    echo ""
    echo " ✗ Commit message is too long (${MSG_LEN} bytes, max ${MAX_SUBJECT_BYTES})"
    echo " Current message: $FIRST_LINE"
    echo ""
    exit 1
fi

.githooks/post-checkout Normal file
@@ -0,0 +1,25 @@
#!/bin/bash
#==============================================================================
# post-checkout hook
# Auto-configures core.hooksPath on branch checkout
# After clone/checkout, sets hooksPath automatically if .githooks exists
#==============================================================================
# post-checkout parameters: prev_HEAD, new_HEAD, branch_flag
# branch_flag=1: branch checkout, 0: file checkout
BRANCH_FLAG="$3"

# Skip file checkouts
if [ "$BRANCH_FLAG" = "0" ]; then
    exit 0
fi

# Check for the .githooks directory
REPO_ROOT=$(git rev-parse --show-toplevel 2>/dev/null)
if [ -d "${REPO_ROOT}/.githooks" ]; then
    CURRENT_HOOKS_PATH=$(git config core.hooksPath 2>/dev/null || echo "")
    if [ "$CURRENT_HOOKS_PATH" != ".githooks" ]; then
        git config core.hooksPath .githooks
        chmod +x "${REPO_ROOT}/.githooks/"* 2>/dev/null
    fi
fi

.githooks/pre-commit Normal file
@@ -0,0 +1,33 @@
#!/bin/bash
#==============================================================================
# pre-commit hook (Java Maven)
# Maven compile check: blocks the commit if compilation fails
#==============================================================================
echo "pre-commit: running Maven compile check..."

# Prefer the Maven wrapper; fall back to mvn
if [ -f "./mvnw" ]; then
    MVN="./mvnw"
elif command -v mvn &>/dev/null; then
    MVN="mvn"
else
    echo "Warning: Maven is not installed. Skipping the compile check."
    exit 0
fi

# Compile check (tests skipped; can run offline)
$MVN compile -q -DskipTests 2>&1
RESULT=$?
if [ $RESULT -ne 0 ]; then
    echo ""
    echo "╔══════════════════════════════════════════════════════════╗"
    echo "║ Compilation failed! The commit has been blocked.         ║"
    echo "║ Fix the compile errors and commit again.                 ║"
    echo "╚══════════════════════════════════════════════════════════╝"
    echo ""
    exit 1
fi
echo "pre-commit: compile OK"

.gitignore vendored
@@ -34,7 +34,8 @@ application-local.properties
 .env.*
 secrets/
-# Claude Code (local only)
+# Claude Code (track team files; exclude only local files)
+!.claude/
 .claude/settings.local.json
 .claude/CLAUDE.local.md

.mvn/settings.xml Normal file
@@ -0,0 +1,60 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
    Project-level Maven settings
    Dependencies are resolved through the Nexus proxy repository.
    Usage: ./mvnw -s .mvn/settings.xml clean compile
    Or via the environment (Maven 3.9+): export MAVEN_ARGS="-s .mvn/settings.xml"
    Nexus server: https://nexus.gc-si.dev
    - maven-public: Maven Central + Spring + internal libraries, combined
-->
<settings xmlns="http://maven.apache.org/SETTINGS/1.2.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.2.0
                              https://maven.apache.org/xsd/settings-1.2.0.xsd">
    <servers>
        <server>
            <id>nexus</id>
            <username>admin</username>
            <password>Gcsc!8932</password>
        </server>
    </servers>
    <mirrors>
        <mirror>
            <id>nexus</id>
            <name>GCSC Nexus Repository</name>
            <url>https://nexus.gc-si.dev/repository/maven-public/</url>
            <mirrorOf>*</mirrorOf>
        </mirror>
    </mirrors>
    <profiles>
        <profile>
            <id>nexus</id>
            <repositories>
                <!-- Placeholder URL; the mirrorOf=* mirror above intercepts all requests. -->
                <repository>
                    <id>central</id>
                    <url>http://central</url>
                    <releases><enabled>true</enabled></releases>
                    <snapshots><enabled>true</enabled></snapshots>
                </repository>
            </repositories>
            <pluginRepositories>
                <pluginRepository>
                    <id>central</id>
                    <url>http://central</url>
                    <releases><enabled>true</enabled></releases>
                    <snapshots><enabled>true</enabled></snapshots>
                </pluginRepository>
            </pluginRepositories>
        </profile>
    </profiles>
    <activeProfiles>
        <activeProfile>nexus</activeProfile>
    </activeProfiles>
</settings>

.sdkmanrc Normal file
@@ -0,0 +1,3 @@
# Enable auto-env through SDKMAN config
# Run 'sdk env' in this directory to switch versions
java=17.0.18-amzn

CLAUDE.md Normal file
@@ -0,0 +1,199 @@
# Signal Batch - Vessel Signal Batch Aggregation System
## Build and Run
```bash
# Build (Maven)
mvn clean package -DskipTests

# Run per profile
java -jar target/vessel-batch-aggregation.jar --spring.profiles.active=prod
java -jar target/vessel-batch-aggregation.jar --spring.profiles.active=prod-mpr
java -jar target/vessel-batch-aggregation.jar --spring.profiles.active=dev
java -jar target/vessel-batch-aggregation.jar --spring.profiles.active=local
java -jar target/vessel-batch-aggregation.jar --spring.profiles.active=query
```
## Project Overview
- **Description**: real-time collection and batch aggregation of vessel tracks
- **Java**: 17
- **Spring Boot**: 3.2.5
- **DB**: PostgreSQL + PostGIS
- **Build tool**: Maven (pom.xml)
## Profiles
| Profile | Purpose | Batch | DataSource | Port |
|---------|---------|-------|------------|------|
| **prod** | production | enabled | 3 separate | 18090 |
| **prod-mpr** | production (MPR) | enabled | 3 separate | 18090 |
| **dev** | development | enabled | 3 separate | 18090 |
| **local** | local development | disabled | single | 8090 |
| **query** | query-only | disabled | single | 8090 |
## Core Package Layout
```
gc.mda.signal_batch/
├── batch/ # batch processing (Job, Processor, Reader, Writer)
│ ├── job/ # Job configs and schedulers
│ ├── reader/ # ItemReaders (partitioned, in-memory)
│ ├── processor/ # ItemProcessors (track conversion, anomaly detection)
│ ├── writer/ # ItemWriters (bulk insert, upsert)
│ └── listener/ # batch listeners
├── domain/
│ ├── gis/ # GIS APIs (haegu sea-grid zones, areas, tiles)
│ ├── vessel/ # vessel track/position queries, filtering
│ ├── track/ # abnormal-track detection API
│ ├── passage/ # sequential area-passage queries
│ ├── ship/ # ship image API
│ └── debug/ # debug API
├── global/
│ ├── config/ # DataSource, WebSocket, Batch configuration
│ ├── util/ # shared utilities
│ ├── websocket/ # WebSocket STOMP streaming
│ └── tool/ # batch diagnostics tools
├── migration/ # data migration
└── monitoring/ # monitoring, metrics, performance optimization
```
## DataSources (3)
1. **CollectDataSource**: raw signal collection (read-only)
2. **QueryDataSource**: aggregated data reads/writes
3. **BatchDataSource**: Spring Batch metadata
Config classes:
- `DevDataSourceConfig.java` (dev)
- `ProdDataSourceConfig.java` (prod/prod-mpr)
- `LocalDataSourceConfig.java` (local)
- `QueryDataSourceConfig.java` (query)
## Main API Endpoints
### REST API V1 (WKT responses)
| Method | Path | Description |
|--------|------|-------------|
| GET | `/api/v1/haegu/boundaries` | haegu zone boundaries |
| GET | `/api/v1/haegu/vessel-stats` | per-zone vessel statistics |
| GET | `/api/v1/tracks/haegu/{no}` | tracks by haegu zone |
| GET | `/api/v1/tracks/area/{areaId}` | tracks by area |
| POST | `/api/v1/tracks/vessels` | tracks by vessel (bulk) |
| GET | `/api/v1/vessels/recent-positions` | vessels with recent position updates |
### REST API V2 (JSON/CompactVesselTrack)
| Method | Path | Description |
|--------|------|-------------|
| POST | `/api/v2/tracks/vessels` | per-vessel tracks (JSON array response) |
| GET | `/api/v2/tracks/haegu/{no}` | tracks by haegu zone (JSON) |
### Abnormal Track API
| Method | Path | Description |
|--------|------|-------------|
| GET | `/api/v1/abnormal-tracks/recent` | recent abnormal tracks |
| GET | `/api/v1/abnormal-tracks/vessel/{sigSrcCd}/{targetId}` | abnormal history of a specific vessel |
| GET | `/api/v1/abnormal-tracks/statistics` | abnormal-track statistics |
| POST | `/api/v1/abnormal-tracks/detect` | detection with user-defined criteria |
### Other APIs
| Method | Path | Description |
|--------|------|-------------|
| POST | `/api/v1/passages/sequential` | sequential area-passage query |
| GET | `/api/v1/shipimg/{imo}` | ship image lookup |
| GET | `/api/v1/tiles/{z}/{x}/{y}` | tile aggregation data |
### WebSocket (STOMP)
- **Endpoints**: `/ws-tracks` (native), `/ws-tracks` + SockJS
- **Query request**: `/app/tracks/query`
- **Query cancel**: `/app/tracks/cancel/{queryId}`
- **Response**: `/user/queue/tracks/response`
- **Chunk data**: `/user/queue/tracks/chunk`
- **Status updates**: `/user/queue/tracks/status`
## Batch Jobs
| Job | Cron | Delay | Role |
|-----|------|-------|------|
| **incremental** | every 5 min (3,8,13...) | 3 min | collection DB → position aggregation |
| **track** | every 5 min (4,9,14...) | 4 min | positions → track (LineStringM) conversion |
| **hourly** | 10 past every hour | - | 5-min → hourly aggregation, anomaly detection |
| **daily** | daily 01:00 | - | hourly → daily aggregation, anomaly detection |
### Batch Flow
```
CollectDB (signals)
→ incremental job (5-minute bucket aggregation)
→ track job (LineStringM track generation)
→ hourly job (hourly merge + anomaly detection)
→ daily job (daily merge + anomaly detection)
```
## Code Style
- Lombok is used (@Data, @Builder, @Slf4j)
- JdbcTemplate used directly (no JPA)
- PostGIS spatial queries
- Chunk-based batch processing (default 10,000 rows)
- UPSERT/bulk-insert optimizations
## Main DTOs
- `CompactVesselTrack`: WebSocket/REST V2 response (geometry[], timestamps[], speeds[], nationalCode, shipKindCode)
- `TrackResponse`: REST V1 response (WKT-based)
- `VesselTracksRequest`: track query request
- `AbnormalTrackResponse`: abnormal-track response
- `SequentialPassageRequest/Response`: sequential passage query
## Recent Work
### 2026-01-20
- Added the V2 REST API (compatible with the WebSocket response)
- `GisControllerV2.java`, `GisServiceV2.java`
- Extended CompactVesselTrack: added nationalCode, shipKindCode, integrationTargetId
- Implemented WebSocket chunked streaming (`ChunkedTrackStreamingService`)
- Added a scheduler that refreshes the latest-vessel-position cache
- More lenient DateTime parsing (`FlexibleLocalDateTimeDeserializer`)
- Organized Swagger into 9 API groups
## Caveats
- Build with `mvn` (not Gradle)
- DataSource configuration differs per profile
- WebSocket uses the STOMP protocol
- LineStringM format: `LINESTRING M(lon lat unixTimestamp, ...)`
- Abnormal-track detection runs automatically during hourly/daily aggregation
## Performance Settings (prod)
```yaml
batch:
  chunk-size: 10000
  partition-size: 12
  fetch-size: 200000
bulk-insert:
  batch-size: 10000
  parallel-threads: 8
cache:
  latest-position:
    ttl-minutes: 60
    max-size: 60000
abnormal-detection:
  5min-speed-threshold: 500 knots
  hourly-daily-speed-limit: 500 knots
```
## Swagger Docs
- **Access**: `http://localhost:{port}/swagger-ui.html`
- **API groups**: track queries, abnormal tracks, tiles, ship images, performance optimization, admin, monitoring, migration, debug
## Team Rules
- Code style: see `.claude/rules/code-style.md`
- Naming: see `.claude/rules/naming.md`
- Testing: see `.claude/rules/testing.md`
- Git workflow: see `.claude/rules/git-workflow.md`
- Team policy: see `.claude/rules/team-policy.md`

@@ -0,0 +1,314 @@
# Daily Cache Performance Benchmark Report
## Vessel Track Replay Service: Quantitative Cache vs. DB Comparison
| Item | Detail |
|------|--------|
| Measurement date | 2026-02-07 |
| Target system | Signal Batch, ChunkedTrackStreamingService (WebSocket streaming) |
| Environment | prod profile, Query DB connection pool of 180 |
| Cache setup | DailyTrackCacheManager: D-1 to D-7 in-memory cache, STRtree spatial index |
| Method | QueryBenchmark inner class → JSON records in `cache-benchmark.log` |
| Samples | 12 (CACHE 3, DB 2, HYBRID 5, CACHE+Today 2) |
---
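For orientation, a minimal sketch of the kind of STRtree viewport lookup the cache layer performs (JTS API; the cached-track type and the envelope handling are illustrative assumptions, not the actual DailyTrackCacheManager code):
```java
import java.util.List;
import org.locationtech.jts.geom.Envelope;
import org.locationtech.jts.index.strtree.STRtree;

class TrackSpatialIndex {

    private final STRtree index = new STRtree();

    // Index each cached track by its bounding envelope.
    void add(Envelope trackEnvelope, Object cachedTrack) {
        index.insert(trackEnvelope, cachedTrack);
    }

    // Viewport query: candidate tracks whose envelopes intersect the view.
    @SuppressWarnings("unchecked")
    List<Object> queryViewport(double minLon, double minLat, double maxLon, double maxLat) {
        index.build(); // builds lazily; no further inserts are allowed afterwards
        return index.query(new Envelope(minLon, maxLon, minLat, maxLat));
    }
}
```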
## 1. Measurement Path Classification
Queries are handled via one of four paths, depending on their time range.
| Path | Description | Data source |
|------|-------------|-------------|
| **CACHE** | every requested day is in the in-memory cache | memory |
| **DB** | cache miss: direct query against the Daily tables | DB |
| **HYBRID** | cache-hit days plus DB queries for days outside the cache window | memory + DB |
| **CACHE+Today** | cache hits plus today's data (Hourly/5min tables) | memory + DB |
### Structure of the "today" segment
Today's (D-0) data is not cached; it is split across two tables by elapsed time.
```
today 00:00 ~ 12:00 12:00 ~ 12:35 now (12:40)
├──── Hourly table queries ──────┤── 5min queries ──┤
(12 ranges, 1-hour buckets) (7 ranges, 5-min buckets)
```
- **Hourly**: midnight until about one hour ago → hourly ranges (~12)
- **5min**: roughly the last hour → 5-minute ranges (~7)
- Each range costs one DB connection plus one Viewport Pass1 → today-segment connections = number of ranges × 2
---
## 2. Full Measurement Data
### 2.1 Summary Table
| # | Path | Zoom | Days | Cache/DB | Vessels | Tracks | Response (ms) | DB conns | DB query time (ms) |
|---|------|------|------|----------|---------|--------|---------------|----------|--------------------|
| 1 | CACHE | 10 | 3 | 3/0 | 443 | 986 | **575** | 3 | 0 |
| 2 | DB | 10 | 2 | 0/2 | 352 | 587 | **7,221** | 8 | 3,475 |
| 3 | DB | 10 | 2 | 0/2 | 12,253 | 18,502 | **8,195** | 19 | 1,443 |
| 4 | CACHE | 10 | 2 | 2/0 | 10,690 | 16,942 | **1,439** | 2 | 0 |
| 5 | CACHE | 10 | 2 | 2/0 | 10,690 | 16,942 | **1,374** | 2 | 0 |
| 6 | HYBRID | 8 | 5 | 3/2 | 9,958 | 29,362 | **8,900** | 16 | 3,301 |
| 7 | HYBRID | 9 | 5 | 3/2 | 547 | 1,927 | **1,373** | 11 | 550 |
| 8 | HYBRID | 8 | 5 | 3/2 | 4,589 | 12,422 | **2,910** | 12 | 715 |
| 9 | HYBRID | 8 | 5 | 3/2 | 5,760 | 23,283 | **3,651** | 15 | 1,048 |
| 10 | CACHE+Today | 10 | 3+today | 3/0 | 105 | 301 | **6,091** | 56 | 0 |
| 11 | HYBRID | 8 | 5 | 3/2 | 52,151 | 162,849 | **105,212** | 45 | 93,319 |
| 12 | CACHE+Today | 12 | 3+today | 3/0 | 6,990 | 17,024 | **9,744** | 56 | 0 |
### 2.2 DB Connection Breakdown
| # | Path | Total | Viewport Pass1 | Daily Pages | Hourly/5min | TableCheck |
|---|------|-------|----------------|-------------|-------------|------------|
| 1 | CACHE | 3 | 0 | 0 | 0 | **3** |
| 2 | DB | 8 | 2 | 2 | 0 | 2 |
| 3 | DB | 19 | 2 | 2 | 0 | 2 |
| 4 | CACHE | 2 | 0 | 0 | 0 | **2** |
| 5 | CACHE | 2 | 0 | 0 | 0 | **2** |
| 6 | HYBRID | 16 | 2 | 2 | 0 | 5 |
| 7 | HYBRID | 11 | 2 | 2 | 0 | 5 |
| 8 | HYBRID | 12 | 2 | 2 | 0 | 5 |
| 9 | HYBRID | 15 | 2 | 2 | 0 | 5 |
| 10 | CACHE+Today | 56 | **21** | 0 | **21** | **14** |
| 11 | HYBRID | 45 | 2 | **6** | 0 | 5 |
| 12 | CACHE+Today | 56 | **21** | 0 | **21** | **14** |
> Consistency check: in all 12 samples the breakdown counters sum to the total (including a VesselInfo counter omitted from the table).
**Breakdown of the 56 connections in CACHE+Today (#10, #12)**:
- Hourly/5min, 21: today's 00:00-to-now ranges (about 12 Hourly + about 7 5min + fallbacks)
- Viewport Pass1, 21: viewport-intersecting vessel collection over the same ranges (one per range)
- TableCheck, 14: 3 Daily + about 11 Hourly/5min existence checks
### 2.3 Simplification Metrics on Cache Paths
On cache paths the raw data is held in memory, so simplification can be measured before and after.
| # | Path | Zoom | Raw points | Simplified | Compression | Simplify time (ms) | Batch reduction |
|---|------|------|------------|------------|-------------|--------------------|-----------------|
| 1 | CACHE | 10 | 1,083,566 | 11,212 | 99% | 133 | 50→3 (94%) |
| 4 | CACHE | 10 | 13,502,970 | 172,066 | 99% | 1,075 | 602→10 (98%) |
| 5 | CACHE | 10 | 13,502,970 | 172,066 | 99% | 981 | 602→10 (98%) |
| 6 | HYBRID | 8 | 7,582,515 | 152,734 | 98% | 500 | 335→12 (96%) |
| 7 | HYBRID | 9 | 1,049,434 | 11,634 | 99% | 74 | 50→5 (90%) |
| 8 | HYBRID | 8 | 1,618,310 | 61,434 | 96% | 125 | 72→5 (93%) |
| 9 | HYBRID | 8 | 3,202,500 | 155,633 | 95% | 277 | 137→12 (91%) |
| 10 | CACHE+Today | 10 | 355,256 | 4,159 | 99% | 24 | 17→6 (65%) |
| 11 | HYBRID | 8 | 41,634,918 | 732,470 | 98% | 2,411 | 1,813→42 (98%) |
| 12 | CACHE+Today | 12 | 14,404,225 | 259,541 | 98% | 1,258 | 639→23 (96%) |
> DB paths (#2, #3) receive data already simplified in SQL via `ST_Simplify`, so an app-level compression ratio cannot be computed (before = after).
---
## 3. Per-path Quantitative Comparison
### 3.1 CACHE vs. DB: like-for-like comparisons
#### Large scale: #4 CACHE vs. #3 DB
| Metric | DB (#3) | CACHE (#4) | Improvement |
|--------|---------|------------|-------------|
| Vessels | 12,253 | 10,690 | (similar scale) |
| **Response time** | 8,195 ms | 1,439 ms | **5.7× faster** |
| **DB connections** | 19 | 2 | **89% fewer** |
| DB query time | 1,443 ms | 0 ms | **100% saved** |
| Batches sent | 11 | 10 | similar |
#### Small scale: #2 DB vs. #1 CACHE
| Metric | DB (#2) | CACHE (#1) | Improvement |
|--------|---------|------------|-------------|
| Vessels | 352 | 443 | (similar scale) |
| **Response time** | 7,221 ms | 575 ms | **12.6× faster** |
| **DB connections** | 8 | 3 | **63% fewer** |
| DB query time | 3,475 ms | 0 ms | **100% saved** |
| Batches sent | 2 | 3 | similar |
### 3.2 HYBRID Path: performance by scale
5-day-range queries (3 cached days + 2 DB days):
| # | Vessels | Response | DB conns | DB query time |
|---|---------|----------|----------|---------------|
| 7 | 547 | 1,373 ms | 11 | 550 ms |
| 8 | 4,589 | 2,910 ms | 12 | 715 ms |
| 9 | 5,760 | 3,651 ms | 15 | 1,048 ms |
| 6 | 9,958 | 8,900 ms | 16 | 3,301 ms |
| 11 | 52,151 | 105,212 ms | 45 | 93,319 ms |
- Small (~500 vessels): cached days absorb most of the work; responses around **1.4 s**.
- Medium (5K-10K vessels): DB load grows but cached days cushion it; **3-9 s**.
- Large (52K vessels): when cache-miss days carry large volumes, DB dependence dominates; **100 s+**.
- The more days covered by the cache (currently 3/5 = 60%), the lighter the HYBRID path's DB load.
### 3.3 CACHE+Today Path: queries including today
| # | Zoom | Vessels | Response | DB conns | Today-segment conns |
|---|------|---------|----------|----------|---------------------|
| 10 | 10 | 105 | 6,091 ms | 56 | 42 (H5m 21 + VP 21) |
| 12 | 12 | 6,990 | 9,744 ms | 56 | 42 (H5m 21 + VP 21) |
**Key findings**:
- Both queries cover the same time range (3 days + today), so the connection structure is identical; only the viewport size differs.
- The today segment (00:00 to now) alone produces **42 DB connections**, a large gap versus the pure CACHE path (2-3).
- Even #10 with only 105 vessels takes 6 seconds, driven by the per-range connection overhead of the today segment.
### 3.4 Simplification by Zoom Level
| Zoom | Sample # | Raw points | Simplified | Compression | Avg points per vessel |
|------|----------|------------|------------|-------------|-----------------------|
| 8 | #6 | 7,582,515 | 152,734 | 98% | 15.3 |
| 9 | #7 | 1,049,434 | 11,634 | 99% | 21.3 |
| 10 | #4 | 13,502,970 | 172,066 | 99% | 16.1 |
| 12 | #12 | 14,404,225 | 259,541 | 98% | 37.1 |
- Zoom 8-10: 15-21 points per vessel, ideal for sea-area-level views.
- Zoom 12: 37 points per vessel, keeping more detail for port-level views.
- 95-99% compression across all zoom levels.
---
## 4. DB Connection Composition
### 4.1 Connection patterns per path
```
CACHE (pure) [==TC==] 2-3 conns
TableCheck only
DB (pure) [VP][DA][..others..][TC] 8-19 conns
roughly even split
HYBRID [VP][DA][..others..........][TC---] 11-45 conns
grows with scale
CACHE+Today [VP----------][H5m---------][TC------] 56 conns
today's Hourly/5min + Viewport dominate
```
### 4.2 Connection Pool Impact
Against the Query DataSource pool of 180:
| Path | Conns per query | 10 concurrent queries | Pool pressure |
|------|-----------------|-----------------------|---------------|
| CACHE | 2-3 | 30 | very low (17%) |
| HYBRID (small) | 11-15 | 150 | moderate (83%) |
| DB | 8-19 | 190 | moderate to high |
| CACHE+Today | 56 | 560 | high |
> Connections are used sequentially rather than held simultaneously, so actual concurrent occupancy is lower than these figures. As the share of CACHE-path queries grows, overall pool pressure drops sharply.
---
## 5. Overall Comparison
### 5.1 Headline improvements
| Metric | DB path | CACHE path | Improvement |
|--------|---------|------------|-------------|
| Response time (large, 10K+ vessels) | 8,195 ms | 1,439 ms | **5.7×** |
| Response time (small, hundreds) | 7,221 ms | 575 ms | **12.6×** |
| DB connections (large) | 19 | 2 | **89% fewer** |
| DB connections (small) | 8 | 3 | **63% fewer** |
| DB query time | 1,443-3,475 ms | 0 ms | **100% saved** |
| Point simplification | SQL ST_Simplify | app-level 95-99% | measurable only on cache paths |
### 5.2 Response-time distribution by path
```
Response time (ms, linear scale)
Path 0 2,000 4,000 6,000 8,000 10,000
CACHE (pure) |█| 575-1,439
HYBRID (small) |██| 1,373
HYBRID (medium) |█████| 2,910-3,651
CACHE+Today |████████████| 6,091-9,744
DB (pure) |████████████████| 7,221-8,195
HYBRID (large) |██████████████████| 8,900
```
> The largest HYBRID sample (#11, 52K vessels, 105 s) is off the scale and omitted.
### 5.3 Predicted performance by usage scenario
With the D-1 to D-7 cache in place:
| Usage pattern | Expected path | Expected response | DB conns |
|---------------|---------------|-------------------|----------|
| only the past 1-7 days | CACHE | **0.5-1.5 s** | 2-3 |
| several past days + today | CACHE+Today | 6-10 s | ~56 |
| includes data older than 7 days | HYBRID / DB | 1-9 s (scale-dependent) | 8-45 |
---
## 6. Recommended Configuration for Extending the Cache Window
To extend the query window beyond the current D-1 to D-7 cache, the following configuration is recommended.
### 6.1 Current configuration
```yaml
cache:
  daily-track:
    enabled: true
    retention-days: 7 # cache D-1 to D-7
    max-memory-gb: 6 # memory ceiling
    warmup-async: true # asynchronous warm-up
```
- Past queries within 7 days: CACHE path (0.5-1.5 s)
- Past queries beyond 7 days: fall back to HYBRID/DB
### 6.2 Recommended extensions
| Scenario | retention-days | max-memory-gb | Expected effect |
|----------|----------------|---------------|-----------------|
| **current** | 7 | 6 | CACHE within one week, DB beyond |
| **2-week** | 14 | 12 | CACHE covers 2-week replays |
| **1-month** | 30 | 25 | CACHE covers monthly analysis queries |
**Considerations when extending**:
1. **Memory sizing**: the 7-day cache currently uses about 4 GB and growth is assumed roughly linear (about 8 GB at 14 days, about 17 GB at 30 days); the recommended max-memory-gb values (12 GB / 25 GB) add headroom on top of that estimate.
- Check server memory and JVM heap (`-Xmx`) headroom.
2. **Warm-up time**: initial load time grows with retention-days.
- 7 days: about 1-2 min; 14 days: about 2-4 min; 30 days: about 5-10 min (asynchronous, so no impact on service availability)
3. **Lower HYBRID share**: extending retention-days reduces DB fallbacks, shifting traffic from HYBRID to pure CACHE, which directly reduces connection-pool load.
4. **CACHE+Today is unaffected by retention-days**: today's (D-0) data is always read from the Hourly/5min tables. Optimizing that segment's connections is a separate task.
### 6.3 Staged rollout
```
Phase 1 (current) : retention-days=7, max-memory-gb=6 → 1-week coverage
Phase 2 (recommended): retention-days=14, max-memory-gb=12 → 2-week coverage, supports week-over-week analysis
Phase 3 (optional) : retention-days=30, max-memory-gb=25 → monthly coverage, supports long-range track analysis
```
At each step, monitor server memory headroom and warm-up time, and adjust the JVM heap accordingly.
---
## 7. Conclusions
### 7.1 Confirmed cache effects
1. **Response time**: pure CACHE paths measured **5.7-12.6×** faster than DB.
2. **DB connections**: pure CACHE paths measured **63-89%** fewer connections than DB.
3. **Simplification**: cache paths compress points by **95-99%** depending on zoom, and cut batch sends by **90-98%**.
4. **DB query time**: **0 ms** on CACHE paths; DB load fully removed.
### 7.2 Operational recommendations
| Item | Current | Recommendation |
|------|---------|----------------|
| Cache retention | 7 days | consider extending to 14-30 days based on usage patterns |
| CACHE+Today connections | per-range DB connections for the today segment (56) | consider merging today's ranges or adding a separate cache |

@@ -0,0 +1,102 @@
# Daily Cache Performance Improvement: Executive Summary
| Item | Detail |
|------|--------|
| Measurement date | 2026-02-07 |
| Target | vessel track replay service (WebSocket streaming) |
| Improvement | 7-day in-memory cache over daily aggregation data |
| Samples | 12 (CACHE 3, DB 2, HYBRID 5, CACHE+Today 2) |
---
## 1. Headline Improvements
| Metric | DB path (before) | CACHE path (after) | Improvement |
|--------|------------------|--------------------|-------------|
| **Response time** (10K+ vessels) | 8.2 s | 1.4 s | **5.7× faster** |
| **Response time** (hundreds of vessels) | 7.2 s | 0.6 s | **12.6× faster** |
| **DB connections** (10K+ vessels) | 19 | 2 | **89% fewer** |
| **DB connections** (hundreds) | 8 | 3 | **63% fewer** |
| **DB query time** | 1.4-3.5 s | 0 s | **100% saved** |
| **Point compression** | in SQL | app-level 95-99% | equal quality maintained |
---
## 2. Response Time by Path
```
Path Response time
CACHE (pure) ██ 0.6-1.4 s
HYBRID (small) ██ 1.4 s
HYBRID (medium) █████ 2.9-3.7 s
CACHE+Today ████████████ 6.1-9.7 s
DB (pure) ████████████████ 7.2-8.2 s
```
- **CACHE**: only past data within the cache window; fastest responses
- **HYBRID**: cache + DB merge; the higher the cache share, the lighter the DB load
- **CACHE+Today**: including today forces per-range Hourly/5min queries and many connections
---
## 3. Connection-Pool Pressure
Against the Query DataSource pool of 180:
| Path | Conns per query | 10 concurrent | Pool usage |
|------|-----------------|---------------|------------|
| CACHE | 2-3 | ~30 | **17%** (comfortable) |
| HYBRID (small) | 11-15 | ~150 | 83% |
| DB | 8-19 | ~190 | 100%+ |
> As CACHE-path queries become the majority, overall connection-pool pressure drops sharply.
---
## 4. Simplification Pipeline
Cache paths run a 3-stage simplification over the raw data (Douglas-Peucker + distance/time sampling + zoom-level sampling); a sketch of the first stage follows the table:
| Zoom | Raw points | Simplified | Compression | Avg per vessel |
|------|------------|------------|-------------|----------------|
| 8 | 7.6M | 153K | 98% | 15 points |
| 9 | 1.0M | 12K | 99% | 21 points |
| 10 | 13.5M | 172K | 99% | 16 points |
| 12 | 14.4M | 260K | 98% | 37 points |
- Simplification CPU time: 24 ms to 1,258 ms (pure CPU work, no DB waits)
- 95-99% data compression at every zoom level
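A minimal sketch of the Douglas-Peucker stage using the JTS simplifier (the tolerance value and the idea of deriving it from the zoom level are illustrative assumptions, not the service's actual parameters):
```java
import org.locationtech.jts.geom.Coordinate;
import org.locationtech.jts.geom.Geometry;
import org.locationtech.jts.geom.GeometryFactory;
import org.locationtech.jts.geom.LineString;
import org.locationtech.jts.simplify.DouglasPeuckerSimplifier;

class TrackSimplifier {

    private static final GeometryFactory GF = new GeometryFactory();

    // Drops points whose removal shifts the line by less than the tolerance (degrees here).
    static Geometry simplify(Coordinate[] trackPoints, int zoom) {
        LineString track = GF.createLineString(trackPoints);
        double tolerance = 0.01 / Math.pow(2, zoom - 8); // coarser at low zoom (assumed mapping)
        return DouglasPeuckerSimplifier.simplify(track, tolerance);
    }
}
```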
---
## 5. Expected Performance by Scenario
| Usage pattern | Expected path | Expected response | DB conns |
|---------------|---------------|-------------------|----------|
| only the past 1-7 days | CACHE | **0.6-1.4 s** | 2-3 |
| several past days + today | CACHE+Today | 6-10 s | ~56 |
| includes data older than 7 days | HYBRID / DB | 1-9 s (scale-dependent) | 8-45 |
---
## 6. Recommended Extensions
| Scenario | Cache retention | Memory | Effect |
|----------|-----------------|--------|--------|
| current | 7 days | 6 GB | CACHE path within one week |
| 2-week | 14 days | 12 GB | supports week-over-week analysis |
| 1-month | 30 days | 25 GB | supports monthly track analysis |
> Extending the retention window shifts traffic from HYBRID to pure CACHE → further DB relief
---
## 7. Conclusions
| Item | Effect |
|------|--------|
| Response speed | **5.7-12.6×** faster than DB |
| DB load | **63-89%** fewer connections, **100%** query-time savings |
| Data quality | 95-99% compression per zoom level, on par with the DB path |
| Concurrency | fewer connection conflicts → more simultaneous users |
| Scalability | further gains available by extending cache retention |

@@ -77,6 +77,12 @@
<artifactId>spring-boot-starter-aop</artifactId>
</dependency>
<!-- WebFlux (WebClient for S&P AIS API) -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-webflux</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-cache</artifactId>

scripts/deploy-only.bat Normal file
@@ -0,0 +1,219 @@
@echo off
chcp 65001 >nul
REM ===============================================
REM Signal Batch Deploy Only Script
REM (Build with IntelliJ UI first)
REM ===============================================
setlocal enabledelayedexpansion
REM Configuration
set "SERVER_IP=10.26.252.51"
set "SERVER_USER=root"
set "SERVER_PATH=/devdata/apps/bridge-db-monitoring"
set "JAR_NAME=vessel-batch-aggregation.jar"
set "BACKUP_DIR=!SERVER_PATH!/backups"
echo ===============================================
echo Signal Batch Deploy System (Deploy Only)
echo ===============================================
echo [INFO] Deploy Start: !date! !time!
echo [INFO] Target Server: !SERVER_IP!
echo.
REM 1. Set correct working directory and check JAR file
echo =============== Working Directory Setup ===============
echo [INFO] Current directory: !CD!
echo [INFO] Script directory: %~dp0
REM Change to project root directory (parent of scripts)
cd /d "%~dp0.."
echo [INFO] Project root directory: !CD!
echo.
echo =============== JAR File Check ===============
set "JAR_PATH=target\!JAR_NAME!"
if not exist "!JAR_PATH!" (
echo [ERROR] JAR file not found: !JAR_PATH!
echo [INFO] Current directory: !CD!
echo.
echo Please build the project first using IntelliJ IDEA:
echo 1. Open Maven tool window: View ^> Tool Windows ^> Maven
echo 2. Double-click: Lifecycle ^> clean
echo 3. Double-click: Lifecycle ^> package
echo 4. Verify target/!JAR_NAME! exists
echo.
echo Checking for any JAR files in target directory:
if exist "target\" (
dir target\*.jar 2>nul
if !ERRORLEVEL! neq 0 (
echo [INFO] Target directory exists but no JAR files found
)
) else (
echo [INFO] Target directory does not exist - project not built yet
)
pause
exit /b 1
)
for %%I in ("!JAR_PATH!") do (
echo [INFO] JAR File: %%~nxI
echo [INFO] File Size: %%~zI bytes
echo [INFO] Modified: %%~tI
)
echo [SUCCESS] JAR file ready for deployment
REM 2. SSH Connection Test
echo.
echo =============== SSH Connection Test ===============
ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "echo 'SSH connection OK'" 2>nul
set CONNECTION_RESULT=!ERRORLEVEL!
if !CONNECTION_RESULT! neq 0 (
echo [ERROR] SSH connection failed
echo [INFO] Please check:
echo - SSH key authentication setup
echo - Network connectivity to !SERVER_IP!
echo - Server is accessible
echo.
echo Run setup-ssh-key.bat to configure SSH keys
pause
exit /b 1
)
echo [SUCCESS] SSH connection successful
REM 3. Check current server status
echo.
echo =============== Current Server Status ===============
ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh status" 2>nul
set SERVER_RUNNING=!ERRORLEVEL!
REM 4. Create backup
echo.
echo =============== Create Backup ===============
ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "mkdir -p !BACKUP_DIR!"
REM Generate backup timestamp
for /f "tokens=2 delims==" %%I in ('wmic os get localdatetime /value') do if not "%%I"=="" set DATETIME=%%I
set BACKUP_TIMESTAMP=!DATETIME:~0,8!_!DATETIME:~8,6!
ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "if [ -f !SERVER_PATH!/!JAR_NAME! ]; then echo '[INFO] Creating backup...'; cp !SERVER_PATH!/!JAR_NAME! !BACKUP_DIR!/!JAR_NAME!.backup.!BACKUP_TIMESTAMP!; echo '[INFO] Backup created: !BACKUP_DIR!/!JAR_NAME!.backup.!BACKUP_TIMESTAMP!'; ls -la !BACKUP_DIR!/!JAR_NAME!.backup.!BACKUP_TIMESTAMP!; else echo '[INFO] No existing JAR file to backup (first deployment)'; fi"
REM 5. Stop application
if !SERVER_RUNNING! equ 0 (
echo.
echo =============== Stop Application ===============
echo [INFO] Stopping running application...
ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh stop"
if !ERRORLEVEL! neq 0 (
echo [ERROR] Failed to stop application
exit /b 1
)
echo [SUCCESS] Application stopped
) else (
echo.
echo [INFO] Application not running, proceeding with deployment
)
REM 6. Deploy new JAR
echo.
echo =============== Deploy New JAR ===============
echo [INFO] Transferring JAR file...
scp "!JAR_PATH!" !SERVER_USER!@!SERVER_IP!:!SERVER_PATH!/
if !ERRORLEVEL! neq 0 (
echo [ERROR] File transfer failed
goto :rollback_option
)
echo [INFO] Setting permissions...
ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "chmod 644 !SERVER_PATH!/!JAR_NAME!"
echo [SUCCESS] JAR file deployed
REM 7. Transfer version info (if exists)
echo.
echo =============== Version Information ===============
if exist "target\version.txt" (
echo [INFO] Transferring version information...
scp "target\version.txt" !SERVER_USER!@!SERVER_IP!:!SERVER_PATH!/
) else (
echo [INFO] No version file found, creating basic version info...
ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "echo 'DEPLOY_TIME=!date! !time!' > !SERVER_PATH!/version.txt"
)
REM 8. Start application
echo.
echo =============== Start Application ===============
echo [INFO] Starting application...
ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh start"
if !ERRORLEVEL! neq 0 (
echo [ERROR] Failed to start application
goto :rollback_option
)
REM 9. Wait and verify
echo.
echo =============== Deployment Verification ===============
echo [INFO] Waiting for application startup (30 seconds)...
timeout /t 30 /nobreak > nul
echo [INFO] Checking application status...
ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh status"
if !ERRORLEVEL! neq 0 (
echo [ERROR] Application not running properly
goto :rollback_option
)
echo [INFO] Performing health check...
ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "curl -f http://localhost:8090/actuator/health --max-time 10" 2>nul
if !ERRORLEVEL! neq 0 (
echo [WARN] Health check failed, but application appears to be running
echo [INFO] Give it a few more minutes to fully start up
)
REM 10. Cleanup old backups
echo.
echo =============== Cleanup ===============
echo [INFO] Cleaning up old backups (keeping recent 7)...
ssh -o BatchMode=yes -o ConnectTimeout=10 !SERVER_USER!@!SERVER_IP! "cd !BACKUP_DIR!; ls -t !JAR_NAME!.backup.* 2>/dev/null | tail -n +8 | xargs rm -f 2>/dev/null || true; echo '[INFO] Backup cleanup completed'"
REM 11. Success
echo.
echo =============== Deployment Successful ===============
echo [SUCCESS] Deployment completed successfully!
echo [INFO] Deployment time: !date! !time!
echo [INFO] Backup created: !JAR_NAME!.backup.!BACKUP_TIMESTAMP!
echo [INFO] Server dashboard: http://!SERVER_IP!:8090/static/admin/batch-admin.html
echo [INFO] Server logs: ssh !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh logs"
echo.
echo Quick commands:
echo server-status.bat - Check server status
echo server-logs.bat tail - Monitor logs
echo rollback.bat !BACKUP_TIMESTAMP! - Rollback if needed
goto :end
:rollback_option
echo.
echo =============== Deployment Failed ===============
echo [ERROR] Deployment failed!
echo.
set /p ROLLBACK="Attempt rollback to previous version? (y/N): "
if /i "!ROLLBACK!"=="y" (
echo [INFO] Attempting rollback...
if defined BACKUP_TIMESTAMP (
call rollback.bat !BACKUP_TIMESTAMP!
) else (
echo [ERROR] No backup timestamp available for rollback
echo [INFO] Manual recovery may be required
)
) else (
echo [INFO] Manual recovery required
echo [INFO] SSH to server: ssh !SERVER_USER!@!SERVER_IP!
echo [INFO] Check status: cd !SERVER_PATH! ^&^& ./vessel-batch-control.sh status
)
exit /b 1
:end
endlocal

파일 보기

@ -0,0 +1,47 @@
@echo off
REM ====================================
REM Query-only server deploy script (10.29.17.90)
REM ====================================
echo ======================================
echo Query-Only Server Deployment Script
echo Target: 10.29.17.90
echo Profile: query
echo ======================================
REM Move to the project root directory
cd /d %~dp0\..
REM Build
echo.
echo [1/3] Building project...
call mvn clean package -DskipTests
if %ERRORLEVEL% NEQ 0 (
echo Build failed!
pause
exit /b 1
)
echo.
echo [2/3] Stopping existing application...
REM Kill the existing process on the remote server via SSH
ssh mpc@10.29.17.90 "pkill -f 'signal_batch.*query' || true"
echo.
echo [3/3] Deploying and starting application...
REM Copy the JAR file
scp target\signal_batch-0.0.1-SNAPSHOT.jar mpc@10.29.17.90:/home/mpc/app/
REM Start the application on the remote server (query profile)
ssh mpc@10.29.17.90 "cd /home/mpc/app && nohup java -jar signal_batch-0.0.1-SNAPSHOT.jar --spring.profiles.active=query > query-server.log 2>&1 &"
echo.
echo ======================================
echo Deployment completed!
echo Server: 10.29.17.90
echo Profile: query
echo Log: /home/mpc/app/query-server.log
echo ======================================
pause

237
scripts/deploy-safe.bat Normal file
파일 보기

@ -0,0 +1,237 @@
@echo off
chcp 65001 >nul
REM ===============================================
REM Signal Batch Safe Deploy Script
REM (with running application check)
REM ===============================================
setlocal enabledelayedexpansion
REM Configuration
set "SERVER_IP=10.26.252.48"
set "SERVER_USER=root"
set "SERVER_PATH=/devdata/apps/bridge-db-monitoring"
set "JAR_NAME=vessel-batch-aggregation.jar"
set "BACKUP_DIR=!SERVER_PATH!/backups"
echo ===============================================
echo Signal Batch Safe Deploy System
echo ===============================================
echo [INFO] Deploy Start: !date! !time!
echo [INFO] Target Server: !SERVER_IP!
echo.
REM Set working directory
cd /d "%~dp0.."
echo [INFO] Project directory: !CD!
REM 1. Check JAR file
echo.
echo =============== JAR File Check ===============
set "JAR_PATH=target\!JAR_NAME!"
if not exist "!JAR_PATH!" (
echo [ERROR] JAR file not found: !JAR_PATH!
echo [INFO] Please build the project first using IntelliJ Maven
pause
exit /b 1
)
for %%I in ("!JAR_PATH!") do (
echo [INFO] JAR File: %%~nxI
echo [INFO] File Size: %%~zI bytes
echo [INFO] Modified: %%~tI
)
REM 2. SSH Connection Test
echo.
echo =============== SSH Connection Test ===============
ssh !SERVER_USER!@!SERVER_IP! "echo 'SSH connection OK'" 2>nul
if !ERRORLEVEL! neq 0 (
echo [ERROR] SSH connection failed
pause
exit /b 1
)
echo [SUCCESS] SSH connection successful
REM 3. Check current application status
echo.
echo =============== Current Application Status ===============
echo [INFO] Checking if application is currently running...
ssh !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh status" 2>nul
set APP_STATUS=!ERRORLEVEL!
if !APP_STATUS! equ 0 (
echo.
echo [WARNING] Application is currently RUNNING on the server!
echo.
echo =============== Deployment Options ===============
echo 1. Continue with deployment (stop → deploy → start^)
echo 2. Cancel deployment (keep current version running^)
echo 3. Check application details first
echo.
set /p DEPLOY_CHOICE="Choose option (1-3): "
if "!DEPLOY_CHOICE!"=="2" (
echo [INFO] Deployment cancelled by user
echo [INFO] Current application continues running
pause
exit /b 0
)
if "!DEPLOY_CHOICE!"=="3" (
echo.
echo =============== Application Details ===============
ssh !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh status"
echo.
ssh !SERVER_USER!@!SERVER_IP! "curl -s http://localhost:8090/actuator/health --max-time 5 2>/dev/null | python -m json.tool 2>/dev/null || echo 'Health endpoint not available'"
echo.
set /p FINAL_CHOICE="Proceed with deployment? (y/N): "
if /i not "!FINAL_CHOICE!"=="y" (
echo [INFO] Deployment cancelled
pause
exit /b 0
)
)
if not "!DEPLOY_CHOICE!"=="1" if not "!DEPLOY_CHOICE!"=="3" (
echo [ERROR] Invalid choice. Deployment cancelled.
pause
exit /b 1
)
echo.
echo [INFO] Proceeding with deployment...
echo [INFO] Current application will be stopped during deployment
) else (
echo [INFO] Application is not currently running
echo [INFO] Proceeding with fresh deployment
)
REM 4. Create backup timestamp
for /f "tokens=2 delims==" %%I in ('wmic os get localdatetime /value') do if not "%%I"=="" set DATETIME=%%I
set BACKUP_TIMESTAMP=!DATETIME:~0,8!_!DATETIME:~8,6!
REM 5. Create backup (if existing JAR exists)
echo.
echo =============== Create Backup ===============
ssh !SERVER_USER!@!SERVER_IP! "mkdir -p !BACKUP_DIR!"
ssh !SERVER_USER!@!SERVER_IP! "
if [ -f !SERVER_PATH!/!JAR_NAME! ]; then
echo '[INFO] Creating backup of current version...'
cp !SERVER_PATH!/!JAR_NAME! !BACKUP_DIR!/!JAR_NAME!.backup.!BACKUP_TIMESTAMP!
echo '[SUCCESS] Backup created: !BACKUP_DIR!/!JAR_NAME!.backup.!BACKUP_TIMESTAMP!'
ls -la !BACKUP_DIR!/!JAR_NAME!.backup.!BACKUP_TIMESTAMP!
else
echo '[INFO] No existing JAR file to backup (first deployment)'
fi
"
REM 6. Stop application (if running)
if !APP_STATUS! equ 0 (
echo.
echo =============== Stop Current Application ===============
echo [INFO] Gracefully stopping current application...
ssh !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh stop"
if !ERRORLEVEL! neq 0 (
echo [ERROR] Failed to stop application gracefully
set /p FORCE_STOP="Force stop and continue? (y/N): "
if /i not "!FORCE_STOP!"=="y" (
echo [INFO] Deployment cancelled
exit /b 1
)
echo [INFO] Attempting force stop...
ssh !SERVER_USER!@!SERVER_IP! "pkill -f !JAR_NAME! || true"
)
echo [SUCCESS] Application stopped
)
REM 7. Deploy new JAR
echo.
echo =============== Deploy New Version ===============
echo [INFO] Transferring new JAR file...
scp "!JAR_PATH!" !SERVER_USER!@!SERVER_IP!:!SERVER_PATH!/
if !ERRORLEVEL! neq 0 (
echo [ERROR] File transfer failed
goto :deployment_failed
)
ssh !SERVER_USER!@!SERVER_IP! "chmod +x !SERVER_PATH!/!JAR_NAME!"
echo [SUCCESS] New version deployed
REM 8. Transfer version info
if exist "target\version.txt" (
scp "target\version.txt" !SERVER_USER!@!SERVER_IP!:!SERVER_PATH!/
)
REM 9. Start new application
echo.
echo =============== Start New Application ===============
echo [INFO] Starting new version...
ssh !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh start"
if !ERRORLEVEL! neq 0 (
echo [ERROR] Failed to start new application
goto :deployment_failed
)
REM 10. Verify deployment
echo.
echo =============== Verify Deployment ===============
echo [INFO] Waiting for application startup (30 seconds)...
timeout /t 30 /nobreak > nul
ssh !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh status"
if !ERRORLEVEL! neq 0 (
echo [ERROR] New application is not running properly
goto :deployment_failed
)
echo [INFO] Performing health check...
ssh !SERVER_USER!@!SERVER_IP! "curl -f http://localhost:8090/actuator/health --max-time 10" 2>nul
if !ERRORLEVEL! neq 0 (
echo [WARN] Health check failed, but application is running
echo [INFO] Manual verification recommended
)
REM 11. Success
echo.
echo =============== Deployment Successful ===============
echo [SUCCESS] Safe deployment completed successfully!
echo [INFO] Deployment time: !date! !time!
echo [INFO] Backup: !JAR_NAME!.backup.!BACKUP_TIMESTAMP!
echo [INFO] Dashboard: http://!SERVER_IP!:8090/static/admin/batch-admin.html
echo.
echo Quick commands:
echo server-status.bat - Check status
echo server-logs.bat tail - Monitor logs
echo rollback.bat !BACKUP_TIMESTAMP! - Rollback if needed
goto :end
:deployment_failed
echo.
echo =============== Deployment Failed ===============
echo [ERROR] Deployment failed!
echo.
set /p AUTO_ROLLBACK="Attempt automatic rollback? (y/N): "
if /i "!AUTO_ROLLBACK!"=="y" (
if defined BACKUP_TIMESTAMP (
echo [INFO] Attempting rollback to: !BACKUP_TIMESTAMP!
call rollback.bat !BACKUP_TIMESTAMP!
) else (
echo [ERROR] No backup available for automatic rollback
)
) else (
echo [INFO] Manual recovery required
echo [INFO] Available backups:
ssh !SERVER_USER!@!SERVER_IP! "ls -la !BACKUP_DIR!/!JAR_NAME!.backup.* 2>/dev/null || echo 'No backups found'"
)
exit /b 1
:end
endlocal

파일 보기

@ -0,0 +1,139 @@
-- DataSource problem diagnosis SQL
-- Run on both 10.26.252.51 and 10.29.17.90 and compare the results
-- ============================================
-- 1. Check currently active connections
-- ============================================
SELECT
pid,
usename,
application_name,
client_addr,
backend_start,
state,
query_start,
LEFT(query, 100) as current_query
FROM pg_stat_activity
WHERE datname IN ('mdadb', 'mpcdb2')
AND application_name LIKE '%vessel%'
ORDER BY backend_start DESC;
-- ============================================
-- 2. Check recent INSERT/UPDATE statistics
-- ============================================
SELECT
schemaname,
tablename,
n_tup_ins as total_inserts,
n_tup_upd as total_updates,
n_tup_del as total_deletes,
n_live_tup as live_rows,
last_autoanalyze,
last_autovacuum
FROM pg_stat_user_tables
WHERE schemaname = 'signal'
AND tablename IN (
't_vessel_tracks_5min',
't_vessel_tracks_hourly',
't_vessel_tracks_daily',
't_abnormal_tracks',
't_vessel_latest_position'
)
ORDER BY n_tup_ins DESC;
-- ============================================
-- 3. Check recent data (last INSERT time)
-- ============================================
-- 5-minute aggregation
SELECT
'tracks_5min' as table_name,
COUNT(*) as total_rows,
MAX(time_bucket) as last_time_bucket,
NOW() - MAX(time_bucket) as data_delay
FROM signal.t_vessel_tracks_5min;
-- Hourly aggregation
SELECT
'tracks_hourly' as table_name,
COUNT(*) as total_rows,
MAX(time_bucket) as last_time_bucket,
NOW() - MAX(time_bucket) as data_delay
FROM signal.t_vessel_tracks_hourly;
-- Daily aggregation
SELECT
'tracks_daily' as table_name,
COUNT(*) as total_rows,
MAX(time_bucket) as last_time_bucket,
NOW() - MAX(time_bucket) as data_delay
FROM signal.t_vessel_tracks_daily;
-- Abnormal tracks
SELECT
'abnormal_tracks' as table_name,
COUNT(*) as total_rows,
MAX(time_bucket) as last_time_bucket,
NOW() - MAX(time_bucket) as data_delay
FROM signal.t_abnormal_tracks;
-- Latest position
SELECT
'latest_position' as table_name,
COUNT(*) as total_rows,
MAX(last_update) as last_update,
NOW() - MAX(last_update) as data_delay
FROM signal.t_vessel_latest_position;
-- ============================================
-- 4. Check data for a specific window (last 1 hour)
-- ============================================
SELECT
'5min_last_hour' as category,
COUNT(*) as count,
COUNT(DISTINCT sig_src_cd) as source_count,
COUNT(DISTINCT target_id) as vessel_count
FROM signal.t_vessel_tracks_5min
WHERE time_bucket >= NOW() - INTERVAL '1 hour';
SELECT
'hourly_last_day' as category,
COUNT(*) as count,
COUNT(DISTINCT sig_src_cd) as source_count,
COUNT(DISTINCT target_id) as vessel_count
FROM signal.t_vessel_tracks_hourly
WHERE time_bucket >= NOW() - INTERVAL '1 day';
-- ============================================
-- 5. Check table sizes
-- ============================================
SELECT
schemaname,
tablename,
pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS total_size,
pg_size_pretty(pg_relation_size(schemaname||'.'||tablename)) AS table_size,
pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename) - pg_relation_size(schemaname||'.'||tablename)) AS indexes_size
FROM pg_tables
WHERE schemaname = 'signal'
AND tablename IN (
't_vessel_tracks_5min',
't_vessel_tracks_hourly',
't_vessel_tracks_daily',
't_abnormal_tracks',
't_vessel_latest_position'
)
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;
-- ============================================
-- 6. Check sample data (latest 10 rows)
-- ============================================
SELECT
sig_src_cd,
target_id,
time_bucket,
point_count,
avg_speed,
max_speed
FROM signal.t_vessel_tracks_5min
ORDER BY time_bucket DESC
LIMIT 10;

파일 보기

@ -0,0 +1,24 @@
# Add to application.yml or application-prod.yml
# Logging settings for surfacing the actual SQL errors
logging:
  level:
    # PostgreSQL JDBC driver logs
    org.postgresql: DEBUG
    org.postgresql.Driver: DEBUG
    # Spring JDBC logs
    org.springframework.jdbc: DEBUG
    org.springframework.jdbc.core.JdbcTemplate: DEBUG
    org.springframework.jdbc.core.StatementCreatorUtils: TRACE
    # Spring Batch logs
    org.springframework.batch: DEBUG
    # Batch processor logs
    gc.mda.signal_batch.batch.processor: DEBUG
    gc.mda.signal_batch.batch.processor.HourlyTrackProcessor: TRACE
    gc.mda.signal_batch.batch.processor.DailyTrackProcessor: TRACE
    # SQL query parameter logging
    org.springframework.jdbc.core.namedparam: TRACE
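# With StatementCreatorUtils at TRACE, each bind parameter is logged roughly as
# (illustrative; the exact wording depends on the Spring version):
#   TRACE StatementCreatorUtils : Setting SQL statement parameter value:
#   column index 1, parameter value [000001], value class [java.lang.String], SQL type 12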

파일 보기

@ -0,0 +1,122 @@
-- Invalid geometry fix script
-- Resolves the "Too few points" error by repeating the single point twice
-- ========================================
-- 1. Backup (optional)
-- ========================================
-- CREATE TABLE signal.t_vessel_tracks_5min_backup_20251107 AS
-- SELECT * FROM signal.t_vessel_tracks_5min
-- WHERE track_geom IS NOT NULL AND NOT public.ST_IsValid(track_geom);
-- ========================================
-- 2. Fix invalid geometry (DRY RUN - check first)
-- ========================================
SELECT
'DRY RUN - Will fix these records' as action,
sig_src_cd,
target_id,
time_bucket,
public.ST_NPoints(track_geom) as current_points,
public.ST_AsText(track_geom) as current_wkt,
-- Preview of the WKT after the fix
CASE
WHEN public.ST_NPoints(track_geom) = 1 THEN
'LINESTRING M(' ||
public.ST_X(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_Y(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_M(public.ST_PointN(track_geom, 1)) || ',' ||
public.ST_X(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_Y(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_M(public.ST_PointN(track_geom, 1)) || ')'
ELSE 'NO FIX NEEDED'
END as new_wkt
FROM signal.t_vessel_tracks_5min
WHERE track_geom IS NOT NULL
AND public.ST_IsValidReason(track_geom) LIKE '%Too few points%'
LIMIT 10;
-- ========================================
-- 3. Apply the fix (run after reviewing the dry run)
-- ========================================
-- CAUTION: this query modifies real data!
-- Review the DRY RUN output, then uncomment and run.
/*
UPDATE signal.t_vessel_tracks_5min
SET track_geom = public.ST_GeomFromText(
'LINESTRING M(' ||
public.ST_X(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_Y(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_M(public.ST_PointN(track_geom, 1)) || ',' ||
public.ST_X(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_Y(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_M(public.ST_PointN(track_geom, 1)) || ')',
4326
)
WHERE track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) = 1
AND public.ST_IsValidReason(track_geom) LIKE '%Too few points%';
*/
-- ========================================
-- 4. Verify the fix
-- ========================================
SELECT
'AFTER FIX' as status,
COUNT(*) as total_records,
COUNT(CASE WHEN public.ST_IsValid(track_geom) THEN 1 END) as valid_count,
COUNT(CASE WHEN NOT public.ST_IsValid(track_geom) THEN 1 END) as invalid_count
FROM signal.t_vessel_tracks_5min
WHERE track_geom IS NOT NULL;
-- ========================================
-- 5. Check geometries that are still invalid
-- ========================================
SELECT
'REMAINING INVALID' as status,
public.ST_IsValidReason(track_geom) as reason,
COUNT(*) as count
FROM signal.t_vessel_tracks_5min
WHERE track_geom IS NOT NULL
AND NOT public.ST_IsValid(track_geom)
GROUP BY public.ST_IsValidReason(track_geom);
-- ========================================
-- 6. Apply the same fix to the hourly table (if needed)
-- ========================================
/*
UPDATE signal.t_vessel_tracks_hourly
SET track_geom = public.ST_GeomFromText(
'LINESTRING M(' ||
public.ST_X(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_Y(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_M(public.ST_PointN(track_geom, 1)) || ',' ||
public.ST_X(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_Y(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_M(public.ST_PointN(track_geom, 1)) || ')',
4326
)
WHERE track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) = 1
AND public.ST_IsValidReason(track_geom) LIKE '%Too few points%';
*/
-- ========================================
-- 7. Apply the same fix to the daily table (if needed)
-- ========================================
/*
UPDATE signal.t_vessel_tracks_daily
SET track_geom = public.ST_GeomFromText(
'LINESTRING M(' ||
public.ST_X(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_Y(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_M(public.ST_PointN(track_geom, 1)) || ',' ||
public.ST_X(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_Y(public.ST_PointN(track_geom, 1)) || ' ' ||
public.ST_M(public.ST_PointN(track_geom, 1)) || ')',
4326
)
WHERE track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) = 1
AND public.ST_IsValidReason(track_geom) LIKE '%Too few points%';
*/

파일 보기

@ -0,0 +1,24 @@
# Script that schema-qualifies PostGIS function calls
# Rewrites ST_GeomFromText -> public.ST_GeomFromText
$javaDir = "C:\Users\lht87\IdeaProjects\signal_batch\src\main\java"
$files = Get-ChildItem -Path $javaDir -Filter "*.java" -Recurse
$count = 0
foreach ($file in $files) {
$content = Get-Content $file.FullName -Raw -Encoding UTF8
# Prefix ST_GeomFromText with public. (only where the prefix is not already present)
$newContent = $content -replace '(?<!public\.)ST_GeomFromText\(', 'public.ST_GeomFromText('
# Do the same for ST_Length
$newContent = $newContent -replace '(?<!public\.)ST_Length\(', 'public.ST_Length('
if ($content -ne $newContent) {
Set-Content -Path $file.FullName -Value $newContent -Encoding UTF8 -NoNewline
Write-Host "Updated: $($file.FullName)"
$count++
}
}
Write-Host "`nTotal files updated: $count"

파일 보기

@ -0,0 +1,223 @@
#!/bin/bash
# Spring Batch metadata force-reset script
# Resets the metadata regardless of running-job state
echo "================================================"
echo "Spring Batch Metadata FORCE Reset"
echo "WARNING: This will FORCE delete ALL batch job history!"
echo " Including running jobs!"
echo "Time: $(date)"
echo "================================================"
# Color codes
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
# Database connection info
DB_HOST="localhost"
DB_PORT="5432"
DB_NAME="mdadb"
DB_USER="mda"
DB_SCHEMA="public"
echo -e "${RED}⚠️ CRITICAL WARNING: This is a FORCE reset operation!${NC}"
echo "This will:"
echo "- Delete ALL batch job history"
echo "- Clear ALL running job states"
echo "- Reset ALL sequences"
echo "- Cannot be undone (except from backup)"
echo ""
echo -e "${YELLOW}This should only be used when normal reset fails!${NC}"
echo ""
read -p "Type 'FORCE RESET' to confirm: " CONFIRM
if [ "$CONFIRM" != "FORCE RESET" ]; then
echo "Operation cancelled."
exit 0
fi
echo ""
echo "1. Creating full backup before force reset..."
# Create the backup directory
BACKUP_DIR="/devdata/apps/bridge-db-monitoring/backup"
mkdir -p $BACKUP_DIR
# Backup file name
BACKUP_FILE="$BACKUP_DIR/batch_metadata_FORCE_backup_$(date +%Y%m%d_%H%M%S).sql"
# Full metadata backup (including the schema)
pg_dump -h $DB_HOST -p $DB_PORT -U $DB_USER -d $DB_NAME \
--schema=$DB_SCHEMA \
--table="batch_*" \
--file=$BACKUP_FILE 2>/dev/null
if [ $? -eq 0 ]; then
echo -e "${GREEN}✓ Full backup created: $BACKUP_FILE${NC}"
else
echo -e "${YELLOW}⚠ Backup may have failed, but continuing...${NC}"
fi
echo ""
echo "2. Stopping application if running..."
# Check the PID
if [ -f "/devdata/apps/bridge-db-monitoring/vessel-batch.pid" ]; then
PID=$(cat /devdata/apps/bridge-db-monitoring/vessel-batch.pid)
if kill -0 $PID 2>/dev/null; then
echo " Stopping application (PID: $PID)..."
kill -15 $PID
sleep 5
if kill -0 $PID 2>/dev/null; then
echo " Force killing application..."
kill -9 $PID
fi
fi
fi
echo ""
echo "3. FORCE resetting batch metadata tables..."
# Force reset using CASCADE
psql -h $DB_HOST -p $DB_PORT -U $DB_USER -d $DB_NAME << EOF
-- Start the transaction
BEGIN;
-- Temporarily disable foreign key constraints
SET session_replication_role = 'replica';
-- Force-truncate every batch table
TRUNCATE TABLE $DB_SCHEMA.batch_step_execution_context CASCADE;
TRUNCATE TABLE $DB_SCHEMA.batch_step_execution CASCADE;
TRUNCATE TABLE $DB_SCHEMA.batch_job_execution_context CASCADE;
TRUNCATE TABLE $DB_SCHEMA.batch_job_execution_params CASCADE;
TRUNCATE TABLE $DB_SCHEMA.batch_job_execution CASCADE;
TRUNCATE TABLE $DB_SCHEMA.batch_job_instance CASCADE;
-- Force-reset the sequences
ALTER SEQUENCE IF EXISTS $DB_SCHEMA.batch_job_execution_seq RESTART WITH 1;
ALTER SEQUENCE IF EXISTS $DB_SCHEMA.batch_job_seq RESTART WITH 1;
ALTER SEQUENCE IF EXISTS $DB_SCHEMA.batch_step_execution_seq RESTART WITH 1;
-- Re-enable foreign key constraints
SET session_replication_role = 'origin';
-- Commit
COMMIT;
-- Refresh statistics
ANALYZE $DB_SCHEMA.batch_job_instance;
ANALYZE $DB_SCHEMA.batch_job_execution;
ANALYZE $DB_SCHEMA.batch_job_execution_params;
ANALYZE $DB_SCHEMA.batch_job_execution_context;
ANALYZE $DB_SCHEMA.batch_step_execution;
ANALYZE $DB_SCHEMA.batch_step_execution_context;
EOF
if [ $? -eq 0 ]; then
echo -e "${GREEN}✓ Batch metadata tables FORCE reset successfully${NC}"
else
echo -e "${RED}✗ Force reset encountered errors, but may have partially succeeded${NC}"
fi
echo ""
echo "4. Verifying force reset..."
# Verify each table individually
for table in batch_job_instance batch_job_execution batch_job_execution_params batch_job_execution_context batch_step_execution batch_step_execution_context; do
COUNT=$(psql -h $DB_HOST -p $DB_PORT -U $DB_USER -d $DB_NAME -t -c "
SELECT COUNT(*) FROM $DB_SCHEMA.$table;" 2>/dev/null | xargs)
if [ -z "$COUNT" ]; then
COUNT="ERROR"
fi
if [ "$COUNT" = "0" ]; then
echo -e " ${GREEN}${NC} $table: $COUNT records"
elif [ "$COUNT" = "ERROR" ]; then
echo -e " ${RED}${NC} $table: Could not query"
else
echo -e " ${YELLOW}${NC} $table: $COUNT records remaining"
fi
done
echo ""
echo "5. Optional: Clear ALL aggregation data (complete fresh start)"
read -p "Do you want to clear ALL aggregation data too? (yes/no): " CLEAR_ALL
if [ "$CLEAR_ALL" = "yes" ]; then
echo ""
echo "Clearing ALL aggregation data..."
psql -h $DB_HOST -p $DB_PORT -U $DB_USER -d $DB_NAME << EOF
BEGIN;
-- Force-clear all aggregation data
SET session_replication_role = 'replica';
-- Latest position info
TRUNCATE TABLE signal.t_vessel_latest_position CASCADE;
-- Truncate every partition table
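-- NOTE: $$ appears as \$\$ below because this SQL lives in an unquoted bash heredoc,
-- where a bare $$ would be expanded by the shell to its own PID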
DO \$\$
DECLARE
r RECORD;
BEGIN
FOR r IN
SELECT tablename
FROM pg_tables
WHERE schemaname = 'signal'
AND (tablename LIKE 't_tile_summary_%'
OR tablename LIKE 't_area_statistics_%'
OR tablename LIKE 't_vessel_daily_tracks_%')
LOOP
EXECUTE 'TRUNCATE TABLE signal.' || r.tablename || ' CASCADE';
RAISE NOTICE 'Truncated table: signal.%', r.tablename;
END LOOP;
END\$\$;
-- Batch performance metrics
TRUNCATE TABLE signal.t_batch_performance_metrics CASCADE;
SET session_replication_role = 'origin';
COMMIT;
EOF
echo -e "${GREEN}✓ All aggregation data cleared${NC}"
fi
echo ""
echo "================================================"
echo "FORCE Reset Complete!"
echo ""
echo -e "${YELLOW}IMPORTANT: The application needs to be restarted!${NC}"
echo ""
echo "Next steps:"
echo "1. Start the application:"
echo " cd /devdata/apps/bridge-db-monitoring"
echo " ./run-on-query-server.sh"
echo ""
echo "2. Verify health:"
echo " curl http://localhost:8090/actuator/health"
echo ""
echo "3. Start fresh batch job:"
echo " curl -X POST http://localhost:8090/admin/batch/job/run \\"
echo " -H 'Content-Type: application/json' \\"
echo " -d '{\"jobName\": \"vesselAggregationJob\", \"parameters\": {\"tileLevel\": 1}}'"
echo ""
echo "Full backup saved to: $BACKUP_FILE"
echo "================================================"
# Option to start the application automatically
echo ""
read -p "Do you want to start the application now? (yes/no): " START_NOW
if [ "$START_NOW" = "yes" ]; then
echo "Starting application..."
cd /devdata/apps/bridge-db-monitoring
./run-on-query-server.sh
fi

파일 보기

@ -0,0 +1,59 @@
-- Script that exposes PostGIS inside the signal schema
-- Run against the mpcdb2 database on the 10.29.17.90 server
-- Approach 1: create the PostGIS extension in the signal schema (recommended)
-- Since it is already installed in public, expose it by copying wrapper functions into the signal schema
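-- (Alternative sketch, assuming the application can control its session settings:
-- rather than wrappers, put public on the search_path so unqualified calls resolve)
--   SET search_path = signal, public;
--   SELECT ST_GeomFromText('POINT(126.0 37.0)', 4326);
-- The run script's JDBC URL takes this route via options=-csearch_path=signal,public.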
-- Check the current PostGIS state
SELECT extname, extversion, nspname
FROM pg_extension e
JOIN pg_namespace n ON e.extnamespace = n.oid
WHERE extname LIKE 'post%';
-- Option 1: create PostGIS wrapper functions in the signal schema
-- (wrappers that delegate to the public-schema functions)
CREATE OR REPLACE FUNCTION signal.ST_GeomFromText(text)
RETURNS geometry
AS $$
SELECT public.ST_GeomFromText($1);
$$ LANGUAGE SQL IMMUTABLE STRICT PARALLEL SAFE;
CREATE OR REPLACE FUNCTION signal.ST_GeomFromText(text, integer)
RETURNS geometry
AS $$
SELECT public.ST_GeomFromText($1, $2);
$$ LANGUAGE SQL IMMUTABLE STRICT PARALLEL SAFE;
CREATE OR REPLACE FUNCTION signal.ST_Length(geometry)
RETURNS double precision
AS $$
SELECT public.ST_Length($1);
$$ LANGUAGE SQL IMMUTABLE STRICT PARALLEL SAFE;
CREATE OR REPLACE FUNCTION signal.ST_MakeLine(geometry[])
RETURNS geometry
AS $$
SELECT public.ST_MakeLine($1);
$$ LANGUAGE SQL IMMUTABLE STRICT PARALLEL SAFE;
-- Add wrappers for other frequently used functions as well
CREATE OR REPLACE FUNCTION signal.ST_X(geometry)
RETURNS double precision
AS $$
SELECT public.ST_X($1);
$$ LANGUAGE SQL IMMUTABLE STRICT PARALLEL SAFE;
CREATE OR REPLACE FUNCTION signal.ST_Y(geometry)
RETURNS double precision
AS $$
SELECT public.ST_Y($1);
$$ LANGUAGE SQL IMMUTABLE STRICT PARALLEL SAFE;
CREATE OR REPLACE FUNCTION signal.ST_M(geometry)
RETURNS double precision
AS $$
SELECT public.ST_M($1);
$$ LANGUAGE SQL IMMUTABLE STRICT PARALLEL SAFE;
-- Verify
SELECT signal.ST_GeomFromText('POINT(126.0 37.0)', 4326);

파일 보기

@ -0,0 +1,85 @@
-- Query and analyze failed batch jobs
-- 1. Failed job list (most recent 50)
SELECT
'=== FAILED JOBS (Recent 50) ===' as category,
bje.JOB_EXECUTION_ID,
bji.JOB_NAME,
bje.START_TIME,
bje.END_TIME,
bje.STATUS,
bje.EXIT_CODE,
LEFT(bje.EXIT_MESSAGE, 100) as EXIT_MESSAGE_SHORT,
-- Render the job parameters
(SELECT string_agg(PARAMETER_NAME || '=' || PARAMETER_VALUE, ', ')
FROM BATCH_JOB_EXECUTION_PARAMS
WHERE JOB_EXECUTION_ID = bje.JOB_EXECUTION_ID
AND IDENTIFYING = 'Y') as PARAMETERS
FROM BATCH_JOB_EXECUTION bje
JOIN BATCH_JOB_INSTANCE bji ON bje.JOB_INSTANCE_ID = bji.JOB_INSTANCE_ID
WHERE bje.STATUS = 'FAILED'
ORDER BY bje.JOB_EXECUTION_ID DESC
LIMIT 50;
-- 2. Failed step details
SELECT
'=== FAILED STEPS ===' as category,
bse.STEP_EXECUTION_ID,
bse.JOB_EXECUTION_ID,
bji.JOB_NAME,
bse.STEP_NAME,
bse.STATUS,
bse.READ_COUNT,
bse.WRITE_COUNT,
bse.COMMIT_COUNT,
bse.ROLLBACK_COUNT,
bse.READ_SKIP_COUNT,
bse.PROCESS_SKIP_COUNT,
bse.WRITE_SKIP_COUNT,
LEFT(bse.EXIT_MESSAGE, 100) as EXIT_MESSAGE_SHORT
FROM BATCH_STEP_EXECUTION bse
JOIN BATCH_JOB_EXECUTION bje ON bse.JOB_EXECUTION_ID = bje.JOB_EXECUTION_ID
JOIN BATCH_JOB_INSTANCE bji ON bje.JOB_INSTANCE_ID = bji.JOB_INSTANCE_ID
WHERE bse.STATUS = 'FAILED'
ORDER BY bse.STEP_EXECUTION_ID DESC
LIMIT 50;
-- 3. Failure statistics by job type
SELECT
'=== FAILURE STATISTICS BY JOB ===' as category,
bji.JOB_NAME,
COUNT(*) as FAILED_COUNT,
MAX(bje.END_TIME) as LAST_FAILURE_TIME
FROM BATCH_JOB_EXECUTION bje
JOIN BATCH_JOB_INSTANCE bji ON bje.JOB_INSTANCE_ID = bji.JOB_INSTANCE_ID
WHERE bje.STATUS = 'FAILED'
GROUP BY bji.JOB_NAME
ORDER BY FAILED_COUNT DESC;
-- 4. Failure statistics by step
SELECT
'=== FAILURE STATISTICS BY STEP ===' as category,
STEP_NAME,
COUNT(*) as FAILED_COUNT,
MAX(END_TIME) as LAST_FAILURE_TIME
FROM BATCH_STEP_EXECUTION
WHERE STATUS = 'FAILED'
GROUP BY STEP_NAME
ORDER BY FAILED_COUNT DESC;
-- 5. Failures in the last 24 hours
SELECT
'=== LAST 24 HOURS ===' as category,
COUNT(*) as FAILED_JOBS_24H
FROM BATCH_JOB_EXECUTION
WHERE STATUS = 'FAILED'
AND START_TIME >= CURRENT_TIMESTAMP - INTERVAL '24 hours';
-- 6. Overall status summary
SELECT
'=== STATUS SUMMARY ===' as category,
STATUS,
COUNT(*) as COUNT
FROM BATCH_JOB_EXECUTION
GROUP BY STATUS
ORDER BY COUNT DESC;

파일 보기

@ -0,0 +1,75 @@
-- Mark failed batch jobs and steps as ABANDONED
-- CAUTION: this script force-terminates failed jobs.
-- Do not run it if the jobs need to be retried.
-- 1. Check the current failure state
SELECT
'=== BEFORE UPDATE ===' as status,
COUNT(*) as failed_jobs
FROM BATCH_JOB_EXECUTION
WHERE STATUS = 'FAILED';
SELECT
'=== BEFORE UPDATE ===' as status,
COUNT(*) as failed_steps
FROM BATCH_STEP_EXECUTION
WHERE STATUS = 'FAILED';
-- 2. Mark failed STEPs as ABANDONED
UPDATE BATCH_STEP_EXECUTION
SET
STATUS = 'ABANDONED',
EXIT_CODE = 'ABANDONED',
EXIT_MESSAGE = 'Manually marked as ABANDONED - Original status: FAILED',
END_TIME = COALESCE(END_TIME, CURRENT_TIMESTAMP),
LAST_UPDATED = CURRENT_TIMESTAMP
WHERE STATUS = 'FAILED';
-- 3. Mark failed JOBs as ABANDONED
UPDATE BATCH_JOB_EXECUTION
SET
STATUS = 'ABANDONED',
EXIT_CODE = 'ABANDONED',
EXIT_MESSAGE = 'Manually marked as ABANDONED - Original status: FAILED',
END_TIME = COALESCE(END_TIME, CURRENT_TIMESTAMP),
LAST_UPDATED = CURRENT_TIMESTAMP
WHERE STATUS = 'FAILED';
-- 4. Check the state after the update
SELECT
'=== AFTER UPDATE ===' as status,
COUNT(*) as failed_jobs
FROM BATCH_JOB_EXECUTION
WHERE STATUS = 'FAILED';
SELECT
'=== AFTER UPDATE ===' as status,
COUNT(*) as failed_steps
FROM BATCH_STEP_EXECUTION
WHERE STATUS = 'FAILED';
SELECT
'=== ABANDONED COUNT ===' as status,
COUNT(*) as abandoned_jobs
FROM BATCH_JOB_EXECUTION
WHERE STATUS = 'ABANDONED';
SELECT
'=== ABANDONED COUNT ===' as status,
COUNT(*) as abandoned_steps
FROM BATCH_STEP_EXECUTION
WHERE STATUS = 'ABANDONED';
-- 5. Review the most recently ABANDONED jobs
SELECT
JOB_EXECUTION_ID,
JOB_INSTANCE_ID,
START_TIME,
END_TIME,
STATUS,
EXIT_CODE,
EXIT_MESSAGE
FROM BATCH_JOB_EXECUTION
WHERE STATUS = 'ABANDONED'
ORDER BY JOB_EXECUTION_ID DESC
LIMIT 10;

파일 보기

@ -0,0 +1,75 @@
-- Mark a specific JOB_EXECUTION_ID as ABANDONED
-- Usage: replace :job_execution_id with the actual ID before running
-- Variable binding (PostgreSQL: via psql variables)
-- psql -v job_execution_id=12345 -f mark-specific-job-as-abandoned.sql
-- or replace :job_execution_id below with a literal number
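-- In an interactive psql session the variable can also be set inline
-- (usage sketch; 12345 is a placeholder ID):
--   \set job_execution_id 12345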
-- 1. Check the job's current state
SELECT
'=== BEFORE UPDATE ===' as status,
JOB_EXECUTION_ID,
JOB_INSTANCE_ID,
START_TIME,
END_TIME,
STATUS,
EXIT_CODE,
EXIT_MESSAGE
FROM BATCH_JOB_EXECUTION
WHERE JOB_EXECUTION_ID = :job_execution_id;
-- 2. Check the states of the job's steps
SELECT
'=== STEPS BEFORE UPDATE ===' as status,
STEP_EXECUTION_ID,
STEP_NAME,
STATUS,
EXIT_CODE
FROM BATCH_STEP_EXECUTION
WHERE JOB_EXECUTION_ID = :job_execution_id
ORDER BY STEP_EXECUTION_ID;
-- 3. Mark the steps as ABANDONED
UPDATE BATCH_STEP_EXECUTION
SET
STATUS = 'ABANDONED',
EXIT_CODE = 'ABANDONED',
EXIT_MESSAGE = 'Manually marked as ABANDONED - Original status: ' || STATUS,
END_TIME = COALESCE(END_TIME, CURRENT_TIMESTAMP),
LAST_UPDATED = CURRENT_TIMESTAMP
WHERE JOB_EXECUTION_ID = :job_execution_id
AND STATUS IN ('FAILED', 'STARTED', 'STOPPING');
-- 4. Mark the job as ABANDONED
UPDATE BATCH_JOB_EXECUTION
SET
STATUS = 'ABANDONED',
EXIT_CODE = 'ABANDONED',
EXIT_MESSAGE = 'Manually marked as ABANDONED - Original status: ' || STATUS,
END_TIME = COALESCE(END_TIME, CURRENT_TIMESTAMP),
LAST_UPDATED = CURRENT_TIMESTAMP
WHERE JOB_EXECUTION_ID = :job_execution_id
AND STATUS IN ('FAILED', 'STARTED', 'STOPPING');
-- 5. Verify the update
SELECT
'=== AFTER UPDATE ===' as status,
JOB_EXECUTION_ID,
JOB_INSTANCE_ID,
START_TIME,
END_TIME,
STATUS,
EXIT_CODE,
EXIT_MESSAGE
FROM BATCH_JOB_EXECUTION
WHERE JOB_EXECUTION_ID = :job_execution_id;
SELECT
'=== STEPS AFTER UPDATE ===' as status,
STEP_EXECUTION_ID,
STEP_NAME,
STATUS,
EXIT_CODE
FROM BATCH_STEP_EXECUTION
WHERE JOB_EXECUTION_ID = :job_execution_id
ORDER BY STEP_EXECUTION_ID;

파일 보기

@ -0,0 +1,212 @@
#!/bin/bash
# Query DB server resource monitoring script
# Monitors resource contention between PostgreSQL and the batch application
# Application paths
APP_HOME="/devdata/apps/bridge-db-monitoring"
LOG_DIR="$APP_HOME/logs"
mkdir -p $LOG_DIR
# Java path (for the jstat command)
JAVA_HOME="/devdata/apps/jdk-17.0.8"
JSTAT="$JAVA_HOME/bin/jstat"
# Color codes
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Create the CSV header (first run only)
if [ ! -f "$LOG_DIR/resource-monitor.csv" ]; then
echo "timestamp,pg_cpu,java_cpu,delay_minutes,throughput,collect_connections" > $LOG_DIR/resource-monitor.csv
fi
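# A quick way to summarize the collected CSV later (an illustrative one-liner):
#   awk -F, 'NR>1 {pg+=$2; j+=$3; n++} END {if (n) printf "avg pg_cpu=%.1f%% java_cpu=%.1f%%\n", pg/n, j/n}' "$LOG_DIR/resource-monitor.csv"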
while true; do
clear
echo "========================================="
echo "Vessel Batch Resource Monitor"
echo "Time: $(date)"
echo "App Home: $APP_HOME"
echo "========================================="
# Read the process ID from the PID file
if [ -f "$APP_HOME/vessel-batch.pid" ]; then
JAVA_PID=$(cat $APP_HOME/vessel-batch.pid)
else
JAVA_PID=$(pgrep -f "vessel-batch-aggregation.jar")
fi
# 1. CPU usage
echo -e "\n${GREEN}[CPU Usage]${NC}"
# PostgreSQL CPU usage
PG_CPU=$(ps aux | grep postgres | grep -v grep | awk '{sum+=$3} END {printf "%.1f", sum}' || echo "0")
if [ -z "$PG_CPU" ]; then PG_CPU="0"; fi
echo "PostgreSQL Total: ${PG_CPU}%"
# Java batch application CPU usage
if [ ! -z "$JAVA_PID" ] && kill -0 $JAVA_PID 2>/dev/null; then
JAVA_CPU=$(ps aux | grep $JAVA_PID | grep -v grep | awk '{printf "%.1f", $3}' || echo "0")
if [ -z "$JAVA_CPU" ]; then JAVA_CPU="0"; fi
echo "Batch Application: ${JAVA_CPU}% (PID: $JAVA_PID)"
else
JAVA_CPU="0.0"
echo "Batch Application: Not Running"
fi
# Top 5 PostgreSQL processes
echo -e "\nTop PostgreSQL Processes:"
ps aux | grep postgres | grep -v grep | sort -k3 -nr | head -5 | awk '{printf " %-8s %5s%% %s\n", $2, $3, $11}'
# 2. Memory usage
echo -e "\n${GREEN}[Memory Usage]${NC}"
free -h | grep -E "Mem|Swap"
# PostgreSQL shared memory
PG_SHARED=$(ipcs -m 2>/dev/null | grep postgres | awk '{sum+=$5} END {printf "%.1f", sum/1024/1024/1024}')
if [ ! -z "$PG_SHARED" ]; then
echo "PostgreSQL Shared Memory: ${PG_SHARED}GB"
fi
# Java heap usage
if [ ! -z "$JAVA_PID" ] && kill -0 $JAVA_PID 2>/dev/null; then
if [ -x "$JSTAT" ]; then
JAVA_HEAP=$($JSTAT -gc $JAVA_PID 2>/dev/null | tail -1 | awk '{printf "%.1f", ($3+$4+$6+$8)/1024}')
if [ ! -z "$JAVA_HEAP" ]; then
echo "Java Heap Used: ${JAVA_HEAP}MB"
fi
fi
fi
# 3. Disk I/O
echo -e "\n${GREEN}[Disk I/O]${NC}"
iostat -x 1 2 2>/dev/null | grep -A5 "Device" | tail -n +7 | head -5
# 4. PostgreSQL connection state
echo -e "\n${GREEN}[Database Connections]${NC}"
# psql may not be on the PATH, so fall back to well-known full paths
if command -v psql >/dev/null 2>&1; then
PSQL_CMD="psql"
else
# Typical PostgreSQL installation paths
for path in /usr/pgsql-*/bin/psql /usr/bin/psql /usr/local/bin/psql; do
if [ -x "$path" ]; then
PSQL_CMD="$path"
break
fi
done
fi
if [ ! -z "$PSQL_CMD" ]; then
$PSQL_CMD -h localhost -U mda -d mdadb -c "
SELECT
application_name,
client_addr,
COUNT(*) as connections,
string_agg(DISTINCT state, ', ') as states
FROM pg_stat_activity
WHERE datname = 'mdadb'
GROUP BY application_name, client_addr
ORDER BY connections DESC
LIMIT 10;" 2>/dev/null || echo "Unable to query database connections"
else
echo "psql command not found"
fi
# 5. Batch processing status
echo -e "\n${GREEN}[Batch Processing Status]${NC}"
if [ ! -z "$PSQL_CMD" ]; then
# Check the processing delay
DELAY=$($PSQL_CMD -h localhost -U mda -d mdadb -t -c "
SELECT COALESCE(EXTRACT(EPOCH FROM (NOW() - MAX(last_update))) / 60, 0)::numeric(10,1)
FROM signal.t_vessel_latest_position;" 2>/dev/null | xargs)
if [ ! -z "$DELAY" ] && [ "$DELAY" != "" ]; then
if [ $(echo "$DELAY > 120" | bc 2>/dev/null || echo 0) -eq 1 ]; then
echo -e "${RED}Processing Delay: ${DELAY} minutes ⚠️${NC}"
elif [ $(echo "$DELAY > 60" | bc 2>/dev/null || echo 0) -eq 1 ]; then
echo -e "${YELLOW}Processing Delay: ${DELAY} minutes ⚠️${NC}"
else
echo -e "${GREEN}Processing Delay: ${DELAY} minutes ✓${NC}"
fi
else
DELAY="0"
echo "Processing Delay: Unable to determine"
fi
# Recent throughput
THROUGHPUT=$($PSQL_CMD -h localhost -U mda -d mdadb -t -c "
SELECT COALESCE(COUNT(*), 0)
FROM signal.t_vessel_latest_position
WHERE last_update > NOW() - INTERVAL '1 minute';" 2>/dev/null | xargs)
if [ ! -z "$THROUGHPUT" ]; then
echo "Throughput: ${THROUGHPUT} vessels/minute"
else
THROUGHPUT="0"
echo "Throughput: Unable to determine"
fi
else
DELAY="0"
THROUGHPUT="0"
echo "Database metrics unavailable (psql not found)"
fi
# 6. Network connections (to the collect DB)
echo -e "\n${GREEN}[Network to Collect DB]${NC}"
COLLECT_CONN=$(ss -tunp 2>/dev/null | grep :5432 | grep 10.26.252.39 | wc -l)
echo "Active connections to collect DB: ${COLLECT_CONN}"
# Network statistics
if [ "$COLLECT_CONN" -gt 0 ]; then
ss -i dst 10.26.252.39:5432 2>/dev/null | grep -E "rtt|cwnd" | head -3
fi
# 7. Recent application log errors
echo -e "\n${GREEN}[Recent Application Errors]${NC}"
if [ -f "$LOG_DIR/app.log" ]; then
# grep -c always prints a count (including 0), so no "|| echo 0" fallback is needed here
ERROR_COUNT=$(grep -c "ERROR" "$LOG_DIR/app.log" 2>/dev/null)
ERROR_COUNT=${ERROR_COUNT:-0}
echo "Total Errors in Log: $ERROR_COUNT"
# Show the 5 most recent errors
if [ "$ERROR_COUNT" -gt 0 ]; then
echo "Recent Errors:"
grep "ERROR" $LOG_DIR/app.log | tail -5 | cut -c1-120
fi
else
echo "Log file not found at $LOG_DIR/app.log"
fi
# 8. Warnings
echo -e "\n${YELLOW}[Warnings]${NC}"
# CPU warning
TOTAL_CPU=$(echo "$PG_CPU + $JAVA_CPU" | bc 2>/dev/null || echo "0")
if [ ! -z "$TOTAL_CPU" ] && [ "$TOTAL_CPU" != "0" ]; then
if [ $(echo "$TOTAL_CPU > 80" | bc 2>/dev/null || echo 0) -eq 1 ]; then
echo -e "${RED}⚠ High CPU usage: ${TOTAL_CPU}%${NC}"
fi
fi
# Memory warning
MEM_AVAILABLE=$(free -g | grep Mem | awk '{print $7}')
if [ ! -z "$MEM_AVAILABLE" ] && [ "$MEM_AVAILABLE" -lt 10 ]; then
echo -e "${RED}⚠ Low available memory: ${MEM_AVAILABLE}GB${NC}"
fi
# Processing-delay warning
if [ ! -z "$DELAY" ] && [ "$DELAY" != "0" ]; then
if [ $(echo "$DELAY > 120" | bc 2>/dev/null || echo 0) -eq 1 ]; then
echo -e "${RED}⚠ Processing delay exceeds 2 hours!${NC}"
fi
fi
# Append a record to the CSV log
echo "$(date '+%Y-%m-%d %H:%M:%S'),${PG_CPU},${JAVA_CPU},${DELAY},${THROUGHPUT},${COLLECT_CONN}" >> $LOG_DIR/resource-monitor.csv
# Wait for the next update
echo -e "\n${GREEN}Next update in 30 seconds... (Ctrl+C to exit)${NC}"
sleep 30
done

154
scripts/monitor-realtime.sh Normal file
파일 보기

@ -0,0 +1,154 @@
#!/bin/bash
# Real-time system monitoring script
# Monitors system state in real time during load tests
# Color definitions
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Application info
APP_HOST="10.26.252.48"
APP_PORT="8090"
DB_HOST_COLLECT="10.26.252.39"
DB_HOST_QUERY="10.26.252.48"
DB_PORT="5432"
DB_NAME="mdadb"
DB_USER="mdauser"
# Clear the screen
clear_screen() {
clear
}
# Print the header
print_header() {
echo -e "${BLUE}========================================${NC}"
echo -e "${BLUE}  Vessel Track System Real-Time Monitor ${NC}"
echo -e "${BLUE}========================================${NC}"
echo -e "Time: $(date '+%Y-%m-%d %H:%M:%S')"
echo ""
}
# Check application status
check_app_status() {
echo -e "${GREEN}[Application Status]${NC}"
# Health check
health=$(curl -s "http://$APP_HOST:$APP_PORT/actuator/health" | jq -r '.status' 2>/dev/null || echo "UNKNOWN")
if [ "$health" == "UP" ]; then
echo -e "Status: ${GREEN}$health${NC}"
else
echo -e "Status: ${RED}$health${NC}"
fi
# Running jobs
running_jobs=$(curl -s "http://$APP_HOST:$APP_PORT/admin/batch/job/running" | jq -r '.[]' 2>/dev/null || echo "N/A")
echo -e "Running jobs: $running_jobs"
# Metrics summary
metrics=$(curl -s "http://$APP_HOST:$APP_PORT/admin/metrics/summary" 2>/dev/null)
if [ ! -z "$metrics" ]; then
echo -e "Processed records: $(echo $metrics | jq -r '.processedRecords // "N/A"')"
echo -e "Average processing time: $(echo $metrics | jq -r '.avgProcessingTime // "N/A"')ms"
fi
echo ""
}
# Monitor system resources
check_system_resources() {
echo -e "${GREEN}[System Resources]${NC}"
# CPU usage
cpu_usage=$(top -bn1 | grep "Cpu(s)" | awk '{print $2}' | cut -d'%' -f1)
echo -e "CPU usage: ${cpu_usage}%"
# Memory usage
mem_info=$(free -g | grep "Mem:")
mem_total=$(echo $mem_info | awk '{print $2}')
mem_used=$(echo $mem_info | awk '{print $3}')
mem_percent=$(awk "BEGIN {printf \"%.1f\", ($mem_used/$mem_total)*100}")
echo -e "Memory: ${mem_used}GB / ${mem_total}GB (${mem_percent}%)"
# Disk usage
disk_usage=$(df -h / | tail -1 | awk '{print $5}')
echo -e "Disk usage: $disk_usage"
echo ""
}
# Monitor database connections
check_db_connections() {
echo -e "${GREEN}[Database Connections]${NC}"
# CollectDB connections
collect_conn=$(PGPASSWORD=$DB_PASS psql -h $DB_HOST_COLLECT -U $DB_USER -d $DB_NAME -t -c "SELECT count(*) FROM pg_stat_activity WHERE datname='$DB_NAME';" 2>/dev/null || echo "N/A")
echo -e "CollectDB connections: $collect_conn"
# QueryDB connections
query_conn=$(PGPASSWORD=$DB_PASS psql -h $DB_HOST_QUERY -U $DB_USER -d $DB_NAME -t -c "SELECT count(*) FROM pg_stat_activity WHERE datname='$DB_NAME';" 2>/dev/null || echo "N/A")
echo -e "QueryDB connections: $query_conn"
echo ""
}
# Monitor WebSocket connections
check_websocket_status() {
echo -e "${GREEN}[WebSocket Status]${NC}"
ws_status=$(curl -s "http://$APP_HOST:$APP_PORT/api/websocket/status" 2>/dev/null)
if [ ! -z "$ws_status" ]; then
echo -e "Active sessions: $(echo $ws_status | jq -r '.activeSessions // "N/A"')"
echo -e "Active queries: $(echo $ws_status | jq -r '.activeQueries // "N/A"')"
echo -e "Messages processed: $(echo $ws_status | jq -r '.totalMessagesProcessed // "N/A"')"
else
echo -e "Unable to fetch WebSocket status."
fi
echo ""
}
# Performance optimization status
check_performance_status() {
echo -e "${GREEN}[Performance Optimization Status]${NC}"
perf_status=$(curl -s "http://$APP_HOST:$APP_PORT/api/v1/performance/status" 2>/dev/null)
if [ ! -z "$perf_status" ]; then
echo -e "Dynamic chunk size: $(echo $perf_status | jq -r '.currentChunkSize // "N/A"')"
echo -e "Cache hit rate: $(echo $perf_status | jq -r '.cacheHitRate // "N/A"')%"
echo -e "Memory usage: $(echo $perf_status | jq -r '.memoryUsage.usedPercentage // "N/A"')%"
else
echo -e "Unable to fetch performance status."
fi
echo ""
}
# Live log tail (run in a separate terminal)
tail_logs() {
echo -e "${GREEN}[Recent Logs]${NC}"
echo "Check the application log in a separate terminal:"
echo "tail -f /path/to/application.log"
echo ""
}
# Main loop
main() {
while true; do
clear_screen
print_header
check_app_status
check_system_resources
check_db_connections
check_websocket_status
check_performance_status
echo -e "${YELLOW}5초 후 갱신... (Ctrl+C로 종료)${NC}"
sleep 5
done
}
# Trap setup
trap 'echo -e "\n${RED}Monitoring stopped${NC}"; exit 0' INT TERM
# Run
main

파일 보기

@ -0,0 +1,50 @@
-- Quick invalid-geometry check
-- 1. Does t_vessel_tracks_5min actually contain invalid geometries?
SELECT
'5min table - invalid count' as check_type,
COUNT(*) as invalid_count
FROM signal.t_vessel_tracks_5min
WHERE track_geom IS NOT NULL
AND NOT public.ST_IsValid(track_geom);
-- 2. What are the invalid reasons?
SELECT
'5min table - invalid reasons' as check_type,
public.ST_IsValidReason(track_geom) as reason,
COUNT(*) as count
FROM signal.t_vessel_tracks_5min
WHERE track_geom IS NOT NULL
AND NOT public.ST_IsValid(track_geom)
GROUP BY public.ST_IsValidReason(track_geom);
-- 3. Inspect actual invalid samples
SELECT
'5min table - invalid samples' as check_type,
sig_src_cd,
target_id,
time_bucket,
public.ST_NPoints(track_geom) as point_count,
public.ST_AsText(track_geom) as wkt,
public.ST_IsValidReason(track_geom) as reason
FROM signal.t_vessel_tracks_5min
WHERE track_geom IS NOT NULL
AND NOT public.ST_IsValid(track_geom)
LIMIT 5;
-- 4. Check the vessel that raised the error (vessel 000001_###0000072)
SELECT
'Problem vessel check' as check_type,
sig_src_cd,
target_id,
time_bucket,
public.ST_NPoints(track_geom) as point_count,
public.ST_IsValid(track_geom) as is_valid,
public.ST_IsValidReason(track_geom) as reason,
public.ST_AsText(track_geom) as wkt
FROM signal.t_vessel_tracks_5min
WHERE sig_src_cd = '000001'
AND target_id LIKE '%0000072'
AND time_bucket >= CURRENT_TIMESTAMP - INTERVAL '1 day'
ORDER BY time_bucket DESC
LIMIT 10;

파일 보기

@ -0,0 +1,269 @@
-- ========================================
-- Immediate test against real data (no variables)
-- Recent data is selected automatically
-- ========================================
-- 1. Auto-select a vessel that has data within the last 24 hours
WITH recent_vessel AS (
SELECT
sig_src_cd,
target_id,
DATE_TRUNC('hour', MIN(time_bucket)) as hour_bucket
FROM signal.t_vessel_tracks_5min
WHERE time_bucket >= CURRENT_TIMESTAMP - INTERVAL '24 hours'
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
GROUP BY sig_src_cd, target_id, DATE_TRUNC('hour', time_bucket)
HAVING COUNT(*) >= 2
ORDER BY DATE_TRUNC('hour', MIN(time_bucket)) DESC
LIMIT 1
)
SELECT
'=== AUTO SELECTED VESSEL ===' as section,
sig_src_cd,
target_id,
hour_bucket,
hour_bucket + INTERVAL '1 hour' as hour_end
FROM recent_vessel;
-- 2. Check the selected vessel's 5-minute data
WITH recent_vessel AS (
SELECT
sig_src_cd,
target_id,
DATE_TRUNC('hour', MIN(time_bucket)) as hour_bucket
FROM signal.t_vessel_tracks_5min
WHERE time_bucket >= CURRENT_TIMESTAMP - INTERVAL '24 hours'
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
GROUP BY sig_src_cd, target_id, DATE_TRUNC('hour', time_bucket)
HAVING COUNT(*) >= 2
ORDER BY DATE_TRUNC('hour', MIN(time_bucket)) DESC
LIMIT 1
)
SELECT
'=== 5MIN DATA ===' as section,
t.sig_src_cd,
t.target_id,
t.time_bucket,
public.ST_NPoints(t.track_geom) as points,
public.ST_IsValid(t.track_geom) as is_valid,
LENGTH(public.ST_AsText(t.track_geom)) as wkt_length,
substring(public.ST_AsText(t.track_geom) from 'M \\((.+)\\)') as extracted_coords
FROM signal.t_vessel_tracks_5min t
INNER JOIN recent_vessel rv ON t.sig_src_cd = rv.sig_src_cd AND t.target_id = rv.target_id
WHERE t.time_bucket >= rv.hour_bucket
AND t.time_bucket < rv.hour_bucket + INTERVAL '1 hour'
AND t.track_geom IS NOT NULL
AND public.ST_NPoints(t.track_geom) > 0
ORDER BY t.time_bucket;
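-- (Illustrative) the regex 'M \\((.+)\\)' strips the WKT wrapper; assuming ST_AsText
-- renders 'LINESTRING M (126.5 35.1 100,126.6 35.2 160)', the captured group is
-- '126.5 35.1 100,126.6 35.2 160', ready for string_agg concatenation below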
-- 3. string_agg test
WITH recent_vessel AS (
SELECT
sig_src_cd,
target_id,
DATE_TRUNC('hour', MIN(time_bucket)) as hour_bucket
FROM signal.t_vessel_tracks_5min
WHERE time_bucket >= CURRENT_TIMESTAMP - INTERVAL '24 hours'
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
GROUP BY sig_src_cd, target_id, DATE_TRUNC('hour', time_bucket)
HAVING COUNT(*) >= 2
ORDER BY DATE_TRUNC('hour', MIN(time_bucket)) DESC
LIMIT 1
)
SELECT
'=== STRING_AGG RESULT ===' as section,
t.sig_src_cd,
t.target_id,
string_agg(
substring(public.ST_AsText(t.track_geom) from 'M \\((.+)\\)'),
','
ORDER BY t.time_bucket
) FILTER (WHERE t.track_geom IS NOT NULL) as all_coords,
COUNT(*) as track_count,
LENGTH(string_agg(
substring(public.ST_AsText(t.track_geom) from 'M \\((.+)\\)'),
','
ORDER BY t.time_bucket
) FILTER (WHERE t.track_geom IS NOT NULL)) as coords_total_length
FROM signal.t_vessel_tracks_5min t
INNER JOIN recent_vessel rv ON t.sig_src_cd = rv.sig_src_cd AND t.target_id = rv.target_id
WHERE t.time_bucket >= rv.hour_bucket
AND t.time_bucket < rv.hour_bucket + INTERVAL '1 hour'
AND t.track_geom IS NOT NULL
AND public.ST_NPoints(t.track_geom) > 0
GROUP BY t.sig_src_cd, t.target_id;
-- 4. Geometry creation test
WITH recent_vessel AS (
SELECT
sig_src_cd,
target_id,
DATE_TRUNC('hour', MIN(time_bucket)) as hour_bucket
FROM signal.t_vessel_tracks_5min
WHERE time_bucket >= CURRENT_TIMESTAMP - INTERVAL '24 hours'
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
GROUP BY sig_src_cd, target_id, DATE_TRUNC('hour', time_bucket)
HAVING COUNT(*) >= 2
ORDER BY DATE_TRUNC('hour', MIN(time_bucket)) DESC
LIMIT 1
),
merged_coords AS (
SELECT
t.sig_src_cd,
t.target_id,
string_agg(
substring(public.ST_AsText(t.track_geom) from 'M \\((.+)\\)'),
','
ORDER BY t.time_bucket
) FILTER (WHERE t.track_geom IS NOT NULL) as all_coords
FROM signal.t_vessel_tracks_5min t
INNER JOIN recent_vessel rv ON t.sig_src_cd = rv.sig_src_cd AND t.target_id = rv.target_id
WHERE t.time_bucket >= rv.hour_bucket
AND t.time_bucket < rv.hour_bucket + INTERVAL '1 hour'
AND t.track_geom IS NOT NULL
AND public.ST_NPoints(t.track_geom) > 0
GROUP BY t.sig_src_cd, t.target_id
)
SELECT
'=== GEOMETRY CREATION TEST ===' as section,
sig_src_cd,
target_id,
all_coords IS NOT NULL as has_coords,
LENGTH(all_coords) as coords_length,
public.ST_GeomFromText('LINESTRING M(' || all_coords || ')', 4326) as merged_geom,
public.ST_NPoints(public.ST_GeomFromText('LINESTRING M(' || all_coords || ')', 4326)) as merged_points,
public.ST_IsValid(public.ST_GeomFromText('LINESTRING M(' || all_coords || ')', 4326)) as is_valid
FROM merged_coords;
-- 5. Run the full aggregation query (identical to the real HourlyTrackProcessor)
WITH recent_vessel AS (
SELECT
sig_src_cd,
target_id,
DATE_TRUNC('hour', MIN(time_bucket)) as hour_bucket
FROM signal.t_vessel_tracks_5min
WHERE time_bucket >= CURRENT_TIMESTAMP - INTERVAL '24 hours'
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
GROUP BY sig_src_cd, target_id, DATE_TRUNC('hour', time_bucket)
HAVING COUNT(*) >= 2
ORDER BY DATE_TRUNC('hour', MIN(time_bucket)) DESC
LIMIT 1
),
ordered_tracks AS (
SELECT t.*
FROM signal.t_vessel_tracks_5min t
INNER JOIN recent_vessel rv ON t.sig_src_cd = rv.sig_src_cd AND t.target_id = rv.target_id
WHERE t.time_bucket >= rv.hour_bucket
AND t.time_bucket < rv.hour_bucket + INTERVAL '1 hour'
AND t.track_geom IS NOT NULL
AND public.ST_NPoints(t.track_geom) > 0
ORDER BY t.time_bucket
),
merged_coords AS (
SELECT
sig_src_cd,
target_id,
string_agg(
substring(public.ST_AsText(track_geom) from 'M \\((.+)\\)'),
','
ORDER BY time_bucket
) FILTER (WHERE track_geom IS NOT NULL) as all_coords
FROM ordered_tracks
GROUP BY sig_src_cd, target_id
),
merged_tracks AS (
SELECT
mc.sig_src_cd,
mc.target_id,
rv.hour_bucket as time_bucket,
public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom,
(SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
(SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
(SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
(SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
(SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
(SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
FROM merged_coords mc
CROSS JOIN recent_vessel rv
),
calculated_tracks AS (
SELECT
*,
public.ST_Length(merged_geom::geography) / 1852.0 as total_distance,
CASE
WHEN public.ST_NPoints(merged_geom) > 0 THEN
public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) -
public.ST_M(public.ST_PointN(merged_geom, 1))
ELSE
EXTRACT(EPOCH FROM
CAST(end_pos->>'time' AS timestamp) - CAST(start_pos->>'time' AS timestamp)
)
END as time_diff_seconds
FROM merged_tracks
)
SELECT
'=== FULL AGGREGATION RESULT ===' as section,
sig_src_cd,
target_id,
time_bucket,
public.ST_NPoints(merged_geom) as merged_points,
public.ST_IsValid(merged_geom) as is_valid,
total_distance,
CASE
WHEN time_diff_seconds > 0 THEN
CAST(LEAST((total_distance / (time_diff_seconds / 3600.0)), 9999.99) AS numeric(6,2))
ELSE 0
END as avg_speed,
max_speed,
total_points,
start_time,
end_time,
time_diff_seconds
FROM calculated_tracks;
-- 6. Check for likely error causes
WITH recent_vessel AS (
SELECT
sig_src_cd,
target_id,
DATE_TRUNC('hour', MIN(time_bucket)) as hour_bucket
FROM signal.t_vessel_tracks_5min
WHERE time_bucket >= CURRENT_TIMESTAMP - INTERVAL '24 hours'
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
GROUP BY sig_src_cd, target_id, DATE_TRUNC('hour', time_bucket)
HAVING COUNT(*) >= 2
ORDER BY DATE_TRUNC('hour', MIN(time_bucket)) DESC
LIMIT 1
)
SELECT
'=== ERROR CHECK ===' as section,
COUNT(*) as total_tracks,
COUNT(CASE WHEN track_geom IS NULL THEN 1 END) as null_geom_count,
COUNT(CASE WHEN NOT public.ST_IsValid(track_geom) THEN 1 END) as invalid_geom_count,
COUNT(CASE WHEN public.ST_NPoints(track_geom) = 0 THEN 1 END) as zero_points_count,
COUNT(CASE WHEN public.ST_NPoints(track_geom) = 1 THEN 1 END) as single_point_count,
COUNT(CASE WHEN
substring(public.ST_AsText(track_geom) from 'M \\((.+)\\)') IS NULL
THEN 1 END) as regex_fail_count
FROM signal.t_vessel_tracks_5min t
INNER JOIN recent_vessel rv ON t.sig_src_cd = rv.sig_src_cd AND t.target_id = rv.target_id
WHERE t.time_bucket >= rv.hour_bucket
AND t.time_bucket < rv.hour_bucket + INTERVAL '1 hour';
-- ========================================
-- How to use:
-- 1. Simply run the whole script
-- 2. A recent vessel is selected automatically
-- 3. Review the results section by section
--
-- If an error occurs, check:
-- - anomalous values in the "ERROR CHECK" section
-- - all_coords in "STRING_AGG RESULT"
-- - is_valid in "GEOMETRY CREATION TEST"
-- ========================================

288
scripts/run-load-test.sh Normal file
파일 보기

@ -0,0 +1,288 @@
#!/bin/bash
# Load-test runner for the vessel track aggregation system
# JMeter must be installed before running this script.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
JMETER_HOME="${JMETER_HOME:-/opt/jmeter}"
RESULTS_DIR="$PROJECT_ROOT/load-test-results"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
# Color definitions
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Helpers: log output
log_info() {
echo -e "${GREEN}[INFO]${NC} $1"
}
log_warn() {
echo -e "${YELLOW}[WARN]${NC} $1"
}
log_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# Verify the JMeter installation
check_jmeter() {
if [ ! -d "$JMETER_HOME" ]; then
log_error "JMeter is not installed. Set JMETER_HOME first."
exit 1
fi
if [ ! -f "$JMETER_HOME/bin/jmeter" ]; then
log_error "JMeter executable not found: $JMETER_HOME/bin/jmeter"
exit 1
fi
log_info "JMeter path: $JMETER_HOME"
}
# Create the results directory
create_results_dir() {
mkdir -p "$RESULTS_DIR/$TIMESTAMP"
log_info "Results directory created: $RESULTS_DIR/$TIMESTAMP"
}
# Start system monitoring
start_monitoring() {
log_info "Starting system monitoring..."
# Monitor CPU, memory and disk I/O usage
nohup vmstat 5 > "$RESULTS_DIR/$TIMESTAMP/vmstat.log" 2>&1 &
VMSTAT_PID=$!
nohup iostat -x 5 > "$RESULTS_DIR/$TIMESTAMP/iostat.log" 2>&1 &
IOSTAT_PID=$!
# Monitor database connections
nohup watch -n 5 "psql -h 10.26.252.48 -U mdauser -d mdadb -c 'SELECT count(*) FROM pg_stat_activity;'" > "$RESULTS_DIR/$TIMESTAMP/db_connections.log" 2>&1 &
DB_MON_PID=$!
echo "$VMSTAT_PID $IOSTAT_PID $DB_MON_PID" > "$RESULTS_DIR/$TIMESTAMP/monitoring.pids"
}
# Stop system monitoring
stop_monitoring() {
log_info "Stopping system monitoring..."
if [ -f "$RESULTS_DIR/$TIMESTAMP/monitoring.pids" ]; then
while read pid; do
kill $pid 2>/dev/null
done < "$RESULTS_DIR/$TIMESTAMP/monitoring.pids"
rm "$RESULTS_DIR/$TIMESTAMP/monitoring.pids"
fi
}
# Run a JMeter test
run_jmeter_test() {
local test_file=$1
local test_name=$(basename "$test_file" .jmx)
log_info "Running JMeter test: $test_name"
# Launch JMeter
"$JMETER_HOME/bin/jmeter" \
-n \
-t "$test_file" \
-l "$RESULTS_DIR/$TIMESTAMP/${test_name}-results.jtl" \
-e \
-o "$RESULTS_DIR/$TIMESTAMP/${test_name}-report" \
-Jjmeter.save.saveservice.output_format=csv \
-Jjmeter.save.saveservice.assertion_results_failure_message=true \
-Jjmeter.save.saveservice.data_type=true \
-Jjmeter.save.saveservice.label=true \
-Jjmeter.save.saveservice.response_code=true \
-Jjmeter.save.saveservice.response_data.on_error=true \
-Jjmeter.save.saveservice.response_message=true \
-Jjmeter.save.saveservice.successful=true \
-Jjmeter.save.saveservice.thread_name=true \
-Jjmeter.save.saveservice.time=true \
-Jjmeter.save.saveservice.connect_time=true \
-Jjmeter.save.saveservice.latency=true \
-Jjmeter.save.saveservice.bytes=true \
-Jjmeter.save.saveservice.sent_bytes=true \
-Jjmeter.save.saveservice.url=true
if [ $? -eq 0 ]; then
log_info "Test finished: $test_name"
log_info "Result file: $RESULTS_DIR/$TIMESTAMP/${test_name}-results.jtl"
log_info "HTML report: $RESULTS_DIR/$TIMESTAMP/${test_name}-report/index.html"
else
log_error "Test failed: $test_name"
return 1
fi
}
# WebSocket load test
run_websocket_test() {
log_info "Preparing WebSocket load test..."
# Run the WebSocket test with an embedded Python script
cat > "$RESULTS_DIR/$TIMESTAMP/websocket_load_test.py" << 'EOF'
import asyncio
import websockets
import json
import time
from datetime import datetime, timedelta
import statistics

class WebSocketLoadTester:
    def __init__(self, base_url, num_clients, queries_per_client):
        self.base_url = base_url
        self.num_clients = num_clients
        self.queries_per_client = queries_per_client
        self.metrics = {
            'total_queries': 0,
            'successful_queries': 0,
            'failed_queries': 0,
            'latencies': [],
            'throughput': []
        }

    async def client_session(self, client_id):
        async with websockets.connect(f"{self.base_url}/ws-tracks") as websocket:
            for query_id in range(self.queries_per_client):
                try:
                    # Build the query request
                    query = {
                        "startTime": (datetime.now() - timedelta(days=7)).isoformat(),
                        "endTime": datetime.now().isoformat(),
                        "viewport": {
                            "minLon": 124.0,
                            "maxLon": 132.0,
                            "minLat": 33.0,
                            "maxLat": 38.0
                        },
                        "chunkSize": 1000
                    }
                    start_time = time.time()
                    await websocket.send(json.dumps(query))
                    # Receive the response chunks
                    chunks_received = 0
                    while True:
                        response = await websocket.recv()
                        data = json.loads(response)
                        chunks_received += 1
                        if data.get('isLastChunk', False):
                            break
                    end_time = time.time()
                    latency = (end_time - start_time) * 1000  # ms
                    self.metrics['latencies'].append(latency)
                    self.metrics['successful_queries'] += 1
                    print(f"Client {client_id} - Query {query_id}: {latency:.2f}ms, {chunks_received} chunks")
                except Exception as e:
                    print(f"Client {client_id} - Query {query_id} failed: {str(e)}")
                    self.metrics['failed_queries'] += 1
                self.metrics['total_queries'] += 1
                await asyncio.sleep(1)  # delay between queries

    async def run_test(self):
        print(f"Starting WebSocket load test with {self.num_clients} clients...")
        start_time = time.time()
        # Run all clients concurrently
        tasks = []
        for i in range(self.num_clients):
            task = asyncio.create_task(self.client_session(i))
            tasks.append(task)
        await asyncio.gather(*tasks)
        end_time = time.time()
        total_duration = end_time - start_time
        # Analyze the results
        print("\n=== Load Test Results ===")
        print(f"Total duration: {total_duration:.2f}s")
        print(f"Total queries: {self.metrics['total_queries']}")
        print(f"Succeeded: {self.metrics['successful_queries']}")
        print(f"Failed: {self.metrics['failed_queries']}")
        if self.metrics['latencies']:
            print(f"Mean latency: {statistics.mean(self.metrics['latencies']):.2f}ms")
            print(f"Min latency: {min(self.metrics['latencies']):.2f}ms")
            print(f"Max latency: {max(self.metrics['latencies']):.2f}ms")
            print(f"Median latency: {statistics.median(self.metrics['latencies']):.2f}ms")
        print(f"Throughput: {self.metrics['total_queries'] / total_duration:.2f} queries/sec")

if __name__ == "__main__":
    tester = WebSocketLoadTester(
        base_url="ws://10.26.252.48:8090",
        num_clients=10,
        queries_per_client=5
    )
    asyncio.run(tester.run_test())
EOF
# Python WebSocket 테스트 실행
if command -v python3 &> /dev/null; then
python3 "$RESULTS_DIR/$TIMESTAMP/websocket_load_test.py" > "$RESULTS_DIR/$TIMESTAMP/websocket_test_results.log" 2>&1
else
log_warn "Python3가 설치되어 있지 않아 WebSocket 테스트를 건너뜁니다."
fi
}
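# Standalone usage sketch (assumes python3 with the 'websockets' package installed):
#   pip3 install websockets
#   python3 "$RESULTS_DIR/$TIMESTAMP/websocket_load_test.py"
# Scale the load by editing num_clients / queries_per_client in the generated script.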
# 메인 실행 함수
main() {
log_info "선박 궤적 집계 시스템 부하 테스트 시작"
log_info "타임스탬프: $TIMESTAMP"
# JMeter 확인
check_jmeter
# 결과 디렉토리 생성
create_results_dir
# 시스템 모니터링 시작
start_monitoring
# 애플리케이션 상태 확인
log_info "애플리케이션 상태 확인..."
curl -s "http://10.26.252.48:8090/actuator/health" > "$RESULTS_DIR/$TIMESTAMP/app_health_before.json"
# JMeter 테스트 실행
if [ -f "$PROJECT_ROOT/src/main/resources/jmeter/comprehensive-load-test.jmx" ]; then
run_jmeter_test "$PROJECT_ROOT/src/main/resources/jmeter/comprehensive-load-test.jmx"
fi
# WebSocket 테스트 실행
run_websocket_test
# 10분간 부하 테스트 실행
log_info "부하 테스트 진행 중... (10분)"
sleep 600
# 시스템 모니터링 중지
stop_monitoring
# 최종 애플리케이션 상태 확인
curl -s "http://10.26.252.48:8090/actuator/health" > "$RESULTS_DIR/$TIMESTAMP/app_health_after.json"
# 결과 요약
log_info "부하 테스트 완료!"
log_info "결과 디렉토리: $RESULTS_DIR/$TIMESTAMP"
# 간단한 결과 분석
if [ -f "$RESULTS_DIR/$TIMESTAMP/comprehensive-load-test-results.jtl" ]; then
log_info "JMeter 결과 요약:"
awk -F',' 'NR>1 {sum+=$2; count++} END {if (count > 0) print "평균 응답 시간: " sum/count " ms"}' "$RESULTS_DIR/$TIMESTAMP/comprehensive-load-test-results.jtl"
fi
}
# 스크립트 실행
main "$@"

파일 보기

@ -0,0 +1,190 @@
#!/bin/bash
# Query DB 서버에서 최적화된 실행 스크립트
# Rocky Linux 환경에 맞춰 조정됨
# Java 17 경로 명시적 지정
# 애플리케이션 경로
APP_HOME="/devdata/apps/bridge-db-monitoring"
JAR_FILE="$APP_HOME/vessel-batch-aggregation.jar"
# Java 17 경로
JAVA_HOME="/devdata/apps/jdk-17.0.8"
JAVA_BIN="$JAVA_HOME/bin/java"
# 로그 디렉토리
LOG_DIR="$APP_HOME/logs"
mkdir -p $LOG_DIR
echo "================================================"
echo "Vessel Batch Aggregation - Query Server Edition"
echo "Start Time: $(date)"
echo "================================================"
# 경로 확인
echo "Environment Check:"
echo "- App Home: $APP_HOME"
echo "- JAR File: $JAR_FILE"
echo "- Java Path: $JAVA_BIN"
echo "- Java Version: $($JAVA_BIN -version 2>&1 | head -1)"
# JAR 파일 존재 확인
if [ ! -f "$JAR_FILE" ]; then
echo "ERROR: JAR file not found at $JAR_FILE"
exit 1
fi
# Java 실행 파일 확인
if [ ! -x "$JAVA_BIN" ]; then
echo "ERROR: Java not found or not executable at $JAVA_BIN"
exit 1
fi
# 서버 정보 확인
echo ""
echo "Server Info:"
echo "- Hostname: $(hostname)"
echo "- CPU Cores: $(nproc)"
echo "- Total Memory: $(free -h | grep Mem | awk '{print $2}')"
echo "- PostgreSQL Version: $(psql --version 2>/dev/null | head -1 || echo 'PostgreSQL client not in PATH')"
# 환경 변수 설정 (localhost 최적화)
export SPRING_PROFILES_ACTIVE=prod
# Query DB는 원격(10.29.17.90), Batch Meta DB는 localhost로 오버라이드
export SPRING_DATASOURCE_QUERY_JDBC_URL="jdbc:postgresql://10.29.17.90:5432/mpcdb2?options=-csearch_path=signal,public&assumeMinServerVersion=12&reWriteBatchedInserts=true"
export SPRING_DATASOURCE_BATCH_JDBC_URL="jdbc:postgresql://localhost:5432/mdadb?currentSchema=public&assumeMinServerVersion=12&reWriteBatchedInserts=true"
# 서버 CPU 코어 수에 따른 병렬 처리 조정
CPU_CORES=$(nproc)
export VESSEL_BATCH_PARTITION_SIZE=$((CPU_CORES * 2))
export VESSEL_BATCH_BULK_INSERT_PARALLEL_THREADS=$((CPU_CORES / 2))
echo ""
echo "Optimized Settings:"
echo "- Partition Size: $VESSEL_BATCH_PARTITION_SIZE"
echo "- Parallel Threads: $VESSEL_BATCH_BULK_INSERT_PARALLEL_THREADS"
echo "- Query DB: localhost (optimized)"
echo "- Batch Meta DB: localhost (optimized)"
# JVM 옵션 (서버 메모리에 맞게 조정)
TOTAL_MEM=$(free -g | grep Mem | awk '{print $2}')
JVM_HEAP=$((TOTAL_MEM / 4)) # 전체 메모리의 25% 사용
# 최소 16GB, 최대 64GB로 제한
if [ $JVM_HEAP -lt 16 ]; then
JVM_HEAP=16
elif [ $JVM_HEAP -gt 64 ]; then
JVM_HEAP=64
fi
JAVA_OPTS="-Xms${JVM_HEAP}g -Xmx${JVM_HEAP}g \
-XX:+UseG1GC \
-XX:G1HeapRegionSize=32m \
-XX:MaxGCPauseMillis=200 \
-XX:InitiatingHeapOccupancyPercent=35 \
-XX:G1ReservePercent=15 \
-XX:+UseStringDeduplication \
-XX:+ParallelRefProcEnabled \
-XX:+ExplicitGCInvokesConcurrent \
-XX:ParallelGCThreads=$((CPU_CORES / 2)) \
-XX:ConcGCThreads=$((CPU_CORES / 4)) \
-XX:MaxMetaspaceSize=512m \
-XX:+HeapDumpOnOutOfMemoryError \
-XX:HeapDumpPath=$LOG_DIR/heapdump.hprof \
-Xlog:gc*:file=$LOG_DIR/gc.log:time,uptime,level,tags:filecount=5,filesize=100M \
-Dfile.encoding=UTF-8 \
-Duser.timezone=Asia/Seoul \
-Djava.security.egd=file:/dev/./urandom \
-Dspring.profiles.active=prod"
echo "- JVM Heap Size: ${JVM_HEAP}GB"
# 기존 프로세스 확인 및 종료
echo ""
echo "Checking for existing process..."
PID=$(pgrep -f "$JAR_FILE")
if [ ! -z "$PID" ]; then
echo "Stopping existing process (PID: $PID)..."
kill -15 $PID
# 프로세스 종료 대기 (최대 30초)
for i in {1..30}; do
if ! kill -0 $PID 2>/dev/null; then
echo "Process stopped successfully."
break
fi
if [ $i -eq 30 ]; then
echo "Force killing process..."
kill -9 $PID
fi
sleep 1
done
fi
# 작업 디렉토리로 이동
cd $APP_HOME
# 애플리케이션 실행 (nice로 우선순위 조정)
echo ""
echo "Starting application with reduced priority..."
echo "Command: nice -n 10 $JAVA_BIN $JAVA_OPTS -jar $JAR_FILE"
echo ""
# nohup으로 백그라운드 실행
nohup nice -n 10 $JAVA_BIN $JAVA_OPTS -jar $JAR_FILE \
> $LOG_DIR/app.log 2>&1 &
NEW_PID=$!
echo "Application started with PID: $NEW_PID"
# PID 파일 생성
echo $NEW_PID > $APP_HOME/vessel-batch.pid
# 시작 확인 (30초 대기)
echo "Waiting for application startup..."
STARTUP_SUCCESS=false
for i in {1..30}; do
if grep -q "Started SignalBatchApplication" $LOG_DIR/app.log 2>/dev/null; then
echo "✅ Application started successfully!"
STARTUP_SUCCESS=true
break
fi
echo -n "."
sleep 1
done
if [ "$STARTUP_SUCCESS" = false ]; then
echo ""
echo "⚠️ Application startup timeout. Check logs for errors."
echo "Log file: $LOG_DIR/app.log"
tail -20 $LOG_DIR/app.log
fi
echo ""
echo "================================================"
echo "Deployment Complete!"
echo "- PID: $NEW_PID"
echo "- PID File: $APP_HOME/vessel-batch.pid"
echo "- Log: $LOG_DIR/app.log"
echo "- Monitor: tail -f $LOG_DIR/app.log"
echo "================================================"
# 초기 상태 확인
sleep 5
echo ""
echo "Initial Status Check:"
curl -s http://localhost:8090/actuator/health 2>/dev/null | python3 -m json.tool || echo "Health endpoint not yet available"
# 리소스 사용량 표시
echo ""
echo "Resource Usage:"
ps -p $NEW_PID -o pid,%cpu,%mem,etime,cmd
# 빠른 명령어 안내
echo ""
echo "Useful Commands:"
echo "- Stop: kill -15 \$(cat $APP_HOME/vessel-batch.pid)"
echo "- Logs: tail -f $LOG_DIR/app.log"
echo "- Status: curl http://localhost:8090/actuator/health"
echo "- Monitor: $APP_HOME/monitor-query-server.sh"

파일 보기

@ -0,0 +1,184 @@
#!/bin/bash
# Query 전용 서버 실행 스크립트 (10.29.17.90)
# 배치 Job 없이 조회 API만 제공
# Java 17 경로 명시적 지정
# 애플리케이션 경로
APP_HOME="/devdata/apps/bridge-db-monitoring"
JAR_FILE="$APP_HOME/vessel-batch-aggregation.jar"
# Java 17 경로
JAVA_HOME="/devdata/apps/jdk-17.0.8"
JAVA_BIN="$JAVA_HOME/bin/java"
# 로그 디렉토리
LOG_DIR="$APP_HOME/logs"
mkdir -p $LOG_DIR
echo "================================================"
echo "Vessel Query API Server - Query Only Mode"
echo "Start Time: $(date)"
echo "================================================"
# 경로 확인
echo "Environment Check:"
echo "- App Home: $APP_HOME"
echo "- JAR File: $JAR_FILE"
echo "- Java Path: $JAVA_BIN"
echo "- Java Version: $($JAVA_BIN -version 2>&1 | head -1)"
# JAR 파일 존재 확인
if [ ! -f "$JAR_FILE" ]; then
echo "ERROR: JAR file not found at $JAR_FILE"
exit 1
fi
# Java 실행 파일 확인
if [ ! -x "$JAVA_BIN" ]; then
echo "ERROR: Java not found or not executable at $JAVA_BIN"
exit 1
fi
# 서버 정보 확인
echo ""
echo "Server Info:"
echo "- Hostname: $(hostname)"
echo "- CPU Cores: $(nproc)"
echo "- Total Memory: $(free -h | grep Mem | awk '{print $2}')"
echo "- PostgreSQL Version: $(psql --version 2>/dev/null | head -1 || echo 'PostgreSQL client not in PATH')"
# 환경 변수 설정 (query 프로파일 - 배치 비활성화!)
export SPRING_PROFILES_ACTIVE=query
echo ""
echo "Profile Settings:"
echo "- Active Profile: QUERY (Batch Jobs Disabled)"
echo "- Query DB: 10.29.17.90:5432/mpcdb2 (Local DB)"
echo "- Batch Jobs: DISABLED"
echo "- Scheduler: DISABLED"
# JVM 옵션 (서버 메모리에 맞게 조정)
TOTAL_MEM=$(free -g | grep Mem | awk '{print $2}')
JVM_HEAP=$((TOTAL_MEM / 8)) # 전체 메모리의 12.5% 사용 (배치 없으므로 적게)
# 최소 4GB, 최대 16GB로 제한
if [ $JVM_HEAP -lt 4 ]; then
JVM_HEAP=4
elif [ $JVM_HEAP -gt 16 ]; then
JVM_HEAP=16
fi
CPU_CORES=$(nproc)
JAVA_OPTS="-Xms${JVM_HEAP}g -Xmx${JVM_HEAP}g \
-XX:+UseG1GC \
-XX:G1HeapRegionSize=32m \
-XX:MaxGCPauseMillis=200 \
-XX:InitiatingHeapOccupancyPercent=35 \
-XX:G1ReservePercent=15 \
-XX:+UseStringDeduplication \
-XX:+ParallelRefProcEnabled \
-XX:+ExplicitGCInvokesConcurrent \
-XX:ParallelGCThreads=$((CPU_CORES / 2)) \
-XX:ConcGCThreads=$((CPU_CORES / 4)) \
-XX:MaxMetaspaceSize=512m \
-XX:+HeapDumpOnOutOfMemoryError \
-XX:HeapDumpPath=$LOG_DIR/heapdump.hprof \
-Xlog:gc*:file=$LOG_DIR/gc.log:time,uptime,level,tags:filecount=5,filesize=100M \
-Dfile.encoding=UTF-8 \
-Duser.timezone=Asia/Seoul \
-Djava.security.egd=file:/dev/./urandom \
-Dspring.profiles.active=query"
echo "- JVM Heap Size: ${JVM_HEAP}GB"
# 기존 프로세스 확인 및 종료
echo ""
echo "Checking for existing process..."
PID=$(pgrep -f "$JAR_FILE")
if [ ! -z "$PID" ]; then
echo "Stopping existing process (PID: $PID)..."
kill -15 $PID
# 프로세스 종료 대기 (최대 30초)
for i in {1..30}; do
if ! kill -0 $PID 2>/dev/null; then
echo "Process stopped successfully."
break
fi
if [ $i -eq 30 ]; then
echo "Force killing process..."
kill -9 $PID
fi
sleep 1
done
fi
# 작업 디렉토리로 이동
cd $APP_HOME
# 애플리케이션 실행
echo ""
echo "Starting application in QUERY-ONLY mode..."
echo "Command: $JAVA_BIN $JAVA_OPTS -jar $JAR_FILE"
echo ""
# nohup으로 백그라운드 실행
nohup $JAVA_BIN $JAVA_OPTS -jar $JAR_FILE \
> $LOG_DIR/app.log 2>&1 &
NEW_PID=$!
echo "Application started with PID: $NEW_PID"
# PID 파일 생성
echo $NEW_PID > $APP_HOME/vessel-query.pid
# 시작 확인 (30초 대기)
echo "Waiting for application startup..."
STARTUP_SUCCESS=false
for i in {1..30}; do
if grep -q "Started SignalBatchApplication" $LOG_DIR/app.log 2>/dev/null; then
echo "✅ Application started successfully!"
STARTUP_SUCCESS=true
break
fi
echo -n "."
sleep 1
done
if [ "$STARTUP_SUCCESS" = false ]; then
echo ""
echo "⚠️ Application startup timeout. Check logs for errors."
echo "Log file: $LOG_DIR/app.log"
tail -20 $LOG_DIR/app.log
fi
echo ""
echo "================================================"
echo "Deployment Complete!"
echo "- Mode: QUERY ONLY (No Batch Jobs)"
echo "- PID: $NEW_PID"
echo "- PID File: $APP_HOME/vessel-query.pid"
echo "- Log: $LOG_DIR/app.log"
echo "- Monitor: tail -f $LOG_DIR/app.log"
echo "================================================"
# 초기 상태 확인
sleep 5
echo ""
echo "Initial Status Check:"
curl -s http://localhost:8090/actuator/health 2>/dev/null | python3 -m json.tool || echo "Health endpoint not yet available"
# 리소스 사용량 표시
echo ""
echo "Resource Usage:"
ps -p $NEW_PID -o pid,%cpu,%mem,etime,cmd
# 빠른 명령어 안내
echo ""
echo "Useful Commands:"
echo "- Stop: kill -15 \$(cat $APP_HOME/vessel-query.pid)"
echo "- Logs: tail -f $LOG_DIR/app.log"
echo "- Status: curl http://localhost:8090/actuator/health"
echo "- API Test: curl http://localhost:8090/api/gis/areas"

40
scripts/server-logs.bat Normal file
파일 보기

@ -0,0 +1,40 @@
@echo off
chcp 65001 >nul
REM ===============================================
REM Signal Batch Server Log Viewer
REM ===============================================
setlocal
set SERVER_IP=10.26.252.48
set SERVER_USER=root
set SERVER_PATH=/devdata/apps/bridge-db-monitoring
echo ===============================================
echo Signal Batch Server Log Viewer
echo ===============================================
echo Server: %SERVER_IP%
echo Time: %date% %time%
echo.
if "%1"=="tail" (
echo Starting real-time log monitoring... ^(Ctrl+C to exit^)
ssh %SERVER_USER%@%SERVER_IP% "cd %SERVER_PATH% && ./vessel-batch-control.sh logs"
) else if "%1"=="errors" (
echo Retrieving recent error logs...
ssh %SERVER_USER%@%SERVER_IP% "cd %SERVER_PATH% && ./vessel-batch-control.sh errors"
) else if "%1"=="stats" (
echo Retrieving performance statistics...
ssh %SERVER_USER%@%SERVER_IP% "cd %SERVER_PATH% && ./vessel-batch-control.sh stats"
) else (
echo Usage:
echo server-logs.bat - Show recent 50 lines
echo server-logs.bat tail - Real-time log monitoring
echo server-logs.bat errors - Show error logs only
echo server-logs.bat stats - Show performance statistics
echo.
echo Recent 50 lines of log:
ssh %SERVER_USER%@%SERVER_IP% "tail -50 %SERVER_PATH%/logs/app.log 2>/dev/null || echo 'Log file not available'"
)
endlocal

64
scripts/server-status.bat Normal file
파일 보기

@ -0,0 +1,64 @@
@echo off
chcp 65001 >nul
REM ===============================================
REM Signal Batch Server Status Checker
REM ===============================================
setlocal enabledelayedexpansion
REM Configuration
set "SERVER_IP=10.26.252.48"
set "SERVER_USER=root"
set "SERVER_PATH=/devdata/apps/bridge-db-monitoring"
echo ===============================================
echo Signal Batch Server Status
echo ===============================================
echo [INFO] Query Time: !date! !time!
echo [INFO] Target Server: !SERVER_IP!
REM 1. Server Connection Test
echo.
echo =============== Server Connection Test ===============
ssh !SERVER_USER!@!SERVER_IP! "echo 'Server connection OK'" 2>nul
set CONNECTION_RESULT=!ERRORLEVEL!
if !CONNECTION_RESULT! neq 0 (
echo [ERROR] Server connection failed
exit /b 1
)
echo [INFO] Server connection successful
REM 2. Application Status
echo.
echo =============== Application Status ===============
ssh !SERVER_USER!@!SERVER_IP! "cd !SERVER_PATH! && ./vessel-batch-control.sh status"
REM 3. Additional Status Information
echo.
echo =============== Additional Status Information ===============
REM Health Check
echo [INFO] Health Check:
ssh !SERVER_USER!@!SERVER_IP! "curl -s http://localhost:8090/actuator/health --max-time 5 2>/dev/null | python -m json.tool 2>/dev/null || echo 'Health endpoint not available'"
echo.
REM Metrics Information
echo [INFO] Metrics Information:
ssh !SERVER_USER!@!SERVER_IP! "curl -s http://localhost:8090/actuator/metrics --max-time 5 2>/dev/null | head -20 || echo 'Metrics endpoint not available'"
echo.
REM Disk Usage
echo [INFO] Disk Usage:
ssh !SERVER_USER!@!SERVER_IP! "df -h !SERVER_PATH!"
echo.
REM Memory Usage
echo [INFO] Memory Usage:
ssh !SERVER_USER!@!SERVER_IP! "free -h"
echo.
REM Recent Log Check
echo [INFO] Recent Logs (last 10 lines):
ssh !SERVER_USER!@!SERVER_IP! "tail -10 !SERVER_PATH!/logs/app.log 2>/dev/null || echo 'Log file not available'"
endlocal
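REM Drill-down sketch (assumes the standard Micrometer metric name jvm.memory.used):
REM   ssh %SERVER_USER%@%SERVER_IP% "curl -s http://localhost:8090/actuator/metrics/jvm.memory.used"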

59
scripts/setup-ssh-key.bat Normal file
파일 보기

@ -0,0 +1,59 @@
@echo off
chcp 65001 >nul
setlocal enabledelayedexpansion
echo ===============================================
echo SSH Key Setup for Server Deployment
echo ===============================================
set "SERVER_IP=10.26.252.51"
set "SERVER_USER=root"
echo [INFO] Setting up SSH key authentication for %SERVER_USER%@%SERVER_IP%
echo.
REM Check if SSH key exists
if not exist "%USERPROFILE%\.ssh\id_rsa.pub" (
echo [INFO] SSH key not found. Generating new SSH key...
ssh-keygen -t rsa -b 4096 -f "%USERPROFILE%\.ssh\id_rsa" -N ""
if !ERRORLEVEL! neq 0 (
echo [ERROR] Failed to generate SSH key
pause
exit /b 1
)
echo [SUCCESS] SSH key generated
)
echo.
echo [INFO] Copying SSH key to server...
echo [INFO] You will be prompted for the server password
echo.
type "%USERPROFILE%\.ssh\id_rsa.pub" | ssh %SERVER_USER%@%SERVER_IP% "mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys && echo '[SUCCESS] SSH key installed'"
if !ERRORLEVEL! neq 0 (
echo [ERROR] Failed to copy SSH key
echo.
echo Please ensure:
echo - Server is accessible at %SERVER_IP%
echo - You have the correct password for %SERVER_USER%
echo - SSH service is running on the server
pause
exit /b 1
)
echo.
echo ===============================================
echo [SUCCESS] SSH Key Setup Complete!
echo ===============================================
echo.
echo Testing connection...
ssh -o BatchMode=yes -o ConnectTimeout=10 %SERVER_USER%@%SERVER_IP% "echo '[SUCCESS] SSH key authentication working!'"
if !ERRORLEVEL! equ 0 (
echo.
echo You can now run deploy-only.bat without password
) else (
echo [WARN] Key authentication test failed
echo Please try running this script again
)
pause
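REM Manual verification sketch (the same non-interactive check this script runs):
REM   ssh -o BatchMode=yes -o ConnectTimeout=10 root@10.26.252.51 "echo ok"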

파일 보기

@ -0,0 +1,67 @@
-- 실행 중인(STARTED) 배치 Job과 Step을 강제 종료
-- 주의: 실제로 실행 중인 프로세스를 종료하지는 않습니다.
-- DB 상태만 변경하므로, 애플리케이션을 먼저 중지한 후 사용하세요.
-- 1. 현재 실행 중인 Job 확인
SELECT
'=== RUNNING JOBS ===' as status,
JOB_EXECUTION_ID,
JOB_INSTANCE_ID,
START_TIME,
STATUS,
(SELECT JOB_NAME FROM BATCH_JOB_INSTANCE WHERE JOB_INSTANCE_ID = bje.JOB_INSTANCE_ID) as JOB_NAME
FROM BATCH_JOB_EXECUTION bje
WHERE STATUS IN ('STARTED', 'STARTING', 'STOPPING')
ORDER BY START_TIME DESC;
-- 2. 실행 중인 Step 확인
SELECT
'=== RUNNING STEPS ===' as status,
bse.STEP_EXECUTION_ID,
bse.JOB_EXECUTION_ID,
bse.STEP_NAME,
bse.STATUS,
bse.START_TIME
FROM BATCH_STEP_EXECUTION bse
WHERE STATUS IN ('STARTED', 'STARTING', 'STOPPING')
ORDER BY START_TIME DESC;
-- 3. 실행 중인 Step을 STOPPED로 변경
UPDATE BATCH_STEP_EXECUTION
SET
STATUS = 'STOPPED',
EXIT_CODE = 'STOPPED',
EXIT_MESSAGE = 'Manually stopped - Original status: ' || STATUS,
END_TIME = CURRENT_TIMESTAMP,
LAST_UPDATED = CURRENT_TIMESTAMP
WHERE STATUS IN ('STARTED', 'STARTING', 'STOPPING');
-- 4. 실행 중인 Job을 STOPPED로 변경
UPDATE BATCH_JOB_EXECUTION
SET
STATUS = 'STOPPED',
EXIT_CODE = 'STOPPED',
EXIT_MESSAGE = 'Manually stopped - Original status: ' || STATUS,
END_TIME = CURRENT_TIMESTAMP,
LAST_UPDATED = CURRENT_TIMESTAMP
WHERE STATUS IN ('STARTED', 'STARTING', 'STOPPING');
-- 5. 결과 확인
SELECT
'=== AFTER STOP ===' as status,
COUNT(*) as running_jobs
FROM BATCH_JOB_EXECUTION
WHERE STATUS IN ('STARTED', 'STARTING', 'STOPPING');
SELECT
'=== STOPPED JOBS ===' as status,
JOB_EXECUTION_ID,
JOB_INSTANCE_ID,
START_TIME,
END_TIME,
STATUS,
EXIT_CODE
FROM BATCH_JOB_EXECUTION
WHERE STATUS = 'STOPPED'
ORDER BY JOB_EXECUTION_ID DESC
LIMIT 10;
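-- Invocation sketch (file name assumed; run against the batch metadata DB,
-- with the application stopped first as noted above):
--   psql -h localhost -d mdadb -v ON_ERROR_STOP=1 -f stop-running-batch-jobs.sql
-- Afterwards, the running_jobs count from section 5 should be 0.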

170
scripts/sync-nexus.sh Normal file
파일 보기

@ -0,0 +1,170 @@
#!/bin/bash
# =============================================================================
# sync-nexus.sh - 로컬 Maven 의존성을 Nexus에 동기화
#
# 사용법:
# ./scripts/sync-nexus.sh # 실제 업로드
# ./scripts/sync-nexus.sh --dry-run # 업로드 대상만 확인
# =============================================================================
set -eo pipefail
# --- SDKMAN 초기화 (set -u 전에 실행) ---
if [ -f "$HOME/.sdkman/bin/sdkman-init.sh" ]; then
source "$HOME/.sdkman/bin/sdkman-init.sh" 2>/dev/null || true
fi
# --- 설정 ---
NEXUS_URL="http://10.26.252.39:8081"
REPO_ID="mda-backend-repository"
NEXUS_USER="admin"
NEXUS_PASS="8932"
LOCAL_REPO="$HOME/.m2/repository"
# --- 옵션 파싱 ---
DRY_RUN=false
if [[ "${1:-}" == "--dry-run" ]]; then
DRY_RUN=true
echo "=== DRY RUN 모드 (업로드하지 않음) ==="
fi
# --- 카운터 ---
TOTAL=0
SKIPPED=0
UPLOADED=0
FAILED=0
# Nexus에 아티팩트 존재 여부 확인 (HTTP HEAD로 .pom 파일 체크)
check_exists() {
local group_path=$1
local artifact_id=$2
local version=$3
local pom_url="${NEXUS_URL}/repository/${REPO_ID}/${group_path}/${artifact_id}/${version}/${artifact_id}-${version}.pom"
local http_code
http_code=$(curl -s -o /dev/null -w "%{http_code}" -u "${NEXUS_USER}:${NEXUS_PASS}" --connect-timeout 5 "$pom_url" < /dev/null)
[[ "$http_code" == "200" ]]
}
# 파일 업로드 (HTTP PUT)
upload_file() {
local file_path=$1
local remote_path=$2
local url="${NEXUS_URL}/repository/${REPO_ID}/${remote_path}"
if [ ! -f "$file_path" ]; then
return 1
fi
local http_code
http_code=$(curl -s -o /dev/null -w "%{http_code}" -u "${NEXUS_USER}:${NEXUS_PASS}" --upload-file "$file_path" --connect-timeout 10 --max-time 120 "$url" < /dev/null)
[[ "$http_code" == "201" || "$http_code" == "200" ]]
}
# 아티팩트 업로드 (pom + jar + 기타)
upload_artifact() {
local group_id=$1
local artifact_id=$2
local version=$3
local packaging=$4
local group_path
group_path=$(echo "$group_id" | tr '.' '/')
local base_dir="${LOCAL_REPO}/${group_path}/${artifact_id}/${version}"
local base_name="${artifact_id}-${version}"
local remote_base="${group_path}/${artifact_id}/${version}"
local success=true
# POM 업로드 (필수)
local pom_file="${base_dir}/${base_name}.pom"
if [ -f "$pom_file" ]; then
if upload_file "$pom_file" "${remote_base}/${base_name}.pom"; then
:
else
echo " [FAIL] POM 업로드 실패"
success=false
fi
fi
# JAR 업로드 (pom 패키징이 아닌 경우)
if [[ "$packaging" != "pom" ]]; then
local jar_file="${base_dir}/${base_name}.${packaging}"
if [ -f "$jar_file" ]; then
if upload_file "$jar_file" "${remote_base}/${base_name}.${packaging}"; then
:
else
echo " [FAIL] ${packaging} 업로드 실패"
success=false
fi
fi
fi
$success
}
echo ""
echo "=== Nexus 동기화 시작 ==="
echo " Nexus: ${NEXUS_URL}/repository/${REPO_ID}"
echo " 로컬: ${LOCAL_REPO}"
echo ""
# Nexus 연결 확인
if ! curl -sf -o /dev/null -u "${NEXUS_USER}:${NEXUS_PASS}" --connect-timeout 5 "${NEXUS_URL}/service/rest/v1/repositories" 2>/dev/null; then
echo "[ERROR] Nexus(${NEXUS_URL})에 연결할 수 없습니다."
exit 1
fi
echo "[OK] Nexus 연결 확인"
echo ""
# Maven dependency:list로 GAV 목록 추출
echo "의존성 목록 추출 중..."
DEP_LIST=$(mvn dependency:list -DoutputAbsoluteArtifactFilename=true 2>/dev/null | grep "^\[INFO\] " | sed 's/^\[INFO\] *//' | sed 's/ -- .*//')
echo ""
echo "--- 동기화 진행 ---"
while IFS= read -r line; do
# 형식: groupId:artifactId:packaging:version:scope:/path/to/file
IFS=':' read -r group_id artifact_id packaging version scope rest <<< "$line"
if [[ -z "$group_id" || -z "$artifact_id" || -z "$version" ]]; then
continue
fi
TOTAL=$((TOTAL + 1))
local_group_path=$(echo "$group_id" | tr '.' '/')
# Nexus 존재 여부 확인
if check_exists "$local_group_path" "$artifact_id" "$version"; then
SKIPPED=$((SKIPPED + 1))
continue
fi
# 신규 아티팩트 발견
echo "[NEW] ${group_id}:${artifact_id}:${version} (${packaging})"
if $DRY_RUN; then
UPLOADED=$((UPLOADED + 1))
else
if upload_artifact "$group_id" "$artifact_id" "$version" "$packaging"; then
echo " -> 업로드 완료"
UPLOADED=$((UPLOADED + 1))
else
echo " -> 업로드 실패"
FAILED=$((FAILED + 1))
fi
fi
done <<< "$DEP_LIST"
echo ""
echo "=== 동기화 완료 ==="
echo " 전체: ${TOTAL}"
echo " 스킵 (이미 존재): ${SKIPPED}"
if $DRY_RUN; then
echo " 업로드 대상: ${UPLOADED}"
else
echo " 업로드 성공: ${UPLOADED}"
echo " 업로드 실패: ${FAILED}"
fi
echo ""

파일 보기

@ -0,0 +1,135 @@
-- t_abnormal_tracks 테스트용 INSERT 쿼리
-- PostGIS ST_GeomFromText 함수 테스트
-- 1. 기본 테스트 (track_geom 컬럼 사용)
INSERT INTO signal.t_abnormal_tracks (
sig_src_cd,
target_id,
time_bucket,
track_geom,
abnormal_type,
abnormal_reason,
distance_nm,
avg_speed,
max_speed,
point_count,
source_table
) VALUES (
'AIS', -- sig_src_cd
'TEST_VESSEL_001', -- target_id
'2025-10-10 12:00:00'::timestamp, -- time_bucket
ST_GeomFromText('LINESTRING M(126.0 37.0 1728547200, 126.1 37.1 1728547260)', 4326), -- track_geom (LineString M 타입)
'EXCESSIVE_SPEED', -- abnormal_type
'{"reason": "Speed exceeds 200 knots", "detected_speed": 250.5}'::jsonb, -- abnormal_reason
15.5, -- distance_nm
180.3, -- avg_speed
250.5, -- max_speed
10, -- point_count
'hourly' -- source_table
)
ON CONFLICT (sig_src_cd, target_id, time_bucket, source_table)
DO UPDATE SET
track_geom = EXCLUDED.track_geom,
abnormal_type = EXCLUDED.abnormal_type,
abnormal_reason = EXCLUDED.abnormal_reason,
distance_nm = EXCLUDED.distance_nm,
avg_speed = EXCLUDED.avg_speed,
max_speed = EXCLUDED.max_speed,
point_count = EXCLUDED.point_count,
detected_at = NOW();
-- 2. track_geom_v2 컬럼을 사용하는 경우
INSERT INTO signal.t_abnormal_tracks (
sig_src_cd,
target_id,
time_bucket,
track_geom_v2,
abnormal_type,
abnormal_reason,
distance_nm,
avg_speed,
max_speed,
point_count,
source_table
) VALUES (
'LRIT', -- sig_src_cd
'TEST_VESSEL_002', -- target_id
'2025-10-10 13:00:00'::timestamp, -- time_bucket
ST_GeomFromText('LINESTRING M(127.0 38.0 1728550800, 127.2 38.2 1728550860, 127.4 38.4 1728550920)', 4326), -- track_geom_v2
'UNREALISTIC_DISTANCE', -- abnormal_type
'{"reason": "Distance too large for time interval", "distance_nm": 120.0, "time_interval_minutes": 5}'::jsonb, -- abnormal_reason
120.0, -- distance_nm
1440.0, -- avg_speed (120nm / 5min = 1440 knots)
1500.0, -- max_speed
3, -- point_count
'5min' -- source_table
)
ON CONFLICT (sig_src_cd, target_id, time_bucket, source_table)
DO UPDATE SET
track_geom_v2 = EXCLUDED.track_geom_v2,
abnormal_type = EXCLUDED.abnormal_type,
abnormal_reason = EXCLUDED.abnormal_reason,
distance_nm = EXCLUDED.distance_nm,
avg_speed = EXCLUDED.avg_speed,
max_speed = EXCLUDED.max_speed,
point_count = EXCLUDED.point_count,
detected_at = NOW();
-- 3. public 스키마를 명시적으로 지정한 버전
INSERT INTO signal.t_abnormal_tracks (
sig_src_cd,
target_id,
time_bucket,
track_geom,
abnormal_type,
abnormal_reason,
distance_nm,
avg_speed,
max_speed,
point_count,
source_table
) VALUES (
'VPASS', -- sig_src_cd
'TEST_VESSEL_003', -- target_id
'2025-10-10 14:00:00'::timestamp, -- time_bucket
public.ST_GeomFromText('LINESTRING M(128.0 36.0 1728554400, 128.1 36.1 1728554460)', 4326), -- public 스키마 명시
'SUDDEN_DIRECTION_CHANGE', -- abnormal_type
'{"reason": "Unrealistic turn angle", "angle_degrees": 175}'::jsonb, -- abnormal_reason
8.5, -- distance_nm
102.0, -- avg_speed
120.0, -- max_speed
2, -- point_count
'hourly' -- source_table
)
ON CONFLICT (sig_src_cd, target_id, time_bucket, source_table)
DO UPDATE SET
track_geom = EXCLUDED.track_geom,
abnormal_type = EXCLUDED.abnormal_type,
abnormal_reason = EXCLUDED.abnormal_reason,
distance_nm = EXCLUDED.distance_nm,
avg_speed = EXCLUDED.avg_speed,
max_speed = EXCLUDED.max_speed,
point_count = EXCLUDED.point_count,
detected_at = NOW();
-- 4. 검증 쿼리
SELECT
sig_src_cd,
target_id,
time_bucket,
abnormal_type,
abnormal_reason,
distance_nm,
avg_speed,
max_speed,
point_count,
source_table,
ST_AsText(track_geom) as track_geom_wkt,
ST_AsText(track_geom_v2) as track_geom_v2_wkt,
detected_at
FROM signal.t_abnormal_tracks
WHERE target_id LIKE 'TEST_VESSEL_%'
ORDER BY time_bucket DESC;
-- 5. 정리 (테스트 데이터 삭제)
-- DELETE FROM signal.t_abnormal_tracks WHERE target_id LIKE 'TEST_VESSEL_%';
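-- M-value sketch: the measure dimension in the LINESTRING M WKT above carries
-- Unix epoch seconds; one way to derive such a value:
--   SELECT EXTRACT(EPOCH FROM now())::bigint AS m_value;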

파일 보기

@ -0,0 +1,496 @@
-- ========================================
-- 일별 집계 쿼리 검증 스크립트
-- CAST 및 타입 호환성 테스트
-- ========================================
-- 1. 임시 테스트 테이블 생성
DROP TABLE IF EXISTS test_vessel_tracks_hourly_for_daily CASCADE;
DROP TABLE IF EXISTS test_vessel_tracks_daily CASCADE;
CREATE TABLE test_vessel_tracks_hourly_for_daily (
sig_src_cd VARCHAR(10),
target_id VARCHAR(20),
time_bucket TIMESTAMP,
track_geom geometry(LineStringM, 4326),
distance_nm NUMERIC(10,2),
avg_speed NUMERIC(6,2),
max_speed NUMERIC(6,2),
point_count INTEGER,
start_position JSONB,
end_position JSONB,
PRIMARY KEY (sig_src_cd, target_id, time_bucket)
);
CREATE TABLE test_vessel_tracks_daily (
sig_src_cd VARCHAR(10),
target_id VARCHAR(20),
time_bucket TIMESTAMP,
track_geom geometry(LineStringM, 4326),
distance_nm NUMERIC(10,2),
avg_speed NUMERIC(6,2),
max_speed NUMERIC(6,2),
point_count INTEGER,
start_position JSONB,
end_position JSONB,
PRIMARY KEY (sig_src_cd, target_id, time_bucket)
);
-- 2. 샘플 데이터 삽입 (하루치 시간별 데이터)
-- 시나리오 1: 정상 이동 선박 (24시간 중 일부)
INSERT INTO test_vessel_tracks_hourly_for_daily VALUES
(
'000001',
'TEST001',
'2025-01-07 00:00:00',
public.ST_GeomFromText('LINESTRING M(126.5 37.5 1736179200, 126.52 37.52 1736182800)', 4326),
5.5,
10.5,
12.0,
12,
'{"lat": 37.5, "lon": 126.5, "time": "2025-01-07 00:00:00", "sog": 10.5}'::jsonb,
'{"lat": 37.52, "lon": 126.52, "time": "2025-01-07 01:00:00", "sog": 11.0}'::jsonb
),
(
'000001',
'TEST001',
'2025-01-07 01:00:00',
public.ST_GeomFromText('LINESTRING M(126.52 37.52 1736182800, 126.54 37.54 1736186400)', 4326),
6.0,
11.0,
13.0,
12,
'{"lat": 37.52, "lon": 126.52, "time": "2025-01-07 01:00:00", "sog": 11.0}'::jsonb,
'{"lat": 37.54, "lon": 126.54, "time": "2025-01-07 02:00:00", "sog": 12.0}'::jsonb
),
(
'000001',
'TEST001',
'2025-01-07 02:00:00',
public.ST_GeomFromText('LINESTRING M(126.54 37.54 1736186400, 126.56 37.56 1736190000)', 4326),
5.8,
10.8,
12.5,
12,
'{"lat": 37.54, "lon": 126.54, "time": "2025-01-07 02:00:00", "sog": 10.8}'::jsonb,
'{"lat": 37.56, "lon": 126.56, "time": "2025-01-07 03:00:00", "sog": 11.5}'::jsonb
),
(
'000001',
'TEST001',
'2025-01-07 03:00:00',
public.ST_GeomFromText('LINESTRING M(126.56 37.56 1736190000, 126.58 37.58 1736193600)', 4326),
6.2,
11.2,
13.5,
12,
'{"lat": 37.56, "lon": 126.56, "time": "2025-01-07 03:00:00", "sog": 11.2}'::jsonb,
'{"lat": 37.58, "lon": 126.58, "time": "2025-01-07 04:00:00", "sog": 12.5}'::jsonb
);
-- 시나리오 2: 정박 선박
INSERT INTO test_vessel_tracks_hourly_for_daily VALUES
(
'000002',
'TEST002',
'2025-01-07 00:00:00',
public.ST_GeomFromText('LINESTRING M(129.0 35.0 1736179200, 129.0 35.0 1736182800)', 4326),
0.0,
0.0,
0.5,
24,
'{"lat": 35.0, "lon": 129.0, "time": "2025-01-07 00:00:00", "sog": 0.0}'::jsonb,
'{"lat": 35.0, "lon": 129.0, "time": "2025-01-07 01:00:00", "sog": 0.0}'::jsonb
),
(
'000002',
'TEST002',
'2025-01-07 01:00:00',
public.ST_GeomFromText('LINESTRING M(129.0 35.0 1736182800, 129.0 35.0 1736186400)', 4326),
0.0,
0.0,
0.3,
24,
'{"lat": 35.0, "lon": 129.0, "time": "2025-01-07 01:00:00", "sog": 0.0}'::jsonb,
'{"lat": 35.0, "lon": 129.0, "time": "2025-01-07 02:00:00", "sog": 0.0}'::jsonb
);
-- 시나리오 3: 단일 시간 데이터
INSERT INTO test_vessel_tracks_hourly_for_daily VALUES
(
'000003',
'TEST003',
'2025-01-07 00:00:00',
public.ST_GeomFromText('LINESTRING M(130.0 36.0 1736179200, 130.0 36.0 1736179200)', 4326),
0.0,
0.0,
0.0,
2,
'{"lat": 36.0, "lon": 130.0, "time": "2025-01-07 00:00:00", "sog": 0.0}'::jsonb,
'{"lat": 36.0, "lon": 130.0, "time": "2025-01-07 00:00:00", "sog": 0.0}'::jsonb
);
-- 3. 입력 데이터 검증
SELECT
'=== INPUT DATA VALIDATION ===' as section,
sig_src_cd,
target_id,
time_bucket,
public.ST_NPoints(track_geom) as points,
public.ST_IsValid(track_geom) as is_valid,
public.ST_AsText(track_geom) as wkt
FROM test_vessel_tracks_hourly_for_daily
ORDER BY sig_src_cd, target_id, time_bucket;
-- 4. 실제 DailyTrackProcessor SQL 실행 (CAST 사용)
-- Vessel: 000001_TEST001, Day: 2025-01-07
WITH ordered_tracks AS (
SELECT *
FROM test_vessel_tracks_hourly_for_daily
WHERE sig_src_cd = '000001'
AND target_id = 'TEST001'
AND time_bucket >= CAST('2025-01-07 00:00:00' AS timestamp)
AND time_bucket < CAST('2025-01-08 00:00:00' AS timestamp)
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
ORDER BY time_bucket
),
merged_coords AS (
SELECT
sig_src_cd,
target_id,
string_agg(
COALESCE(
substring(public.ST_AsText(track_geom) from 'LINESTRING\\s*M\\s*\\((.+)\\)'),
substring(public.ST_AsText(track_geom) from '\\((.+)\\)')
),
','
ORDER BY time_bucket
) FILTER (WHERE track_geom IS NOT NULL) as all_coords
FROM ordered_tracks
GROUP BY sig_src_cd, target_id
),
merged_tracks AS (
SELECT
mc.sig_src_cd,
mc.target_id,
CAST('2025-01-07 00:00:00' AS timestamp) as time_bucket,
public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom,
(SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
(SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
(SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
(SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
(SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
(SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
FROM merged_coords mc
),
calculated_tracks AS (
SELECT
*,
public.ST_Length(merged_geom::geography) / 1852.0 as total_distance,
CASE
WHEN public.ST_NPoints(merged_geom) > 0 THEN
public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) -
public.ST_M(public.ST_PointN(merged_geom, 1))
ELSE
EXTRACT(EPOCH FROM
CAST(end_pos->>'time' AS timestamp) - CAST(start_pos->>'time' AS timestamp)
)
END as time_diff_seconds
FROM merged_tracks
)
SELECT
'=== DAILY AGGREGATION RESULT (VESSEL 000001_TEST001) ===' as section,
sig_src_cd,
target_id,
time_bucket,
public.ST_NPoints(merged_geom) as merged_points,
public.ST_IsValid(merged_geom) as is_valid,
total_distance,
CASE
WHEN time_diff_seconds > 0 THEN
CAST(LEAST((total_distance / (time_diff_seconds / 3600.0)), 9999.99) AS numeric(6,2))
ELSE 0
END as avg_speed,
max_speed,
total_points,
start_time,
end_time,
start_pos,
end_pos,
public.ST_AsText(merged_geom) as geom_text
FROM calculated_tracks;
-- 5. INSERT 테스트 (CAST 호환성 검증)
INSERT INTO test_vessel_tracks_daily
WITH ordered_tracks AS (
SELECT *
FROM test_vessel_tracks_hourly_for_daily
WHERE sig_src_cd = '000001'
AND target_id = 'TEST001'
AND time_bucket >= CAST('2025-01-07 00:00:00' AS timestamp)
AND time_bucket < CAST('2025-01-08 00:00:00' AS timestamp)
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
ORDER BY time_bucket
),
merged_coords AS (
SELECT
sig_src_cd,
target_id,
string_agg(
COALESCE(
substring(public.ST_AsText(track_geom) from 'LINESTRING\\s*M\\s*\\((.+)\\)'),
substring(public.ST_AsText(track_geom) from '\\((.+)\\)')
),
','
ORDER BY time_bucket
) FILTER (WHERE track_geom IS NOT NULL) as all_coords
FROM ordered_tracks
GROUP BY sig_src_cd, target_id
),
merged_tracks AS (
SELECT
mc.sig_src_cd,
mc.target_id,
CAST('2025-01-07 00:00:00' AS timestamp) as time_bucket,
public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom,
(SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
(SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
(SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
(SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
(SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
(SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
FROM merged_coords mc
),
calculated_tracks AS (
SELECT
*,
public.ST_Length(merged_geom::geography) / 1852.0 as total_distance,
CASE
WHEN public.ST_NPoints(merged_geom) > 0 THEN
public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) -
public.ST_M(public.ST_PointN(merged_geom, 1))
ELSE
EXTRACT(EPOCH FROM
CAST(end_pos->>'time' AS timestamp) - CAST(start_pos->>'time' AS timestamp)
)
END as time_diff_seconds
FROM merged_tracks
)
SELECT
sig_src_cd,
target_id,
time_bucket,
merged_geom as track_geom,
total_distance as distance_nm,
CASE
WHEN time_diff_seconds > 0 THEN
CAST(LEAST((total_distance / (time_diff_seconds / 3600.0)), 9999.99) AS numeric(6,2))
ELSE 0
END as avg_speed,
max_speed,
total_points as point_count,
start_pos as start_position,
end_pos as end_position
FROM calculated_tracks;
-- 6. 정박 선박 INSERT 테스트
INSERT INTO test_vessel_tracks_daily
WITH ordered_tracks AS (
SELECT *
FROM test_vessel_tracks_hourly_for_daily
WHERE sig_src_cd = '000002'
AND target_id = 'TEST002'
AND time_bucket >= CAST('2025-01-07 00:00:00' AS timestamp)
AND time_bucket < CAST('2025-01-08 00:00:00' AS timestamp)
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
ORDER BY time_bucket
),
merged_coords AS (
SELECT
sig_src_cd,
target_id,
string_agg(
COALESCE(
substring(public.ST_AsText(track_geom) from 'LINESTRING\\s*M\\s*\\((.+)\\)'),
substring(public.ST_AsText(track_geom) from '\\((.+)\\)')
),
','
ORDER BY time_bucket
) FILTER (WHERE track_geom IS NOT NULL) as all_coords
FROM ordered_tracks
GROUP BY sig_src_cd, target_id
),
merged_tracks AS (
SELECT
mc.sig_src_cd,
mc.target_id,
CAST('2025-01-07 00:00:00' AS timestamp) as time_bucket,
public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom,
(SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
(SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
(SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
(SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
(SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
(SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
FROM merged_coords mc
),
calculated_tracks AS (
SELECT
*,
public.ST_Length(merged_geom::geography) / 1852.0 as total_distance,
CASE
WHEN public.ST_NPoints(merged_geom) > 0 THEN
public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) -
public.ST_M(public.ST_PointN(merged_geom, 1))
ELSE
EXTRACT(EPOCH FROM
CAST(end_pos->>'time' AS timestamp) - CAST(start_pos->>'time' AS timestamp)
)
END as time_diff_seconds
FROM merged_tracks
)
SELECT
sig_src_cd,
target_id,
time_bucket,
merged_geom as track_geom,
total_distance as distance_nm,
CASE
WHEN time_diff_seconds > 0 THEN
CAST(LEAST((total_distance / (time_diff_seconds / 3600.0)), 9999.99) AS numeric(6,2))
ELSE 0
END as avg_speed,
max_speed,
total_points as point_count,
start_pos as start_position,
end_pos as end_position
FROM calculated_tracks;
-- 7. 단일 시간 선박 INSERT 테스트
INSERT INTO test_vessel_tracks_daily
WITH ordered_tracks AS (
SELECT *
FROM test_vessel_tracks_hourly_for_daily
WHERE sig_src_cd = '000003'
AND target_id = 'TEST003'
AND time_bucket >= CAST('2025-01-07 00:00:00' AS timestamp)
AND time_bucket < CAST('2025-01-08 00:00:00' AS timestamp)
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
ORDER BY time_bucket
),
merged_coords AS (
SELECT
sig_src_cd,
target_id,
string_agg(
COALESCE(
substring(public.ST_AsText(track_geom) from 'LINESTRING\\s*M\\s*\\((.+)\\)'),
substring(public.ST_AsText(track_geom) from '\\((.+)\\)')
),
','
ORDER BY time_bucket
) FILTER (WHERE track_geom IS NOT NULL) as all_coords
FROM ordered_tracks
GROUP BY sig_src_cd, target_id
),
merged_tracks AS (
SELECT
mc.sig_src_cd,
mc.target_id,
CAST('2025-01-07 00:00:00' AS timestamp) as time_bucket,
public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom,
(SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
(SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
(SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
(SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
(SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
(SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
FROM merged_coords mc
),
calculated_tracks AS (
SELECT
*,
public.ST_Length(merged_geom::geography) / 1852.0 as total_distance,
CASE
WHEN public.ST_NPoints(merged_geom) > 0 THEN
public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) -
public.ST_M(public.ST_PointN(merged_geom, 1))
ELSE
EXTRACT(EPOCH FROM
CAST(end_pos->>'time' AS timestamp) - CAST(start_pos->>'time' AS timestamp)
)
END as time_diff_seconds
FROM merged_tracks
)
SELECT
sig_src_cd,
target_id,
time_bucket,
merged_geom as track_geom,
total_distance as distance_nm,
CASE
WHEN time_diff_seconds > 0 THEN
CAST(LEAST((total_distance / (time_diff_seconds / 3600.0)), 9999.99) AS numeric(6,2))
ELSE 0
END as avg_speed,
max_speed,
total_points as point_count,
start_pos as start_position,
end_pos as end_position
FROM calculated_tracks;
-- 8. 최종 결과 검증
SELECT
'=== FINAL DAILY AGGREGATION RESULTS ===' as section,
sig_src_cd,
target_id,
time_bucket,
public.ST_NPoints(track_geom) as points,
public.ST_IsValid(track_geom) as is_valid,
distance_nm,
avg_speed,
max_speed,
point_count,
public.ST_AsText(track_geom) as wkt
FROM test_vessel_tracks_daily
ORDER BY sig_src_cd, target_id;
-- 9. 타입 검증
SELECT
'=== DATA TYPE VALIDATION ===' as section,
pg_typeof(time_bucket) as time_bucket_type,
pg_typeof(track_geom) as track_geom_type,
pg_typeof(distance_nm) as distance_type,
pg_typeof(avg_speed) as avg_speed_type,
pg_typeof(max_speed) as max_speed_type,
pg_typeof(point_count) as point_count_type,
pg_typeof(start_position) as start_position_type
FROM test_vessel_tracks_daily
LIMIT 1;
-- 10. 시간 순서 검증 (M값이 증가하는지 확인)
SELECT
'=== TIME ORDERING VALIDATION ===' as section,
sig_src_cd,
target_id,
public.ST_M(public.ST_PointN(track_geom, 1)) as first_m_value,
public.ST_M(public.ST_PointN(track_geom, public.ST_NPoints(track_geom))) as last_m_value,
CASE
WHEN public.ST_M(public.ST_PointN(track_geom, public.ST_NPoints(track_geom))) >=
public.ST_M(public.ST_PointN(track_geom, 1))
THEN 'PASS'
ELSE 'FAIL'
END as time_order_check
FROM test_vessel_tracks_daily;
-- 11. 정리
DROP TABLE IF EXISTS test_vessel_tracks_hourly_for_daily CASCADE;
DROP TABLE IF EXISTS test_vessel_tracks_daily CASCADE;
-- ========================================
-- 테스트 완료
-- 모든 INSERT가 성공하고 타입 에러가 없으면 CAST 사용이 정상
-- ========================================
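-- Invocation sketch (file name assumed; ON_ERROR_STOP aborts on the first failure):
--   psql -d mpcdb2 -v ON_ERROR_STOP=1 -f test-daily-aggregation.sql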

파일 보기

@ -0,0 +1,484 @@
-- ========================================
-- 시간별 집계 쿼리 검증 스크립트
-- CAST 및 타입 호환성 테스트
-- ========================================
-- 1. 임시 테스트 테이블 생성
DROP TABLE IF EXISTS test_vessel_tracks_5min CASCADE;
DROP TABLE IF EXISTS test_vessel_tracks_hourly CASCADE;
CREATE TABLE test_vessel_tracks_5min (
sig_src_cd VARCHAR(10),
target_id VARCHAR(20),
time_bucket TIMESTAMP,
track_geom geometry(LineStringM, 4326),
distance_nm NUMERIC(10,2),
avg_speed NUMERIC(6,2),
max_speed NUMERIC(6,2),
point_count INTEGER,
start_position JSONB,
end_position JSONB,
PRIMARY KEY (sig_src_cd, target_id, time_bucket)
);
CREATE TABLE test_vessel_tracks_hourly (
sig_src_cd VARCHAR(10),
target_id VARCHAR(20),
time_bucket TIMESTAMP,
track_geom geometry(LineStringM, 4326),
distance_nm NUMERIC(10,2),
avg_speed NUMERIC(6,2),
max_speed NUMERIC(6,2),
point_count INTEGER,
start_position JSONB,
end_position JSONB,
PRIMARY KEY (sig_src_cd, target_id, time_bucket)
);
-- 2. 샘플 데이터 삽입 (1시간치 5분 간격 데이터)
-- 시나리오 1: 정상 이동 선박
INSERT INTO test_vessel_tracks_5min VALUES
(
'000001',
'TEST001',
'2025-01-07 10:00:00',
public.ST_GeomFromText('LINESTRING M(126.5 37.5 1736215200, 126.51 37.51 1736215260, 126.52 37.52 1736215320)', 4326),
0.5,
10.5,
12.0,
3,
'{"lat": 37.5, "lon": 126.5, "time": "2025-01-07 10:00:00", "sog": 10.5}'::jsonb,
'{"lat": 37.52, "lon": 126.52, "time": "2025-01-07 10:02:00", "sog": 11.0}'::jsonb
),
(
'000001',
'TEST001',
'2025-01-07 10:05:00',
public.ST_GeomFromText('LINESTRING M(126.52 37.52 1736215500, 126.53 37.53 1736215560, 126.54 37.54 1736215620)', 4326),
0.6,
11.0,
13.0,
3,
'{"lat": 37.52, "lon": 126.52, "time": "2025-01-07 10:05:00", "sog": 11.0}'::jsonb,
'{"lat": 37.54, "lon": 126.54, "time": "2025-01-07 10:07:00", "sog": 12.0}'::jsonb
),
(
'000001',
'TEST001',
'2025-01-07 10:10:00',
public.ST_GeomFromText('LINESTRING M(126.54 37.54 1736215800, 126.55 37.55 1736215860)', 4326),
0.4,
9.5,
11.0,
2,
'{"lat": 37.54, "lon": 126.54, "time": "2025-01-07 10:10:00", "sog": 9.5}'::jsonb,
'{"lat": 37.55, "lon": 126.55, "time": "2025-01-07 10:11:00", "sog": 10.0}'::jsonb
);
-- 시나리오 2: 정박 선박 (같은 좌표 반복)
INSERT INTO test_vessel_tracks_5min VALUES
(
'000002',
'TEST002',
'2025-01-07 10:00:00',
public.ST_GeomFromText('LINESTRING M(129.0 35.0 1736215200, 129.0 35.0 1736215260)', 4326),
0.0,
0.0,
0.5,
2,
'{"lat": 35.0, "lon": 129.0, "time": "2025-01-07 10:00:00", "sog": 0.0}'::jsonb,
'{"lat": 35.0, "lon": 129.0, "time": "2025-01-07 10:01:00", "sog": 0.0}'::jsonb
),
(
'000002',
'TEST002',
'2025-01-07 10:05:00',
public.ST_GeomFromText('LINESTRING M(129.0 35.0 1736215500, 129.0 35.0 1736215560)', 4326),
0.0,
0.0,
0.3,
2,
'{"lat": 35.0, "lon": 129.0, "time": "2025-01-07 10:05:00", "sog": 0.0}'::jsonb,
'{"lat": 35.0, "lon": 129.0, "time": "2025-01-07 10:06:00", "sog": 0.0}'::jsonb
);
-- 시나리오 3: 단일 포인트 (중복 포인트로 유효한 LineString)
INSERT INTO test_vessel_tracks_5min VALUES
(
'000003',
'TEST003',
'2025-01-07 10:00:00',
public.ST_GeomFromText('LINESTRING M(130.0 36.0 1736215200, 130.0 36.0 1736215200)', 4326),
0.0,
0.0,
0.0,
1,
'{"lat": 36.0, "lon": 130.0, "time": "2025-01-07 10:00:00", "sog": 0.0}'::jsonb,
'{"lat": 36.0, "lon": 130.0, "time": "2025-01-07 10:00:00", "sog": 0.0}'::jsonb
);
-- 3. 입력 데이터 검증
SELECT
'=== INPUT DATA VALIDATION ===' as section,
sig_src_cd,
target_id,
time_bucket,
public.ST_NPoints(track_geom) as points,
public.ST_IsValid(track_geom) as is_valid,
public.ST_AsText(track_geom) as wkt
FROM test_vessel_tracks_5min
ORDER BY sig_src_cd, target_id, time_bucket;
-- 4. 실제 HourlyTrackProcessor SQL 실행 (CAST 사용)
-- Vessel: 000001_TEST001, Hour: 2025-01-07 10:00:00
WITH ordered_tracks AS (
SELECT *
FROM test_vessel_tracks_5min
WHERE sig_src_cd = '000001'
AND target_id = 'TEST001'
AND time_bucket >= CAST('2025-01-07 10:00:00' AS timestamp)
AND time_bucket < CAST('2025-01-07 11:00:00' AS timestamp)
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
ORDER BY time_bucket
),
merged_coords AS (
SELECT
sig_src_cd,
target_id,
string_agg(
COALESCE(
substring(public.ST_AsText(track_geom) from 'LINESTRING\\s*M\\s*\\((.+)\\)'),
substring(public.ST_AsText(track_geom) from '\\((.+)\\)')
),
','
ORDER BY time_bucket
) FILTER (WHERE track_geom IS NOT NULL) as all_coords
FROM ordered_tracks
GROUP BY sig_src_cd, target_id
),
merged_tracks AS (
SELECT
mc.sig_src_cd,
mc.target_id,
CAST('2025-01-07 10:00:00' AS timestamp) as time_bucket,
public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom,
(SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
(SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
(SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
(SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
(SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
(SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
FROM merged_coords mc
),
calculated_tracks AS (
SELECT
*,
public.ST_Length(merged_geom::geography) / 1852.0 as total_distance,
CASE
WHEN public.ST_NPoints(merged_geom) > 0 THEN
public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) -
public.ST_M(public.ST_PointN(merged_geom, 1))
ELSE
EXTRACT(EPOCH FROM
CAST(end_pos->>'time' AS timestamp) - CAST(start_pos->>'time' AS timestamp)
)
END as time_diff_seconds
FROM merged_tracks
)
SELECT
'=== HOURLY AGGREGATION RESULT (VESSEL 000001_TEST001) ===' as section,
sig_src_cd,
target_id,
time_bucket,
public.ST_NPoints(merged_geom) as merged_points,
public.ST_IsValid(merged_geom) as is_valid,
total_distance,
CASE
WHEN time_diff_seconds > 0 THEN
CAST(LEAST((total_distance / (time_diff_seconds / 3600.0)), 9999.99) AS numeric(6,2))
ELSE 0
END as avg_speed,
max_speed,
total_points,
start_time,
end_time,
start_pos,
end_pos,
public.ST_AsText(merged_geom) as geom_text
FROM calculated_tracks;
-- 5. INSERT 테스트 (CAST 호환성 검증)
INSERT INTO test_vessel_tracks_hourly
WITH ordered_tracks AS (
SELECT *
FROM test_vessel_tracks_5min
WHERE sig_src_cd = '000001'
AND target_id = 'TEST001'
AND time_bucket >= CAST('2025-01-07 10:00:00' AS timestamp)
AND time_bucket < CAST('2025-01-07 11:00:00' AS timestamp)
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
ORDER BY time_bucket
),
merged_coords AS (
SELECT
sig_src_cd,
target_id,
string_agg(
COALESCE(
substring(public.ST_AsText(track_geom) from 'LINESTRING\\s*M\\s*\\((.+)\\)'),
substring(public.ST_AsText(track_geom) from '\\((.+)\\)')
),
','
ORDER BY time_bucket
) FILTER (WHERE track_geom IS NOT NULL) as all_coords
FROM ordered_tracks
GROUP BY sig_src_cd, target_id
),
merged_tracks AS (
SELECT
mc.sig_src_cd,
mc.target_id,
CAST('2025-01-07 10:00:00' AS timestamp) as time_bucket,
public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom,
(SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
(SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
(SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
(SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
(SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
(SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
FROM merged_coords mc
),
calculated_tracks AS (
SELECT
*,
public.ST_Length(merged_geom::geography) / 1852.0 as total_distance,
CASE
WHEN public.ST_NPoints(merged_geom) > 0 THEN
public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) -
public.ST_M(public.ST_PointN(merged_geom, 1))
ELSE
EXTRACT(EPOCH FROM
CAST(end_pos->>'time' AS timestamp) - CAST(start_pos->>'time' AS timestamp)
)
END as time_diff_seconds
FROM merged_tracks
)
SELECT
sig_src_cd,
target_id,
time_bucket,
merged_geom as track_geom,
total_distance as distance_nm,
CASE
WHEN time_diff_seconds > 0 THEN
CAST(LEAST((total_distance / (time_diff_seconds / 3600.0)), 9999.99) AS numeric(6,2))
ELSE 0
END as avg_speed,
max_speed,
total_points as point_count,
start_pos as start_position,
end_pos as end_position
FROM calculated_tracks;
-- 6. 정박 선박 INSERT 테스트
INSERT INTO test_vessel_tracks_hourly
WITH ordered_tracks AS (
SELECT *
FROM test_vessel_tracks_5min
WHERE sig_src_cd = '000002'
AND target_id = 'TEST002'
AND time_bucket >= CAST('2025-01-07 10:00:00' AS timestamp)
AND time_bucket < CAST('2025-01-07 11:00:00' AS timestamp)
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
ORDER BY time_bucket
),
merged_coords AS (
SELECT
sig_src_cd,
target_id,
string_agg(
COALESCE(
substring(public.ST_AsText(track_geom) from 'LINESTRING\\s*M\\s*\\((.+)\\)'),
substring(public.ST_AsText(track_geom) from '\\((.+)\\)')
),
','
ORDER BY time_bucket
) FILTER (WHERE track_geom IS NOT NULL) as all_coords
FROM ordered_tracks
GROUP BY sig_src_cd, target_id
),
merged_tracks AS (
SELECT
mc.sig_src_cd,
mc.target_id,
CAST('2025-01-07 10:00:00' AS timestamp) as time_bucket,
public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom,
(SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
(SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
(SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
(SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
(SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
(SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
FROM merged_coords mc
),
calculated_tracks AS (
SELECT
*,
public.ST_Length(merged_geom::geography) / 1852.0 as total_distance,
CASE
WHEN public.ST_NPoints(merged_geom) > 0 THEN
public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) -
public.ST_M(public.ST_PointN(merged_geom, 1))
ELSE
EXTRACT(EPOCH FROM
CAST(end_pos->>'time' AS timestamp) - CAST(start_pos->>'time' AS timestamp)
)
END as time_diff_seconds
FROM merged_tracks
)
SELECT
sig_src_cd,
target_id,
time_bucket,
merged_geom as track_geom,
total_distance as distance_nm,
CASE
WHEN time_diff_seconds > 0 THEN
CAST(LEAST((total_distance / (time_diff_seconds / 3600.0)), 9999.99) AS numeric(6,2))
ELSE 0
END as avg_speed,
max_speed,
total_points as point_count,
start_pos as start_position,
end_pos as end_position
FROM calculated_tracks;
-- 7. 단일 포인트 선박 INSERT 테스트
INSERT INTO test_vessel_tracks_hourly
WITH ordered_tracks AS (
SELECT *
FROM test_vessel_tracks_5min
WHERE sig_src_cd = '000003'
AND target_id = 'TEST003'
AND time_bucket >= CAST('2025-01-07 10:00:00' AS timestamp)
AND time_bucket < CAST('2025-01-07 11:00:00' AS timestamp)
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
ORDER BY time_bucket
),
merged_coords AS (
SELECT
sig_src_cd,
target_id,
string_agg(
COALESCE(
substring(public.ST_AsText(track_geom) from 'LINESTRING\\s*M\\s*\\((.+)\\)'),
substring(public.ST_AsText(track_geom) from '\\((.+)\\)')
),
','
ORDER BY time_bucket
) FILTER (WHERE track_geom IS NOT NULL) as all_coords
FROM ordered_tracks
GROUP BY sig_src_cd, target_id
),
merged_tracks AS (
SELECT
mc.sig_src_cd,
mc.target_id,
CAST('2025-01-07 10:00:00' AS timestamp) as time_bucket,
public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom,
(SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
(SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
(SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
(SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
(SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
(SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
FROM merged_coords mc
),
calculated_tracks AS (
SELECT
*,
public.ST_Length(merged_geom::geography) / 1852.0 as total_distance,
CASE
WHEN public.ST_NPoints(merged_geom) > 0 THEN
public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) -
public.ST_M(public.ST_PointN(merged_geom, 1))
ELSE
EXTRACT(EPOCH FROM
CAST(end_pos->>'time' AS timestamp) - CAST(start_pos->>'time' AS timestamp)
)
END as time_diff_seconds
FROM merged_tracks
)
SELECT
sig_src_cd,
target_id,
time_bucket,
merged_geom as track_geom,
total_distance as distance_nm,
CASE
WHEN time_diff_seconds > 0 THEN
CAST(LEAST((total_distance / (time_diff_seconds / 3600.0)), 9999.99) AS numeric(6,2))
ELSE 0
END as avg_speed,
max_speed,
total_points as point_count,
start_pos as start_position,
end_pos as end_position
FROM calculated_tracks;
-- 8. 최종 결과 검증
SELECT
'=== FINAL HOURLY AGGREGATION RESULTS ===' as section,
sig_src_cd,
target_id,
time_bucket,
public.ST_NPoints(track_geom) as points,
public.ST_IsValid(track_geom) as is_valid,
distance_nm,
avg_speed,
max_speed,
point_count,
public.ST_AsText(track_geom) as wkt
FROM test_vessel_tracks_hourly
ORDER BY sig_src_cd, target_id;
-- 9. 타입 검증
SELECT
'=== DATA TYPE VALIDATION ===' as section,
pg_typeof(time_bucket) as time_bucket_type,
pg_typeof(track_geom) as track_geom_type,
pg_typeof(distance_nm) as distance_type,
pg_typeof(avg_speed) as avg_speed_type,
pg_typeof(max_speed) as max_speed_type,
pg_typeof(point_count) as point_count_type,
pg_typeof(start_position) as start_position_type
FROM test_vessel_tracks_hourly
LIMIT 1;
-- 10. 시간 순서 검증 (M값이 증가하는지 확인)
SELECT
'=== TIME ORDERING VALIDATION ===' as section,
sig_src_cd,
target_id,
public.ST_M(public.ST_PointN(track_geom, 1)) as first_m_value,
public.ST_M(public.ST_PointN(track_geom, public.ST_NPoints(track_geom))) as last_m_value,
CASE
WHEN public.ST_M(public.ST_PointN(track_geom, public.ST_NPoints(track_geom))) >=
public.ST_M(public.ST_PointN(track_geom, 1))
THEN 'PASS'
ELSE 'FAIL'
END as time_order_check
FROM test_vessel_tracks_hourly;
-- 11. 정리
DROP TABLE IF EXISTS test_vessel_tracks_5min CASCADE;
DROP TABLE IF EXISTS test_vessel_tracks_hourly CASCADE;
-- ========================================
-- 테스트 완료
-- 모든 INSERT가 성공하고 타입 에러가 없으면 CAST 사용이 정상
-- ========================================
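-- Unit-check sketch: ST_Length over geography returns meters, so dividing by
-- 1852.0 (meters per nautical mile) yields distance_nm, e.g.:
--   SELECT public.ST_Length(
--     public.ST_GeomFromText('LINESTRING(126.5 37.5, 126.52 37.52)', 4326)::geography
--   ) / 1852.0 AS nm;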

파일 보기

@ -0,0 +1,274 @@
-- ========================================
-- 실제 테이블 데이터로 CAST 호환성 테스트
-- ========================================
-- 1. 최근 5분 데이터 샘플 확인 (100개)
SELECT
'=== SAMPLE 5MIN DATA ===' as section,
sig_src_cd,
target_id,
time_bucket,
public.ST_NPoints(track_geom) as points,
public.ST_IsValid(track_geom) as is_valid
FROM signal.t_vessel_tracks_5min
WHERE track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
ORDER BY time_bucket DESC
LIMIT 100;
-- 2. 테스트할 선박 선정 (최근 1시간 내 5분 데이터가 있는 선박)
WITH recent_vessels AS (
SELECT
sig_src_cd,
target_id,
DATE_TRUNC('hour', time_bucket) as hour_bucket,
COUNT(*) as record_count,
MIN(time_bucket) as min_time,
MAX(time_bucket) as max_time
FROM signal.t_vessel_tracks_5min
WHERE time_bucket >= CURRENT_TIMESTAMP - INTERVAL '24 hours'
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
GROUP BY sig_src_cd, target_id, DATE_TRUNC('hour', time_bucket)
HAVING COUNT(*) >= 2
ORDER BY hour_bucket DESC
LIMIT 10
)
SELECT
'=== TEST CANDIDATE VESSELS ===' as section,
sig_src_cd,
target_id,
hour_bucket,
record_count,
min_time,
max_time
FROM recent_vessels;
-- 3. 특정 선박의 5분 데이터 상세 확인
-- 아래 값들을 위 결과에서 선택해서 수정하세요
-- 예시: sig_src_cd = '000019', target_id = '111440547', hour_bucket = '2025-01-07 10:00:00'
\set test_sig_src_cd '000019'
\set test_target_id '111440547'
\set test_hour_start '''2025-01-07 10:00:00'''
\set test_hour_end '''2025-01-07 11:00:00'''
SELECT
'=== 5MIN DATA FOR TEST VESSEL ===' as section,
sig_src_cd,
target_id,
time_bucket,
public.ST_NPoints(track_geom) as points,
public.ST_IsValid(track_geom) as is_valid,
public.ST_GeometryType(track_geom) as geom_type,
public.ST_AsText(track_geom) as wkt,
substring(public.ST_AsText(track_geom) from 'LINESTRING\s*M\s*\((.+)\)') as regex_v1,
COALESCE(
substring(public.ST_AsText(track_geom) from 'LINESTRING\s*M\s*\((.+)\)'),
substring(public.ST_AsText(track_geom) from '\((.+)\)')
) as regex_v2
FROM signal.t_vessel_tracks_5min
WHERE sig_src_cd = :'test_sig_src_cd'
AND target_id = :'test_target_id'
AND time_bucket >= CAST(:test_hour_start AS timestamp)
AND time_bucket < CAST(:test_hour_end AS timestamp)
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
ORDER BY time_bucket;
-- 4. string_agg 결과 확인
SELECT
'=== STRING_AGG TEST ===' as section,
sig_src_cd,
target_id,
string_agg(
COALESCE(
substring(public.ST_AsText(track_geom) from 'LINESTRING\s*M\s*\((.+)\)'),
substring(public.ST_AsText(track_geom) from '\((.+)\)')
),
','
ORDER BY time_bucket
) FILTER (WHERE track_geom IS NOT NULL) as all_coords,
COUNT(*) as track_count
FROM signal.t_vessel_tracks_5min
WHERE sig_src_cd = :'test_sig_src_cd'
AND target_id = :'test_target_id'
AND time_bucket >= CAST(:test_hour_start AS timestamp)
AND time_bucket < CAST(:test_hour_end AS timestamp)
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
GROUP BY sig_src_cd, target_id;
-- 5. 병합된 WKT로 geometry 생성 테스트
WITH ordered_tracks AS (
SELECT *
FROM signal.t_vessel_tracks_5min
WHERE sig_src_cd = :'test_sig_src_cd'
AND target_id = :'test_target_id'
AND time_bucket >= CAST(:test_hour_start AS timestamp)
AND time_bucket < CAST(:test_hour_end AS timestamp)
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
ORDER BY time_bucket
),
merged_coords AS (
SELECT
sig_src_cd,
target_id,
string_agg(
COALESCE(
substring(public.ST_AsText(track_geom) from 'LINESTRING\\s*M\\s*\\((.+)\\)'),
substring(public.ST_AsText(track_geom) from '\\((.+)\\)')
),
','
ORDER BY time_bucket
) FILTER (WHERE track_geom IS NOT NULL) as all_coords
FROM ordered_tracks
GROUP BY sig_src_cd, target_id
)
SELECT
'=== WKT GENERATION TEST ===' as section,
sig_src_cd,
target_id,
'LINESTRING M(' || all_coords || ')' as full_wkt,
LENGTH(all_coords) as coords_length,
public.ST_GeomFromText('LINESTRING M(' || all_coords || ')', 4326) as test_geom,
public.ST_NPoints(public.ST_GeomFromText('LINESTRING M(' || all_coords || ')', 4326)) as merged_points,
public.ST_IsValid(public.ST_GeomFromText('LINESTRING M(' || all_coords || ')', 4326)) as is_valid
FROM merged_coords;
-- 6. 전체 시간별 집계 쿼리 실행 (SELECT만, INSERT 안함)
WITH ordered_tracks AS (
SELECT *
FROM signal.t_vessel_tracks_5min
WHERE sig_src_cd = :'test_sig_src_cd'
AND target_id = :'test_target_id'
AND time_bucket >= CAST(:test_hour_start AS timestamp)
AND time_bucket < CAST(:test_hour_end AS timestamp)
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
ORDER BY time_bucket
),
merged_coords AS (
SELECT
sig_src_cd,
target_id,
string_agg(
COALESCE(
substring(public.ST_AsText(track_geom) from 'LINESTRING\s*M\s*\((.+)\)'),
substring(public.ST_AsText(track_geom) from '\((.+)\)')
),
','
ORDER BY time_bucket
) FILTER (WHERE track_geom IS NOT NULL) as all_coords
FROM ordered_tracks
GROUP BY sig_src_cd, target_id
),
merged_tracks AS (
SELECT
mc.sig_src_cd,
mc.target_id,
CAST(:test_hour_start AS timestamp) as time_bucket,
public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom,
(SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
(SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
(SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
(SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
(SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
(SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
FROM merged_coords mc
),
calculated_tracks AS (
SELECT
*,
public.ST_Length(merged_geom::geography) / 1852.0 as total_distance,
CASE
WHEN public.ST_NPoints(merged_geom) > 0 THEN
public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) -
public.ST_M(public.ST_PointN(merged_geom, 1))
ELSE
EXTRACT(EPOCH FROM
CAST(end_pos->>'time' AS timestamp) - CAST(start_pos->>'time' AS timestamp)
)
END as time_diff_seconds
FROM merged_tracks
)
SELECT
'=== FULL HOURLY AGGREGATION TEST ===' as section,
sig_src_cd,
target_id,
time_bucket,
public.ST_NPoints(merged_geom) as merged_points,
public.ST_IsValid(merged_geom) as is_valid,
total_distance,
CASE
WHEN time_diff_seconds > 0 THEN
CAST(LEAST((total_distance / (time_diff_seconds / 3600.0)), 9999.99) AS numeric(6,2))
ELSE 0
END as avg_speed,
max_speed,
total_points,
start_time,
end_time,
start_pos,
end_pos,
public.ST_AsText(merged_geom) as geom_text,
time_diff_seconds
FROM calculated_tracks;
-- 7. M값 시간 순서 검증
WITH ordered_tracks AS (
SELECT *
FROM signal.t_vessel_tracks_5min
WHERE sig_src_cd = :'test_sig_src_cd'
AND target_id = :'test_target_id'
AND time_bucket >= CAST(:test_hour_start AS timestamp)
AND time_bucket < CAST(:test_hour_end AS timestamp)
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
ORDER BY time_bucket
),
merged_coords AS (
SELECT
sig_src_cd,
target_id,
string_agg(
COALESCE(
substring(public.ST_AsText(track_geom) from 'LINESTRING\s*M\s*\((.+)\)'),
substring(public.ST_AsText(track_geom) from '\((.+)\)')
),
','
ORDER BY time_bucket
) FILTER (WHERE track_geom IS NOT NULL) as all_coords
FROM ordered_tracks
GROUP BY sig_src_cd, target_id
),
merged_tracks AS (
SELECT
mc.sig_src_cd,
mc.target_id,
public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')', 4326) as merged_geom
FROM merged_coords mc
)
SELECT
'=== TIME ORDERING CHECK ===' as section,
sig_src_cd,
target_id,
public.ST_M(public.ST_PointN(merged_geom, 1)) as first_m_value,
to_timestamp(public.ST_M(public.ST_PointN(merged_geom, 1))) as first_time,
public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) as last_m_value,
to_timestamp(public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom)))) as last_time,
CASE
WHEN public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) >=
public.ST_M(public.ST_PointN(merged_geom, 1))
THEN 'PASS'
ELSE 'FAIL'
END as time_order_check
FROM merged_tracks;
-- ========================================
-- 사용 방법:
-- 1. 먼저 쿼리 2번 실행해서 테스트할 선박 선택
-- 2. \set 변수 값 수정 (라인 48-51)
-- 3. 전체 스크립트 실행
-- 4. 각 섹션별 결과 확인
-- ========================================

파일 보기

@ -0,0 +1,215 @@
#!/bin/bash
# Vessel Batch 관리 스크립트
# 시작, 중지, 상태 확인 등 기본 관리 기능
# 애플리케이션 경로
APP_HOME="/devdata/apps/bridge-db-monitoring"
JAR_FILE="$APP_HOME/vessel-batch-aggregation.jar"
PID_FILE="$APP_HOME/vessel-batch.pid"
LOG_DIR="$APP_HOME/logs"
# Java 17 경로
JAVA_HOME="/devdata/apps/jdk-17.0.8"
JAVA_BIN="$JAVA_HOME/bin/java"
# 색상 코드
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
# 함수: PID 확인
get_pid() {
if [ -f "$PID_FILE" ]; then
PID=$(cat $PID_FILE)
if kill -0 $PID 2>/dev/null; then
echo $PID
else
rm -f $PID_FILE
echo ""
fi
else
PID=$(pgrep -f "$JAR_FILE")
echo $PID
fi
}
# 함수: 상태 확인
status() {
PID=$(get_pid)
if [ ! -z "$PID" ]; then
echo -e "${GREEN}✓ Vessel Batch is running (PID: $PID)${NC}"
# 프로세스 정보
echo ""
ps aux | grep $PID | grep -v grep
# Health Check
echo ""
echo "Health Check:"
curl -s http://localhost:8090/actuator/health 2>/dev/null | python3 -m json.tool || echo "Health endpoint not available"
# 처리 상태
echo ""
echo "Processing Status:"
if command -v psql >/dev/null 2>&1; then
psql -h localhost -U mda -d mdadb -c "
SELECT
NOW() - MAX(last_update) as processing_delay,
COUNT(*) as vessel_count
FROM signal.t_vessel_latest_position;" 2>/dev/null || echo "Unable to query database"
fi
return 0
else
echo -e "${RED}✗ Vessel Batch is not running${NC}"
return 1
fi
}
# 함수: 시작
start() {
PID=$(get_pid)
if [ ! -z "$PID" ]; then
echo -e "${YELLOW}Vessel Batch is already running (PID: $PID)${NC}"
return 1
fi
echo "Starting Vessel Batch..."
cd $APP_HOME
$APP_HOME/run-on-query-server-dev.sh
}
# 함수: 중지
stop() {
PID=$(get_pid)
if [ -z "$PID" ]; then
echo -e "${YELLOW}Vessel Batch is not running${NC}"
return 1
fi
echo "Stopping Vessel Batch (PID: $PID)..."
kill -15 $PID
# 종료 대기
for i in {1..30}; do
if ! kill -0 $PID 2>/dev/null; then
echo -e "${GREEN}✓ Vessel Batch stopped successfully${NC}"
rm -f $PID_FILE
return 0
fi
echo -n "."
sleep 1
done
echo ""
echo -e "${RED}Process did not stop gracefully, force killing...${NC}"
kill -9 $PID
rm -f $PID_FILE
}
# 함수: 재시작
restart() {
echo "Restarting Vessel Batch..."
stop
sleep 3
start
}
# 함수: 로그 보기
logs() {
if [ ! -d "$LOG_DIR" ]; then
echo "Log directory not found: $LOG_DIR"
return 1
fi
echo "Available log files:"
ls -lh $LOG_DIR/*.log 2>/dev/null
echo ""
echo "Tailing app.log (Ctrl+C to exit)..."
tail -f $LOG_DIR/app.log
}
# 함수: 최근 에러 확인
errors() {
if [ ! -f "$LOG_DIR/app.log" ]; then
echo "Log file not found: $LOG_DIR/app.log"
return 1
fi
echo "Recent errors (last 50 lines with ERROR):"
grep "ERROR" $LOG_DIR/app.log | tail -50
echo ""
echo "Error summary:"
echo "Total errors: $(grep -c "ERROR" $LOG_DIR/app.log)"
echo "Errors today: $(grep "ERROR" $LOG_DIR/app.log | grep "$(date +%Y-%m-%d)" | wc -l)"
}
# 함수: 성능 통계
stats() {
echo "Performance Statistics"
echo "===================="
if [ -f "$LOG_DIR/resource-monitor.csv" ]; then
echo "Recent resource usage:"
tail -5 $LOG_DIR/resource-monitor.csv | column -t -s,
fi
echo ""
echo "Batch job statistics:"
if command -v psql >/dev/null 2>&1; then
psql -h localhost -U mda -d mdadb -c "
SELECT
job_name,
COUNT(*) as executions,
AVG(EXTRACT(EPOCH FROM (end_time - start_time))/60)::numeric(10,2) as avg_duration_min,
MAX(end_time) as last_execution
FROM batch_job_execution je
JOIN batch_job_instance ji ON je.job_instance_id = ji.job_instance_id
WHERE end_time > CURRENT_DATE - INTERVAL '7 days'
GROUP BY job_name;" 2>/dev/null || echo "Unable to query batch statistics"
fi
}
# 메인 로직
case "$1" in
start)
start
;;
stop)
stop
;;
restart)
restart
;;
status)
status
;;
logs)
logs
;;
errors)
errors
;;
stats)
stats
;;
*)
echo "Usage: $0 {start|stop|restart|status|logs|errors|stats}"
echo ""
echo "Commands:"
echo " start - Start the Vessel Batch application"
echo " stop - Stop the Vessel Batch application"
echo " restart - Restart the Vessel Batch application"
echo " status - Check application status and health"
echo " logs - Tail application logs"
echo " errors - Show recent errors from logs"
echo " stats - Show performance statistics"
exit 1
;;
esac
exit $?

파일 보기

@ -0,0 +1,191 @@
#!/bin/bash
# Query DB 서버에서 최적화된 실행 스크립트 (PROD 프로파일)
# Rocky Linux 환경에 맞춰 조정됨
# Java 17 경로 명시적 지정
# 애플리케이션 경로
APP_HOME="/devdata/apps/bridge-db-monitoring"
JAR_FILE="$APP_HOME/vessel-batch-aggregation.jar"
# Java 17 경로
JAVA_HOME="/devdata/apps/jdk-17.0.8"
JAVA_BIN="$JAVA_HOME/bin/java"
# 로그 디렉토리
LOG_DIR="$APP_HOME/logs"
mkdir -p $LOG_DIR
echo "================================================"
echo "Vessel Batch Aggregation - PROD Profile"
echo "Start Time: $(date)"
echo "================================================"
# 경로 확인
echo "Environment Check:"
echo "- App Home: $APP_HOME"
echo "- JAR File: $JAR_FILE"
echo "- Java Path: $JAVA_BIN"
echo "- Java Version: $($JAVA_BIN -version 2>&1 | head -1)"
# JAR 파일 존재 확인
if [ ! -f "$JAR_FILE" ]; then
echo "ERROR: JAR file not found at $JAR_FILE"
exit 1
fi
# Java 실행 파일 확인
if [ ! -x "$JAVA_BIN" ]; then
echo "ERROR: Java not found or not executable at $JAVA_BIN"
exit 1
fi
# 서버 정보 확인
echo ""
echo "Server Info:"
echo "- Hostname: $(hostname)"
echo "- CPU Cores: $(nproc)"
echo "- Total Memory: $(free -h | grep Mem | awk '{print $2}')"
echo "- PostgreSQL Version: $(psql --version 2>/dev/null | head -1 || echo 'PostgreSQL client not in PATH')"
# 환경 변수 설정 (PROD 프로파일)
export SPRING_PROFILES_ACTIVE=prod
# Query DB와 Batch Meta DB를 localhost로 오버라이드
export SPRING_DATASOURCE_QUERY_JDBC_URL="jdbc:postgresql://localhost:5432/mdadb?currentSchema=signal&options=-csearch_path=signal,public&assumeMinServerVersion=12&reWriteBatchedInserts=true"
export SPRING_DATASOURCE_BATCH_JDBC_URL="jdbc:postgresql://localhost:5432/mdadb?currentSchema=public&assumeMinServerVersion=12&reWriteBatchedInserts=true"
# 서버 CPU 코어 수에 따른 병렬 처리 조정
CPU_CORES=$(nproc)
export VESSEL_BATCH_PARTITION_SIZE=$((CPU_CORES * 2))
export VESSEL_BATCH_BULK_INSERT_PARALLEL_THREADS=$((CPU_CORES / 2))
echo ""
echo "Optimized Settings:"
echo "- Active Profile: PROD"
echo "- Partition Size: $VESSEL_BATCH_PARTITION_SIZE"
echo "- Parallel Threads: $VESSEL_BATCH_BULK_INSERT_PARALLEL_THREADS"
echo "- Query DB: localhost (optimized)"
echo "- Batch Meta DB: localhost (optimized)"
# JVM 옵션 (서버 메모리에 맞게 조정)
TOTAL_MEM=$(free -g | grep Mem | awk '{print $2}')
JVM_HEAP=$((TOTAL_MEM / 8)) # use 1/8 of total memory
# Clamp heap between 8 GB and 16 GB
if [ $JVM_HEAP -lt 8 ]; then
JVM_HEAP=8
elif [ $JVM_HEAP -gt 16 ]; then
JVM_HEAP=16
fi
JAVA_OPTS="-Xms${JVM_HEAP}g -Xmx${JVM_HEAP}g \
-XX:+UseG1GC \
-XX:MaxGCPauseMillis=200 \
-XX:+UseStringDeduplication \
-XX:+ParallelRefProcEnabled \
-XX:ParallelGCThreads=$((CPU_CORES / 2)) \
-XX:ConcGCThreads=$((CPU_CORES / 4)) \
-XX:+HeapDumpOnOutOfMemoryError \
-XX:HeapDumpPath=$LOG_DIR/heapdump.hprof \
-Dfile.encoding=UTF-8 \
-Duser.timezone=Asia/Seoul \
-Djava.security.egd=file:/dev/./urandom \
-Dspring.profiles.active=prod"
echo "- JVM Heap Size: ${JVM_HEAP}GB"
# 기존 프로세스 확인 및 종료
echo ""
echo "Checking for existing process..."
PID=$(pgrep -f "$JAR_FILE")
if [ ! -z "$PID" ]; then
echo "Stopping existing process (PID: $PID)..."
kill -15 $PID
# 프로세스 종료 대기 (최대 30초)
for i in {1..30}; do
if ! kill -0 $PID 2>/dev/null; then
echo "Process stopped successfully."
break
fi
if [ $i -eq 30 ]; then
echo "Force killing process..."
kill -9 $PID
fi
sleep 1
done
fi
# 작업 디렉토리로 이동
cd $APP_HOME
# 애플리케이션 실행 (nice로 우선순위 조정)
echo ""
echo "Starting application with PROD profile..."
echo "Command: nice -n 10 $JAVA_BIN $JAVA_OPTS -jar $JAR_FILE"
echo ""
# nohup으로 백그라운드 실행
nohup nice -n 10 $JAVA_BIN $JAVA_OPTS -jar $JAR_FILE \
> $LOG_DIR/app.log 2>&1 &
NEW_PID=$!
echo "Application started with PID: $NEW_PID"
# PID 파일 생성
echo $NEW_PID > $APP_HOME/vessel-batch.pid
# 시작 확인 (30초 대기)
echo "Waiting for application startup..."
STARTUP_SUCCESS=false
for i in {1..30}; do
if grep -q "Started SignalBatchApplication" $LOG_DIR/app.log 2>/dev/null; then
echo "✅ Application started successfully!"
STARTUP_SUCCESS=true
break
fi
echo -n "."
sleep 1
done
if [ "$STARTUP_SUCCESS" = false ]; then
echo ""
echo "⚠️ Application startup timeout. Check logs for errors."
echo "Log file: $LOG_DIR/app.log"
tail -20 $LOG_DIR/app.log
fi
echo ""
echo "================================================"
echo "Deployment Complete!"
echo "- Profile: PROD"
echo "- PID: $NEW_PID"
echo "- PID File: $APP_HOME/vessel-batch.pid"
echo "- Log: $LOG_DIR/app.log"
echo "- Monitor: tail -f $LOG_DIR/app.log"
echo "================================================"
# 초기 상태 확인
sleep 5
echo ""
echo "Initial Status Check:"
curl -s http://localhost:8090/actuator/health 2>/dev/null | python3 -m json.tool || echo "Health endpoint not yet available"
# 활성 프로파일 확인
echo ""
echo "Active Profile Check:"
curl -s http://localhost:8090/actuator/env | grep -A 5 "activeProfiles" 2>/dev/null || echo "Env endpoint not yet available"
# 리소스 사용량 표시
echo ""
echo "Resource Usage:"
ps aux | grep $NEW_PID | grep -v grep
# 빠른 명령어 안내
echo ""
echo "Useful Commands:"
echo "- Stop: kill -15 \$(cat $APP_HOME/vessel-batch.pid)"
echo "- Logs: tail -f $LOG_DIR/app.log"
echo "- Status: curl http://localhost:8090/actuator/health"
echo "- Monitor: $APP_HOME/monitor-query-server.sh"

파일 보기

@ -0,0 +1,175 @@
#!/usr/bin/env python3
"""
WebSocket 부하 테스트 자동화 스크립트
"""
import asyncio
import json
import time
import statistics
from datetime import datetime, timedelta
import websockets
import stomper
from concurrent.futures import ThreadPoolExecutor
class WebSocketLoadTest:
def __init__(self, base_url="ws://10.26.252.48:8090/ws-tracks"):
self.base_url = base_url
self.results = []
self.active_connections = 0
async def single_client_test(self, client_id, duration_seconds=60):
"""단일 클라이언트 테스트"""
start_time = time.time()
messages_received = 0
bytes_received = 0
errors = 0
try:
async with websockets.connect(self.base_url) as websocket:
self.active_connections += 1
print(f"Client {client_id}: Connected")
# STOMP CONNECT
connect_frame = stomper.connect(host='/', accept_version='1.2')
await websocket.send(connect_frame)
# Subscribe to data channel
sub_frame = stomper.subscribe('/user/queue/tracks/data', client_id)
await websocket.send(sub_frame)
# Send query request
query_request = {
"startTime": (datetime.now() - timedelta(days=1)).isoformat(),
"endTime": datetime.now().isoformat(),
"viewport": {
"minLon": 124.0,
"maxLon": 132.0,
"minLat": 33.0,
"maxLat": 38.0
},
"filters": {
"minDistance": 10,
"minSpeed": 5
},
"chunkSize": 2000
}
send_frame = stomper.send('/app/tracks/query', json.dumps(query_request))
await websocket.send(send_frame)
# Receive messages
while time.time() - start_time < duration_seconds:
try:
message = await asyncio.wait_for(websocket.recv(), timeout=1.0)
messages_received += 1
bytes_received += len(message)
# Parse STOMP frame
frame = stomper.unpack_frame(message)  # returns {'cmd', 'headers', 'body'}
if frame['cmd'] == 'MESSAGE':
data = json.loads(frame['body'])
if data.get('type') == 'complete':
print(f"Client {client_id}: Query completed")
break
except asyncio.TimeoutError:
continue
except Exception as e:
errors += 1
print(f"Client {client_id}: Error - {e}")
except Exception as e:
errors += 1
print(f"Client {client_id}: Connection error - {e}")
finally:
self.active_connections -= 1
# Calculate results
elapsed_time = time.time() - start_time
result = {
'client_id': client_id,
'duration': elapsed_time,
'messages': messages_received,
'bytes': bytes_received,
'errors': errors,
'msg_per_sec': messages_received / elapsed_time if elapsed_time > 0 else 0,
'mbps': (bytes_received / 1024 / 1024) / elapsed_time if elapsed_time > 0 else 0
}
self.results.append(result)
return result
async def run_load_test(self, num_clients=10, duration=60):
"""병렬 부하 테스트 실행"""
print(f"Starting load test with {num_clients} clients for {duration} seconds...")
tasks = []
for i in range(num_clients):
task = asyncio.create_task(self.single_client_test(i, duration))
tasks.append(task)
await asyncio.sleep(0.1) # Stagger connections
# Wait for all clients to complete
await asyncio.gather(*tasks)
# Print summary
self.print_summary()
def print_summary(self):
"""테스트 결과 요약 출력"""
print("\n" + "="*60)
print("LOAD TEST SUMMARY")
print("="*60)
total_messages = sum(r['messages'] for r in self.results)
total_bytes = sum(r['bytes'] for r in self.results)
total_errors = sum(r['errors'] for r in self.results)
avg_msg_per_sec = statistics.mean(r['msg_per_sec'] for r in self.results)
avg_mbps = statistics.mean(r['mbps'] for r in self.results)
print(f"Total Clients: {len(self.results)}")
print(f"Total Messages: {total_messages:,}")
print(f"Total Data: {total_bytes/1024/1024:.2f} MB")
print(f"Total Errors: {total_errors}")
print(f"Avg Messages/sec per client: {avg_msg_per_sec:.2f}")
print(f"Avg Throughput per client: {avg_mbps:.2f} MB/s")
print(f"Total Throughput: {avg_mbps * len(self.results):.2f} MB/s")
# Average errors per client (a raw count, not a percentage of messages)
avg_errors = (total_errors / len(self.results)) if self.results else 0
print(f"Avg Errors per Client: {avg_errors:.2f}")
# Success rate
successful_clients = sum(1 for r in self.results if r['errors'] == 0)
success_rate = (successful_clients / len(self.results)) * 100 if self.results else 0
print(f"Success Rate: {success_rate:.2f}%")
print("="*60)
async def main():
# Test scenarios
scenarios = [
{"clients": 10, "duration": 60, "name": "Light Load"},
{"clients": 50, "duration": 120, "name": "Medium Load"},
{"clients": 100, "duration": 180, "name": "Heavy Load"}
]
for scenario in scenarios:
print(f"\n{'='*60}")
print(f"Running scenario: {scenario['name']}")
print(f"{'='*60}")
tester = WebSocketLoadTest()
await tester.run_load_test(
num_clients=scenario['clients'],
duration=scenario['duration']
)
# Wait between scenarios
print(f"\nWaiting 30 seconds before next scenario...")
await asyncio.sleep(30)
if __name__ == "__main__":
asyncio.run(main())

파일 보기

@ -0,0 +1,584 @@
-- ============================================================
-- gc-signal-batch V2: SNP API 기반 스키마 (신규 생성)
-- 타겟 DB: snpdb (211.208.115.83), 스키마: signal
--
-- 핵심 변경:
-- sig_src_cd + target_id → mmsi VARCHAR(20) 단일 식별자
-- t_vessel_latest_position → t_ais_position (새 구조)
-- 신규: t_vessel_static (정적 정보 이력)
--
-- 실행 전 확인:
-- 1. PostGIS 확장이 설치되어 있는지 확인
-- 2. signal 스키마가 존재하는지 확인
-- 3. 파티션 테이블은 PartitionManager가 런타임에 자동 생성
-- ============================================================
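-- Pre-flight check sketch for the two items above (both queries read the
-- standard PostgreSQL catalogs and are safe to run on any environment):
SELECT extname, extversion FROM pg_extension WHERE extname = 'postgis';
SELECT nspname FROM pg_namespace WHERE nspname = 'signal';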
-- 스키마 생성
CREATE SCHEMA IF NOT EXISTS signal;
-- PostGIS 확장 활성화
CREATE EXTENSION IF NOT EXISTS postgis;
-- ============================================================
-- 1. AIS 위치/정적 정보 (SNP API 전용, 신규)
-- ============================================================
-- t_ais_position: AIS 최신 위치 (MMSI별 1건 UPSERT)
-- 용도: 캐시 복원, 타 프로세스 최신 위치 조회, API 불가 환경 대응
-- 갱신: 5분 집계 Job에서 캐시 스냅샷 UPSERT
CREATE TABLE IF NOT EXISTS signal.t_ais_position (
mmsi VARCHAR(20) PRIMARY KEY,
imo BIGINT,
name VARCHAR(50),
callsign VARCHAR(20),
vessel_type VARCHAR(50),
extra_info VARCHAR(200),
lat DOUBLE PRECISION NOT NULL,
lon DOUBLE PRECISION NOT NULL,
geom GEOMETRY(POINT, 4326),
heading DOUBLE PRECISION,
sog DOUBLE PRECISION,
cog DOUBLE PRECISION,
rot INTEGER,
length INTEGER,
width INTEGER,
draught DOUBLE PRECISION,
destination VARCHAR(200),
eta TIMESTAMPTZ,
status VARCHAR(50),
message_timestamp TIMESTAMPTZ NOT NULL,
signal_kind_code VARCHAR(10),
class_type VARCHAR(1),
last_update TIMESTAMPTZ DEFAULT NOW()
);
CREATE INDEX IF NOT EXISTS idx_ais_position_geom ON signal.t_ais_position USING GIST (geom);
CREATE INDEX IF NOT EXISTS idx_ais_position_signal_kind ON signal.t_ais_position (signal_kind_code);
CREATE INDEX IF NOT EXISTS idx_ais_position_timestamp ON signal.t_ais_position (message_timestamp);
COMMENT ON TABLE signal.t_ais_position IS 'AIS 최신 위치 (MMSI별 1건, 5분 집계 Job에서 UPSERT)';
COMMENT ON COLUMN signal.t_ais_position.mmsi IS 'MMSI (VARCHAR — 문자 혼합 MMSI 장비 지원)';
COMMENT ON COLUMN signal.t_ais_position.signal_kind_code IS 'MDA 범례코드 (SignalKindCode.resolve 결과)';
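-- Read-side sketch of the cache-restore use case above (illustrative only;
-- the actual ChnPrmShipCacheWarmer query may select different columns):
SELECT mmsi, lat, lon, heading, sog, cog, message_timestamp, signal_kind_code
FROM signal.t_ais_position
WHERE message_timestamp >= NOW() - INTERVAL '24 hours';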
-- t_vessel_static: 정적 정보 이력 (위변조/흘수 변경 추적)
-- 전략: COALESCE + CDC 하이브리드 (HourlyJob에서 저장)
-- 보존: 90일
CREATE TABLE IF NOT EXISTS signal.t_vessel_static (
mmsi VARCHAR(20) NOT NULL,
time_bucket TIMESTAMPTZ NOT NULL,
imo BIGINT,
name VARCHAR(50),
callsign VARCHAR(20),
vessel_type VARCHAR(50),
extra_info VARCHAR(200),
length INTEGER,
width INTEGER,
draught DOUBLE PRECISION,
destination VARCHAR(200),
eta TIMESTAMPTZ,
status VARCHAR(50),
signal_kind_code VARCHAR(10),
class_type VARCHAR(1),
PRIMARY KEY (mmsi, time_bucket)
);
CREATE INDEX IF NOT EXISTS idx_vessel_static_mmsi ON signal.t_vessel_static (mmsi);
COMMENT ON TABLE signal.t_vessel_static IS '선박 정적 정보 이력 (시간별, COALESCE+CDC). 보존 90일';
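-- Change-detection sketch for the COALESCE + CDC strategy above. This is an
-- illustration with an abbreviated column list, not the HourlyJob's actual
-- SQL: it snapshots a vessel's static fields only when they differ from the
-- most recently stored row. (Commented out so this DDL script stays
-- side-effect free.)
-- INSERT INTO signal.t_vessel_static (mmsi, time_bucket, name, draught, destination)
-- SELECT p.mmsi, date_trunc('hour', NOW()), p.name, p.draught, p.destination
-- FROM signal.t_ais_position p
-- LEFT JOIN LATERAL (
--     SELECT s.mmsi, s.name, s.draught, s.destination
--     FROM signal.t_vessel_static s
--     WHERE s.mmsi = p.mmsi
--     ORDER BY s.time_bucket DESC
--     LIMIT 1
-- ) prev ON TRUE
-- WHERE prev.mmsi IS NULL
--    OR (p.name, p.draught, p.destination)
--       IS DISTINCT FROM (prev.name, prev.draught, prev.destination)
-- ON CONFLICT (mmsi, time_bucket) DO NOTHING;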
-- ============================================================
-- 2. 핵심 항적 테이블 (5분/시간/일별 — 파티션)
-- ============================================================
-- t_vessel_tracks_5min: 5분 단위 항적 (일별 파티션)
CREATE TABLE IF NOT EXISTS signal.t_vessel_tracks_5min (
mmsi VARCHAR(20) NOT NULL,
time_bucket TIMESTAMP NOT NULL,
track_geom GEOMETRY(LINESTRINGM, 4326),
distance_nm NUMERIC(10,2),
avg_speed NUMERIC(6,2),
max_speed NUMERIC(6,2),
point_count INTEGER,
start_position JSONB,
end_position JSONB,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT t_vessel_tracks_5min_pkey PRIMARY KEY (mmsi, time_bucket)
) PARTITION BY RANGE (time_bucket);
CREATE INDEX IF NOT EXISTS idx_tracks_5min_mmsi ON signal.t_vessel_tracks_5min (mmsi);
CREATE INDEX IF NOT EXISTS idx_tracks_5min_bucket ON signal.t_vessel_tracks_5min (time_bucket);
COMMENT ON TABLE signal.t_vessel_tracks_5min IS '선박 항적 5분 단위 집계';
COMMENT ON COLUMN signal.t_vessel_tracks_5min.mmsi IS 'MMSI (VARCHAR)';
COMMENT ON COLUMN signal.t_vessel_tracks_5min.track_geom IS 'LineStringM 형식 항적 (M값은 첫 포인트 기준 상대시간 초)';
COMMENT ON COLUMN signal.t_vessel_tracks_5min.start_position IS '시작 위치 JSON {lat, lon, time, sog}';
COMMENT ON COLUMN signal.t_vessel_tracks_5min.end_position IS '종료 위치 JSON {lat, lon, time, sog}';
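-- Worked example of the M-value convention above: M stores seconds relative
-- to the first point, so wall-clock times are recovered by adding M to the
-- bucket start (assumes the first point lies at time_bucket; adjust if the
-- aggregation job anchors M differently):
SELECT mmsi,
       time_bucket,
       time_bucket + make_interval(secs => public.ST_M(public.ST_PointN(track_geom, 1))) AS first_point_time,
       time_bucket + make_interval(secs => public.ST_M(public.ST_PointN(track_geom, public.ST_NPoints(track_geom)))) AS last_point_time
FROM signal.t_vessel_tracks_5min
WHERE track_geom IS NOT NULL
LIMIT 5;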
-- t_vessel_tracks_hourly: 시간별 항적 (월별 파티션)
CREATE TABLE IF NOT EXISTS signal.t_vessel_tracks_hourly (
mmsi VARCHAR(20) NOT NULL,
time_bucket TIMESTAMP NOT NULL,
track_geom GEOMETRY(LINESTRINGM, 4326),
distance_nm NUMERIC(10,2),
avg_speed NUMERIC(6,2),
max_speed NUMERIC(6,2),
point_count INTEGER,
start_position JSONB,
end_position JSONB,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT t_vessel_tracks_hourly_pkey PRIMARY KEY (mmsi, time_bucket)
) PARTITION BY RANGE (time_bucket);
CREATE INDEX IF NOT EXISTS idx_tracks_hourly_mmsi ON signal.t_vessel_tracks_hourly (mmsi);
CREATE INDEX IF NOT EXISTS idx_tracks_hourly_bucket ON signal.t_vessel_tracks_hourly (time_bucket);
CREATE INDEX IF NOT EXISTS idx_tracks_hourly_geom ON signal.t_vessel_tracks_hourly USING GIST (track_geom);
COMMENT ON TABLE signal.t_vessel_tracks_hourly IS '선박 항적 시간별 집계';
-- t_vessel_tracks_daily: 일별 항적 (월별 파티션)
CREATE TABLE IF NOT EXISTS signal.t_vessel_tracks_daily (
mmsi VARCHAR(20) NOT NULL,
time_bucket DATE NOT NULL,
track_geom GEOMETRY(LINESTRINGM, 4326),
distance_nm NUMERIC(10,2),
avg_speed NUMERIC(6,2),
max_speed NUMERIC(6,2),
point_count INTEGER,
operating_hours NUMERIC(4,2),
port_visits JSONB,
start_position JSONB,
end_position JSONB,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT t_vessel_tracks_daily_pkey PRIMARY KEY (mmsi, time_bucket)
) PARTITION BY RANGE (time_bucket);
CREATE INDEX IF NOT EXISTS idx_tracks_daily_mmsi ON signal.t_vessel_tracks_daily (mmsi);
CREATE INDEX IF NOT EXISTS idx_tracks_daily_bucket ON signal.t_vessel_tracks_daily (time_bucket);
CREATE INDEX IF NOT EXISTS idx_tracks_daily_geom ON signal.t_vessel_tracks_daily USING GIST (track_geom);
COMMENT ON TABLE signal.t_vessel_tracks_daily IS '선박 항적 일별 집계';
-- ============================================================
-- 3. 해구(Grid) 관련 테이블 — 파티션
-- ============================================================
-- t_haegu_definitions: 대해구 정의 (일반 테이블)
CREATE TABLE IF NOT EXISTS signal.t_haegu_definitions (
haegu_no INTEGER NOT NULL,
min_lat DOUBLE PRECISION NOT NULL,
min_lon DOUBLE PRECISION NOT NULL,
max_lat DOUBLE PRECISION NOT NULL,
max_lon DOUBLE PRECISION NOT NULL,
center_lat DOUBLE PRECISION NOT NULL,
center_lon DOUBLE PRECISION NOT NULL,
geom GEOMETRY(MULTIPOLYGON, 4326) NOT NULL,
center_point GEOMETRY(POINT, 4326) NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT t_haegu_definitions_pkey PRIMARY KEY (haegu_no)
);
CREATE INDEX IF NOT EXISTS idx_haegu_definitions_geom ON signal.t_haegu_definitions USING GIST (geom);
COMMENT ON TABLE signal.t_haegu_definitions IS '대해구 정의 정보';
-- t_grid_tiles: 그리드 타일 정의 (일반 테이블)
CREATE TABLE IF NOT EXISTS signal.t_grid_tiles (
tile_id VARCHAR(50) NOT NULL,
tile_level INTEGER NOT NULL,
haegu_no INTEGER NOT NULL,
sohaegu_no INTEGER,
min_lat DOUBLE PRECISION NOT NULL,
min_lon DOUBLE PRECISION NOT NULL,
max_lat DOUBLE PRECISION NOT NULL,
max_lon DOUBLE PRECISION NOT NULL,
tile_geom GEOMETRY(POLYGON, 4326) NOT NULL,
center_point GEOMETRY(POINT, 4326) NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT t_grid_tiles_pkey PRIMARY KEY (tile_id)
);
CREATE INDEX IF NOT EXISTS idx_grid_tiles_tile_geom ON signal.t_grid_tiles USING GIST (tile_geom);
CREATE INDEX IF NOT EXISTS idx_grid_tiles_haegu ON signal.t_grid_tiles (haegu_no);
CREATE INDEX IF NOT EXISTS idx_grid_tiles_level ON signal.t_grid_tiles (tile_level);
CREATE INDEX IF NOT EXISTS idx_grid_tiles_haegu_sohaegu ON signal.t_grid_tiles (haegu_no, sohaegu_no);
COMMENT ON TABLE signal.t_grid_tiles IS '그리드 타일 정의 (대해구/소해구)';
-- t_grid_vessel_tracks: 해구별 선박 항적 (5분, 일별 파티션)
CREATE TABLE IF NOT EXISTS signal.t_grid_vessel_tracks (
haegu_no INTEGER NOT NULL,
mmsi VARCHAR(20) NOT NULL,
time_bucket TIMESTAMP NOT NULL,
distance_nm NUMERIC(10,2),
avg_speed NUMERIC(6,2),
point_count INTEGER,
entry_time TIMESTAMP,
exit_time TIMESTAMP,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT t_grid_vessel_tracks_pkey PRIMARY KEY (haegu_no, mmsi, time_bucket)
) PARTITION BY RANGE (time_bucket);
CREATE INDEX IF NOT EXISTS idx_grid_vessel_tracks_mmsi_time ON signal.t_grid_vessel_tracks (mmsi, time_bucket DESC);
CREATE INDEX IF NOT EXISTS idx_grid_vessel_tracks_haegu_time ON signal.t_grid_vessel_tracks (haegu_no, time_bucket DESC);
COMMENT ON TABLE signal.t_grid_vessel_tracks IS '해구별 선박 항적 (5분 단위)';
-- t_grid_tracks_summary: 해구별 항적 요약 (5분, 일별 파티션)
CREATE TABLE IF NOT EXISTS signal.t_grid_tracks_summary (
haegu_no INTEGER NOT NULL,
time_bucket TIMESTAMP NOT NULL,
total_vessels INTEGER,
total_distance_nm NUMERIC(12,2),
avg_speed NUMERIC(6,2),
vessel_list JSONB,
traffic_density NUMERIC(10,4),
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT t_grid_tracks_summary_pkey PRIMARY KEY (haegu_no, time_bucket)
) PARTITION BY RANGE (time_bucket);
COMMENT ON TABLE signal.t_grid_tracks_summary IS '해구별 5분 단위 항적 요약 통계';
COMMENT ON COLUMN signal.t_grid_tracks_summary.vessel_list IS '선박별 상세 정보 [{mmsi, distance_nm, avg_speed}]';
-- t_grid_tracks_summary_hourly: 해구별 시간별 요약 (월별 파티션)
CREATE TABLE IF NOT EXISTS signal.t_grid_tracks_summary_hourly (
haegu_no INTEGER NOT NULL,
time_bucket TIMESTAMP NOT NULL,
total_vessels INTEGER,
total_distance_nm NUMERIC(12,2),
avg_speed NUMERIC(6,2),
vessel_list JSONB,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT t_grid_tracks_summary_hourly_pkey PRIMARY KEY (haegu_no, time_bucket)
) PARTITION BY RANGE (time_bucket);
CREATE INDEX IF NOT EXISTS idx_grid_tracks_summary_hourly_time ON signal.t_grid_tracks_summary_hourly (time_bucket DESC, haegu_no);
COMMENT ON TABLE signal.t_grid_tracks_summary_hourly IS '해구별 시간별 항적 요약 통계';
-- t_grid_tracks_summary_daily: 해구별 일별 요약 (월별 파티션)
CREATE TABLE IF NOT EXISTS signal.t_grid_tracks_summary_daily (
haegu_no INTEGER NOT NULL,
time_bucket DATE NOT NULL,
total_vessels INTEGER,
total_distance_nm NUMERIC(12,2),
avg_speed NUMERIC(6,2),
vessel_list JSONB,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT t_grid_tracks_summary_daily_pkey PRIMARY KEY (haegu_no, time_bucket)
) PARTITION BY RANGE (time_bucket);
CREATE INDEX IF NOT EXISTS idx_grid_tracks_summary_daily_time ON signal.t_grid_tracks_summary_daily (time_bucket DESC, haegu_no);
COMMENT ON TABLE signal.t_grid_tracks_summary_daily IS '해구별 일일 항적 요약 통계';
-- ============================================================
-- 4. 영역(Area) 관련 테이블 — 파티션
-- ============================================================
-- t_areas: 사용자 정의 영역 (일반 테이블)
CREATE TABLE IF NOT EXISTS signal.t_areas (
area_id VARCHAR(50) NOT NULL,
area_name VARCHAR(100) NOT NULL,
area_type VARCHAR(20) NOT NULL,
area_geom GEOMETRY(MULTIPOLYGON, 4326) NOT NULL,
properties JSONB,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT t_areas_pkey PRIMARY KEY (area_id)
);
CREATE INDEX IF NOT EXISTS idx_t_areas_area_geom ON signal.t_areas USING GIST (area_geom);
COMMENT ON TABLE signal.t_areas IS '사용자 정의 영역 정보';
-- t_area_vessel_tracks: 영역별 선박 항적 (5분, 일별 파티션)
CREATE TABLE IF NOT EXISTS signal.t_area_vessel_tracks (
area_id VARCHAR(50) NOT NULL,
mmsi VARCHAR(20) NOT NULL,
time_bucket TIMESTAMP NOT NULL,
distance_nm NUMERIC(10,2),
avg_speed NUMERIC(6,2),
point_count INTEGER,
metrics JSONB,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT t_area_vessel_tracks_pkey PRIMARY KEY (area_id, mmsi, time_bucket)
) PARTITION BY RANGE (time_bucket);
CREATE INDEX IF NOT EXISTS idx_area_vessel_tracks_mmsi_time ON signal.t_area_vessel_tracks (mmsi, time_bucket DESC);
CREATE INDEX IF NOT EXISTS idx_area_vessel_tracks_area_time ON signal.t_area_vessel_tracks (area_id, time_bucket DESC);
COMMENT ON TABLE signal.t_area_vessel_tracks IS '영역별 선박 항적 (5분 단위)';
-- t_area_tracks_summary: 영역별 항적 요약 (5분, 일별 파티션)
CREATE TABLE IF NOT EXISTS signal.t_area_tracks_summary (
area_id VARCHAR(50) NOT NULL,
time_bucket TIMESTAMP NOT NULL,
total_vessels INTEGER,
total_distance_nm NUMERIC(12,2),
avg_speed NUMERIC(6,2),
vessel_list JSONB,
metrics_summary JSONB,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT t_area_tracks_summary_pkey PRIMARY KEY (area_id, time_bucket)
) PARTITION BY RANGE (time_bucket);
COMMENT ON TABLE signal.t_area_tracks_summary IS '영역별 5분 단위 항적 요약 통계';
COMMENT ON COLUMN signal.t_area_tracks_summary.vessel_list IS '선박별 상세 정보 [{mmsi, distance_nm, avg_speed}]';
-- t_area_tracks_summary_hourly: 영역별 시간별 요약 (월별 파티션)
CREATE TABLE IF NOT EXISTS signal.t_area_tracks_summary_hourly (
area_id VARCHAR(50) NOT NULL,
time_bucket TIMESTAMP NOT NULL,
total_vessels INTEGER,
total_distance_nm NUMERIC(12,2),
avg_speed NUMERIC(6,2),
vessel_list JSONB,
metrics_summary JSONB,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT t_area_tracks_summary_hourly_pkey PRIMARY KEY (area_id, time_bucket)
) PARTITION BY RANGE (time_bucket);
CREATE INDEX IF NOT EXISTS idx_area_tracks_summary_hourly_time ON signal.t_area_tracks_summary_hourly (time_bucket DESC, area_id);
COMMENT ON TABLE signal.t_area_tracks_summary_hourly IS '영역별 시간별 항적 요약 통계';
-- t_area_tracks_summary_daily: 영역별 일별 요약 (월별 파티션)
CREATE TABLE IF NOT EXISTS signal.t_area_tracks_summary_daily (
area_id VARCHAR(50) NOT NULL,
time_bucket DATE NOT NULL,
total_vessels INTEGER,
total_distance_nm NUMERIC(12,2),
avg_speed NUMERIC(6,2),
vessel_list JSONB,
metrics_summary JSONB,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT t_area_tracks_summary_daily_pkey PRIMARY KEY (area_id, time_bucket)
) PARTITION BY RANGE (time_bucket);
CREATE INDEX IF NOT EXISTS idx_area_tracks_summary_daily_time ON signal.t_area_tracks_summary_daily (time_bucket DESC, area_id);
COMMENT ON TABLE signal.t_area_tracks_summary_daily IS '영역별 일일 항적 요약 통계';
-- t_area_statistics: 영역별 선박 통계 (5분, 일별 파티션)
CREATE TABLE IF NOT EXISTS signal.t_area_statistics (
area_id VARCHAR(50) NOT NULL,
time_bucket TIMESTAMP NOT NULL,
vessel_count INTEGER DEFAULT 0,
in_count INTEGER DEFAULT 0,
out_count INTEGER DEFAULT 0,
transit_vessels JSONB,
stationary_vessels JSONB,
avg_sog NUMERIC(25,1),
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT t_area_statistics_pkey PRIMARY KEY (area_id, time_bucket)
) PARTITION BY RANGE (time_bucket);
CREATE INDEX IF NOT EXISTS idx_area_stats_lookup ON signal.t_area_statistics (area_id, time_bucket DESC);
COMMENT ON TABLE signal.t_area_statistics IS '영역별 5분 단위 선박 통계';
-- ============================================================
-- 5. 비정상 항적 테이블 — 파티션
-- ============================================================
-- t_abnormal_tracks: 비정상 항적 (월별 파티션)
-- id는 GENERATED ALWAYS로 자동 생성
CREATE TABLE IF NOT EXISTS signal.t_abnormal_tracks (
id BIGINT GENERATED ALWAYS AS IDENTITY,
mmsi VARCHAR(20) NOT NULL,
time_bucket TIMESTAMP NOT NULL,
track_geom GEOMETRY(LINESTRINGM, 4326),
abnormal_type VARCHAR(50) NOT NULL,
abnormal_reason JSONB NOT NULL,
distance_nm NUMERIC(10,2),
avg_speed NUMERIC(6,2),
max_speed NUMERIC(6,2),
point_count INTEGER,
source_table VARCHAR(50) NOT NULL,
detected_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT t_abnormal_tracks_pkey PRIMARY KEY (id, time_bucket)
) PARTITION BY RANGE (time_bucket);
-- ON CONFLICT (mmsi, time_bucket, source_table) 지원
CREATE UNIQUE INDEX IF NOT EXISTS abnormal_tracks_uk ON signal.t_abnormal_tracks (mmsi, time_bucket, source_table);
CREATE INDEX IF NOT EXISTS idx_abnormal_tracks_mmsi ON signal.t_abnormal_tracks (mmsi);
CREATE INDEX IF NOT EXISTS idx_abnormal_tracks_time ON signal.t_abnormal_tracks (time_bucket);
CREATE INDEX IF NOT EXISTS idx_abnormal_tracks_type ON signal.t_abnormal_tracks (abnormal_type);
CREATE INDEX IF NOT EXISTS idx_abnormal_tracks_geom ON signal.t_abnormal_tracks USING GIST (track_geom);
COMMENT ON TABLE signal.t_abnormal_tracks IS '비정상 선박 항적';
COMMENT ON COLUMN signal.t_abnormal_tracks.mmsi IS 'MMSI (VARCHAR)';
COMMENT ON COLUMN signal.t_abnormal_tracks.abnormal_type IS '비정상 유형 (excessive_speed, teleport, impossible_distance, excessive_avg_speed, gap_jump)';
COMMENT ON COLUMN signal.t_abnormal_tracks.source_table IS '검출 원본 테이블 (t_vessel_tracks_5min/hourly/daily)';
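-- Writer sketch against the unique index above (illustrative values only;
-- commented out so this DDL script stays side-effect free, and the target
-- partition must exist before running it):
-- INSERT INTO signal.t_abnormal_tracks
--     (mmsi, time_bucket, abnormal_type, abnormal_reason, source_table)
-- VALUES
--     ('440123456', TIMESTAMP '2025-01-07 10:00:00', 'excessive_speed',
--      '{"max_speed": 85.2, "threshold_kn": 60}'::jsonb, 't_vessel_tracks_5min')
-- ON CONFLICT (mmsi, time_bucket, source_table) DO UPDATE SET
--     abnormal_type   = EXCLUDED.abnormal_type,
--     abnormal_reason = EXCLUDED.abnormal_reason,
--     detected_at     = NOW();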
-- t_abnormal_track_stats: 비정상 항적 일별 통계 (일반 테이블)
CREATE TABLE IF NOT EXISTS signal.t_abnormal_track_stats (
stat_date DATE NOT NULL,
abnormal_type VARCHAR(50) NOT NULL,
vessel_count INTEGER NOT NULL,
track_count INTEGER NOT NULL,
total_points INTEGER,
avg_deviation NUMERIC(10,2),
max_deviation NUMERIC(10,2),
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
CONSTRAINT t_abnormal_track_stats_pkey PRIMARY KEY (stat_date, abnormal_type)
);
CREATE INDEX IF NOT EXISTS idx_abnormal_track_stats_date ON signal.t_abnormal_track_stats (stat_date);
COMMENT ON TABLE signal.t_abnormal_track_stats IS '비정상 항적 일별 통계';
-- ============================================================
-- 6. 타일 요약 테이블 — 파티션
-- ============================================================
-- t_tile_summary: 타일별 선박 요약 (5분, 일별 파티션)
-- ON CONFLICT (tile_id, time_bucket) 지원을 위해 UNIQUE 추가
CREATE TABLE IF NOT EXISTS signal.t_tile_summary (
tile_id VARCHAR(50) NOT NULL,
tile_level INTEGER NOT NULL,
time_bucket TIMESTAMP NOT NULL,
vessel_count INTEGER DEFAULT 0,
unique_vessels JSONB,
total_points BIGINT DEFAULT 0,
avg_sog NUMERIC(25,1),
max_sog NUMERIC(25,1),
vessel_density NUMERIC(10,6),
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
haegu_no INTEGER,
sohaegu_no INTEGER,
CONSTRAINT t_tile_summary_pkey PRIMARY KEY (tile_id, time_bucket, tile_level)
) PARTITION BY RANGE (time_bucket);
-- ConcurrentUpdateManager에서 ON CONFLICT (tile_id, time_bucket) 사용
CREATE UNIQUE INDEX IF NOT EXISTS idx_tile_summary_tile_time_uk ON signal.t_tile_summary (tile_id, time_bucket);
CREATE INDEX IF NOT EXISTS idx_tile_summary_time ON signal.t_tile_summary (time_bucket DESC);
CREATE INDEX IF NOT EXISTS idx_tile_summary_vessel_count ON signal.t_tile_summary (vessel_count DESC);
CREATE INDEX IF NOT EXISTS idx_tile_summary_tile_level ON signal.t_tile_summary (tile_level);
COMMENT ON TABLE signal.t_tile_summary IS '타일별 5분 단위 선박 요약 통계';
COMMENT ON COLUMN signal.t_tile_summary.unique_vessels IS '고유 선박 목록 [{mmsi}]';
-- ============================================================
-- 7. 배치 성능 메트릭 (일반 테이블)
-- ============================================================
CREATE TABLE IF NOT EXISTS signal.t_batch_performance_metrics (
id SERIAL PRIMARY KEY,
job_name VARCHAR(100) NOT NULL,
execution_id BIGINT NOT NULL,
start_time TIMESTAMP NOT NULL,
end_time TIMESTAMP,
duration_seconds BIGINT,
total_read BIGINT,
total_write BIGINT,
throughput_per_sec NUMERIC(10,2),
status VARCHAR(20),
error_message TEXT,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX IF NOT EXISTS idx_batch_metrics_job ON signal.t_batch_performance_metrics (job_name, start_time DESC);
CREATE INDEX IF NOT EXISTS idx_batch_metrics_status ON signal.t_batch_performance_metrics (status) WHERE status != 'COMPLETED';
COMMENT ON TABLE signal.t_batch_performance_metrics IS '배치 작업 성능 메트릭';
-- ============================================================
-- 8. 초기 파티션 생성 (수동 실행용)
-- PartitionManager가 런타임에 자동 생성하지만,
-- 최초 배포 시 수동으로 미리 생성할 수 있음.
-- ============================================================
-- 일별 파티션 생성 함수
CREATE OR REPLACE FUNCTION signal.create_daily_partition(
parent_table TEXT,
target_date DATE
) RETURNS VOID AS $$
DECLARE
partition_name TEXT;
start_date DATE;
end_date DATE;
BEGIN
partition_name := parent_table || '_' || to_char(target_date, 'YYMMDD');
start_date := target_date;
end_date := target_date + INTERVAL '1 day';
EXECUTE format(
'CREATE TABLE IF NOT EXISTS signal.%I PARTITION OF signal.%I FOR VALUES FROM (%L) TO (%L)',
partition_name, parent_table, start_date, end_date
);
END;
$$ LANGUAGE plpgsql;
-- 월별 파티션 생성 함수
CREATE OR REPLACE FUNCTION signal.create_monthly_partition(
parent_table TEXT,
target_date DATE
) RETURNS VOID AS $$
DECLARE
partition_name TEXT;
start_date DATE;
end_date DATE;
BEGIN
partition_name := parent_table || '_' || to_char(target_date, 'YYYY_MM');
start_date := date_trunc('month', target_date);
end_date := date_trunc('month', target_date) + INTERVAL '1 month';
EXECUTE format(
'CREATE TABLE IF NOT EXISTS signal.%I PARTITION OF signal.%I FOR VALUES FROM (%L) TO (%L)',
partition_name, parent_table, start_date, end_date
);
END;
$$ LANGUAGE plpgsql;
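-- Example invocations (idempotent thanks to IF NOT EXISTS inside the functions):
SELECT signal.create_daily_partition('t_vessel_tracks_5min', CURRENT_DATE + 1);
SELECT signal.create_monthly_partition('t_vessel_tracks_hourly', CURRENT_DATE);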
-- 현재 월 + 다음 달 파티션 일괄 생성
DO $$
DECLARE
today DATE := CURRENT_DATE;
day_offset INTEGER;
daily_tables TEXT[] := ARRAY[
't_vessel_tracks_5min',
't_grid_vessel_tracks',
't_grid_tracks_summary',
't_area_vessel_tracks',
't_area_tracks_summary',
't_tile_summary',
't_area_statistics'
];
monthly_tables TEXT[] := ARRAY[
't_vessel_tracks_hourly',
't_vessel_tracks_daily',
't_grid_tracks_summary_hourly',
't_grid_tracks_summary_daily',
't_area_tracks_summary_hourly',
't_area_tracks_summary_daily',
't_abnormal_tracks'
];
tbl TEXT;
BEGIN
-- 일별 파티션: 오늘부터 7일간
FOREACH tbl IN ARRAY daily_tables LOOP
FOR day_offset IN 0..6 LOOP
PERFORM signal.create_daily_partition(tbl, today + day_offset);
END LOOP;
END LOOP;
-- 월별 파티션: 이번 달 + 다음 달
FOREACH tbl IN ARRAY monthly_tables LOOP
PERFORM signal.create_monthly_partition(tbl, today);
PERFORM signal.create_monthly_partition(tbl, (today + INTERVAL '1 month')::DATE);
END LOOP;
RAISE NOTICE 'Initial partitions created successfully';
END;
$$;
-- ============================================================
-- 9. ANALYZE (통계 수집)
-- ============================================================
ANALYZE signal.t_ais_position;
ANALYZE signal.t_haegu_definitions;
ANALYZE signal.t_grid_tiles;
ANALYZE signal.t_areas;
ANALYZE signal.t_abnormal_track_stats;
ANALYZE signal.t_batch_performance_metrics;

파일 보기

@ -0,0 +1,68 @@
-- Unix timestamp 변환 함수
CREATE OR REPLACE FUNCTION signal.convert_to_unix_timestamp(
geom geometry,
base_time timestamp without time zone
) RETURNS geometry AS $$
DECLARE
wkt_text text;
points text[];
point_text text;
coords text[];
result_wkt text;
unix_base bigint;
relative_seconds bigint;
unix_time bigint;
i integer;
BEGIN
IF geom IS NULL THEN
RETURN NULL;
END IF;
-- Unix timestamp 기준값
unix_base := EXTRACT(EPOCH FROM base_time AT TIME ZONE 'Asia/Seoul')::bigint;
-- WKT 텍스트 추출
wkt_text := ST_AsText(geom);
-- LINESTRING M(...) 에서 좌표 부분만 추출
wkt_text := substring(wkt_text from 'LINESTRING M\((.*)\)');
-- 각 포인트를 배열로 분리
points := string_to_array(wkt_text, ', ');
-- 결과 WKT 시작
result_wkt := 'LINESTRING M(';
-- 각 포인트 처리
FOR i IN 1..array_length(points, 1) LOOP
-- 좌표를 공백으로 분리 (lon lat m)
coords := string_to_array(points[i], ' ');
-- M값(상대시간 초) 추출 및 Unix timestamp로 변환
relative_seconds := coords[3]::bigint;
unix_time := unix_base + relative_seconds;
-- 결과에 추가
IF i > 1 THEN
result_wkt := result_wkt || ', ';
END IF;
result_wkt := result_wkt || coords[1] || ' ' || coords[2] || ' ' || unix_time;
END LOOP;
result_wkt := result_wkt || ')';
-- geometry 타입으로 변환하여 반환
RETURN ST_GeomFromText(result_wkt, 4326);
END;
$$ LANGUAGE plpgsql IMMUTABLE PARALLEL SAFE;
-- 함수 테스트
SELECT
sig_src_cd,
target_id,
time_bucket,
ST_AsText(track_geom) as original,
ST_AsText(signal.convert_to_unix_timestamp(track_geom, time_bucket)) as converted
FROM signal.t_vessel_tracks_5min
WHERE track_geom IS NOT NULL
LIMIT 1;

sql/simple_update_v2.sql Normal file
파일 보기

@ -0,0 +1,42 @@
-- Direct UPDATE on the hourly table (no helper function).
-- NOTE: the nested REPLACE below splices epoch text before the first and after
-- the last coordinate, which can produce malformed LINESTRING M WKT; prefer
-- signal.convert_to_unix_timestamp (see update_missing_v2.sql) for a correct
-- per-point conversion.
UPDATE signal.t_vessel_tracks_hourly AS h
SET track_geom_v2 = ST_GeomFromText(
REPLACE(
REPLACE(ST_AsText(track_geom), 'LINESTRING M(',
'LINESTRING M(' ||
CASE
WHEN ST_M(ST_PointN(track_geom, 1)) = 0
THEN EXTRACT(EPOCH FROM time_bucket + INTERVAL '9 hours')::text
ELSE (EXTRACT(EPOCH FROM time_bucket + INTERVAL '9 hours')::bigint + ST_M(ST_PointN(track_geom, 1)))::text
END || ' '
),
')',
EXTRACT(EPOCH FROM time_bucket + INTERVAL '9 hours')::text || ')'
),
4326
)
WHERE time_bucket = '2025-08-07 14:00:00'
AND track_geom IS NOT NULL
AND track_geom_v2 IS NULL;
-- daily 테이블 직접 UPDATE
UPDATE signal.t_vessel_tracks_daily AS d
SET track_geom_v2 = track_geom -- 임시로 복사 (정확한 변환은 나중에)
WHERE time_bucket = DATE_TRUNC('day', NOW())
AND track_geom IS NOT NULL
AND track_geom_v2 IS NULL;
-- 결과 확인
SELECT
'hourly' as table_type,
COUNT(*) as total,
COUNT(track_geom_v2) as v2_filled
FROM signal.t_vessel_tracks_hourly
WHERE time_bucket = '2025-08-07 14:00:00'
UNION ALL
SELECT
'daily' as table_type,
COUNT(*) as total,
COUNT(track_geom_v2) as v2_filled
FROM signal.t_vessel_tracks_daily
WHERE time_bucket = DATE_TRUNC('day', NOW());

sql/update_missing_v2.sql Normal file
파일 보기

@ -0,0 +1,40 @@
-- Unix timestamp 변환을 위한 간단한 UPDATE 쿼리
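-- The UPDATEs below assume each aggregate table already carries a
-- track_geom_v2 column. A minimal, idempotent preparation sketch (column type
-- inferred from the existing track_geom column — verify against the real
-- migration before running):
ALTER TABLE signal.t_vessel_tracks_5min   ADD COLUMN IF NOT EXISTS track_geom_v2 geometry(LINESTRINGM, 4326);
ALTER TABLE signal.t_vessel_tracks_hourly ADD COLUMN IF NOT EXISTS track_geom_v2 geometry(LINESTRINGM, 4326);
ALTER TABLE signal.t_vessel_tracks_daily  ADD COLUMN IF NOT EXISTS track_geom_v2 geometry(LINESTRINGM, 4326);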
-- 5분 집계 테이블
UPDATE signal.t_vessel_tracks_5min
SET track_geom_v2 = signal.convert_to_unix_timestamp(track_geom, time_bucket)
WHERE time_bucket >= NOW() - INTERVAL '2 hours'
AND track_geom IS NOT NULL
AND track_geom_v2 IS NULL;
-- 1시간 집계 테이블 (오후 2시 데이터)
UPDATE signal.t_vessel_tracks_hourly
SET track_geom_v2 = signal.convert_to_unix_timestamp(track_geom, time_bucket)
WHERE time_bucket = '2025-08-07 14:00:00'
AND track_geom IS NOT NULL
AND track_geom_v2 IS NULL;
-- 일별 집계 테이블 (오늘 데이터)
UPDATE signal.t_vessel_tracks_daily
SET track_geom_v2 = signal.convert_to_unix_timestamp(track_geom, time_bucket)
WHERE time_bucket = DATE_TRUNC('day', NOW())
AND track_geom IS NOT NULL
AND track_geom_v2 IS NULL;
-- 결과 확인
SELECT
'hourly' as table_type,
COUNT(*) as total_records,
COUNT(track_geom) as v1_count,
COUNT(track_geom_v2) as v2_count
FROM signal.t_vessel_tracks_hourly
WHERE time_bucket = '2025-08-07 14:00:00'
UNION ALL
SELECT
'daily' as table_type,
COUNT(*) as total_records,
COUNT(track_geom) as v1_count,
COUNT(track_geom_v2) as v2_count
FROM signal.t_vessel_tracks_daily
WHERE time_bucket = DATE_TRUNC('day', NOW());

파일 보기

@ -28,8 +28,8 @@ public class BatchCommandLineRunner implements CommandLineRunner {
private JobLauncher jobLauncher;
@Autowired
@Qualifier("vesselAggregationJob")
private Job vesselAggregationJob;
@Qualifier("vesselTrackAggregationJob")
private Job vesselTrackAggregationJob;
private final BatchUtils batchUtils;
@ -48,7 +48,7 @@ public class BatchCommandLineRunner implements CommandLineRunner {
log.info("Running batch job from {} to {}", startTime, endTime);
JobParameters params = batchUtils.createJobParameters(startTime, endTime);
-JobExecution execution = jobLauncher.run(vesselAggregationJob, params);
+JobExecution execution = jobLauncher.run(vesselTrackAggregationJob, params);
log.info("Batch job completed: {}", execution.getStatus());
} else {

파일 보기

@ -0,0 +1,144 @@
package gc.mda.signal_batch.batch.job;
import gc.mda.signal_batch.batch.reader.AisTargetCacheManager;
import gc.mda.signal_batch.domain.vessel.model.AisTargetEntity;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.step.builder.StepBuilder;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.transaction.PlatformTransactionManager;
import javax.sql.DataSource;
import java.sql.Timestamp;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
/**
* Piggybacks on the 5-minute aggregation job: UPSERTs the in-memory cache snapshot into t_ais_position.
*
* Purpose:
* - Cache restore on service restart (consumed by ChnPrmShipCacheWarmer)
* - Latest-position lookup for processes without access to the in-memory cache
* - Fallback for environments where the API is unreachable
*/
@Slf4j
@Configuration
@Profile("!query")
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
public class AisPositionSyncStepConfig {
private final JobRepository jobRepository;
private final DataSource queryDataSource;
private final PlatformTransactionManager transactionManager;
private final AisTargetCacheManager cacheManager;
public AisPositionSyncStepConfig(
JobRepository jobRepository,
@Qualifier("queryDataSource") DataSource queryDataSource,
@Qualifier("queryTransactionManager") PlatformTransactionManager transactionManager,
AisTargetCacheManager cacheManager) {
this.jobRepository = jobRepository;
this.queryDataSource = queryDataSource;
this.transactionManager = transactionManager;
this.cacheManager = cacheManager;
}
@Bean
public Step aisPositionSyncStep() {
return new StepBuilder("aisPositionSyncStep", jobRepository)
.tasklet((contribution, chunkContext) -> {
Collection<AisTargetEntity> entities = cacheManager.getAllValues();
if (entities.isEmpty()) {
log.debug("캐시에 데이터 없음 — t_ais_position 동기화 스킵");
return org.springframework.batch.repeat.RepeatStatus.FINISHED;
}
JdbcTemplate jdbcTemplate = new JdbcTemplate(queryDataSource);
String sql = """
INSERT INTO signal.t_ais_position (
mmsi, imo, name, callsign, vessel_type, extra_info,
lat, lon, geom,
heading, sog, cog, rot,
length, width, draught,
destination, eta, status,
message_timestamp, signal_kind_code, class_type,
last_update
) VALUES (
?, ?, ?, ?, ?, ?,
?, ?, public.ST_SetSRID(public.ST_MakePoint(?, ?), 4326),
?, ?, ?, ?,
?, ?, ?,
?, ?, ?,
?, ?, ?,
NOW()
)
ON CONFLICT (mmsi) DO UPDATE SET
imo = EXCLUDED.imo,
name = EXCLUDED.name,
callsign = EXCLUDED.callsign,
vessel_type = EXCLUDED.vessel_type,
extra_info = EXCLUDED.extra_info,
lat = EXCLUDED.lat,
lon = EXCLUDED.lon,
geom = EXCLUDED.geom,
heading = EXCLUDED.heading,
sog = EXCLUDED.sog,
cog = EXCLUDED.cog,
rot = EXCLUDED.rot,
length = EXCLUDED.length,
width = EXCLUDED.width,
draught = EXCLUDED.draught,
destination = EXCLUDED.destination,
eta = EXCLUDED.eta,
status = EXCLUDED.status,
message_timestamp = EXCLUDED.message_timestamp,
signal_kind_code = EXCLUDED.signal_kind_code,
class_type = EXCLUDED.class_type,
last_update = NOW()
""";
List<Object[]> batchArgs = new ArrayList<>();
for (AisTargetEntity e : entities) {
if (e.getMmsi() == null || e.getLat() == null || e.getLon() == null) {
continue;
}
Timestamp msgTs = e.getMessageTimestamp() != null
? Timestamp.from(e.getMessageTimestamp().toInstant())
: null;
Timestamp etaTs = e.getEta() != null
? Timestamp.from(e.getEta().toInstant())
: null;
batchArgs.add(new Object[] {
e.getMmsi(), e.getImo(), e.getName(), e.getCallsign(),
e.getVesselType(), e.getExtraInfo(),
e.getLat(), e.getLon(),
e.getLon(), e.getLat(), // ST_MakePoint(lon, lat)
e.getHeading(), e.getSog(), e.getCog(), e.getRot(),
e.getLength(), e.getWidth(), e.getDraught(),
e.getDestination(), etaTs, e.getStatus(),
msgTs, e.getSignalKindCode(), e.getClassType()
});
}
if (!batchArgs.isEmpty()) {
int[] results = jdbcTemplate.batchUpdate(sql, batchArgs);
log.info("t_ais_position 동기화 완료: {} 건 UPSERT", results.length);
}
return org.springframework.batch.repeat.RepeatStatus.FINISHED;
}, transactionManager)
.build();
}
}

파일 보기

@ -0,0 +1,96 @@
package gc.mda.signal_batch.batch.job;
import gc.mda.signal_batch.batch.processor.AisTargetDataProcessor;
import gc.mda.signal_batch.batch.reader.AisTargetDataReader;
import gc.mda.signal_batch.batch.writer.AisTargetCacheWriter;
import gc.mda.signal_batch.domain.vessel.dto.AisTargetDto;
import gc.mda.signal_batch.domain.vessel.model.AisTargetEntity;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobExecutionListener;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.job.builder.JobBuilder;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.step.builder.StepBuilder;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.web.reactive.function.client.WebClient;
/**
* AIS Target Import Job configuration.
*
* Runs every minute: S&P AIS API -> DTO conversion -> in-memory cache store.
* Chunk size: 50,000 (roughly 33,000 records arrive per API call).
*
* No DB writes here; only the cache is updated.
* The t_ais_position UPSERT piggybacks on the Phase 3 five-minute aggregation job.
*/
@Slf4j
@Configuration
@Profile("!query")
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
public class AisTargetImportJobConfig {
private final JobRepository jobRepository;
private final PlatformTransactionManager transactionManager;
private final AisTargetDataProcessor processor;
private final AisTargetCacheWriter writer;
private final WebClient aisApiWebClient;
@Value("${app.ais-api.since-seconds:60}")
private int sinceSeconds;
@Value("${app.ais-api.chunk-size:50000}")
private int chunkSize;
public AisTargetImportJobConfig(
JobRepository jobRepository,
@Qualifier("batchTransactionManager") PlatformTransactionManager transactionManager,
AisTargetDataProcessor processor,
AisTargetCacheWriter writer,
@Qualifier("aisApiWebClient") WebClient aisApiWebClient) {
this.jobRepository = jobRepository;
this.transactionManager = transactionManager;
this.processor = processor;
this.writer = writer;
this.aisApiWebClient = aisApiWebClient;
}
@Bean(name = "aisTargetImportStep")
public Step aisTargetImportStep() {
return new StepBuilder("aisTargetImportStep", jobRepository)
.<AisTargetDto, AisTargetEntity>chunk(chunkSize, transactionManager)
.reader(new AisTargetDataReader(aisApiWebClient, sinceSeconds))
.processor(processor)
.writer(writer)
.build();
}
@Bean(name = "aisTargetImportJob")
public Job aisTargetImportJob() {
return new JobBuilder("aisTargetImportJob", jobRepository)
.start(aisTargetImportStep())
.listener(new JobExecutionListener() {
@Override
public void beforeJob(JobExecution jobExecution) {
log.info("[aisTargetImportJob] Job 시작");
}
@Override
public void afterJob(JobExecution jobExecution) {
log.info("[aisTargetImportJob] Job 완료 - 상태: {}, 처리: {} 건",
jobExecution.getStatus(),
jobExecution.getStepExecutions().stream()
.mapToLong(se -> se.getWriteCount())
.sum());
}
})
.build();
}
}

파일 보기

@ -1,220 +0,0 @@
package gc.mda.signal_batch.batch.job;
import gc.mda.signal_batch.domain.vessel.model.VesselData;
import gc.mda.signal_batch.batch.processor.AccumulatingAreaProcessor;
import gc.mda.signal_batch.batch.processor.AreaStatisticsProcessor;
import gc.mda.signal_batch.batch.processor.AreaStatisticsProcessor.AreaStatistics;
import gc.mda.signal_batch.batch.reader.InMemoryVesselDataReader;
import gc.mda.signal_batch.batch.reader.PartitionedReader;
import gc.mda.signal_batch.batch.reader.VesselDataReader;
import gc.mda.signal_batch.batch.writer.UpsertWriter;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.ExitStatus;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.StepExecutionListener;
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.core.partition.support.TaskExecutorPartitionHandler;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.step.builder.StepBuilder;
import org.springframework.batch.item.Chunk;
import org.springframework.batch.item.ItemReader;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.core.task.TaskExecutor;
import org.springframework.transaction.PlatformTransactionManager;
import java.time.LocalDateTime;
import java.util.List;
@Slf4j
@Configuration
@Profile("!query") // query 프로파일에서는 배치 작업 비활성화
@RequiredArgsConstructor
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
public class AreaStatisticsStepConfig {
private final JobRepository jobRepository;
private final PlatformTransactionManager queryTransactionManager;
private final VesselDataReader vesselDataReader;
private final AccumulatingAreaProcessor accumulatingAreaProcessor;
private final AreaStatisticsProcessor areaStatisticsProcessor;
private final UpsertWriter upsertWriter;
private final PartitionedReader partitionedReader;
private final ApplicationContext applicationContext;
@Value("${vessel.batch.area-statistics.chunk-size:1000}")
private int areaChunkSize;
@Value("${vessel.batch.area-statistics.batch-size:500}")
private int areaBatchSize;
@Qualifier("batchTaskExecutor")
private final TaskExecutor batchTaskExecutor;
@Qualifier("partitionTaskExecutor")
private final TaskExecutor partitionTaskExecutor;
@Bean
public Step aggregateAreaStatisticsStep() {
// InMemoryVesselDataReader를 ApplicationContext에서 가져옴
InMemoryVesselDataReader inMemoryReader = applicationContext.getBean(InMemoryVesselDataReader.class);
return new StepBuilder("aggregateAreaStatisticsStep", jobRepository)
.<VesselData, AreaStatistics>chunk(areaChunkSize, queryTransactionManager)
.reader(inMemoryReader) // 메모리 기반 Reader 사용
.processor(accumulatingAreaProcessor)
.writer(items -> {}) // no-op writer, 실제 저장은 listener에서 수행
.listener(areaStatisticsStepListener())
.faultTolerant()
.skipLimit(100)
.skip(Exception.class)
.build();
}
@Bean
public Step partitionedAreaStatisticsStep() {
return new StepBuilder("partitionedAreaStatisticsStep", jobRepository)
.partitioner("areaStatisticsPartitioner", partitionedReader.dayPartitioner(null))
.partitionHandler(areaStatisticsPartitionHandler())
.build();
}
@Bean
public TaskExecutorPartitionHandler areaStatisticsPartitionHandler() {
TaskExecutorPartitionHandler handler = new TaskExecutorPartitionHandler();
handler.setTaskExecutor(partitionTaskExecutor);
handler.setStep(areaStatisticsSlaveStep());
handler.setGridSize(24);
return handler;
}
@Bean
public Step areaStatisticsSlaveStep() {
return new StepBuilder("areaStatisticsSlaveStep", jobRepository)
.<List<VesselData>, List<AreaStatistics>>chunk(50, queryTransactionManager)
.reader(slaveAreaBatchVesselDataReader(null, null, null))
.processor(areaStatisticsProcessor.batchProcessor())
.writer(upsertWriter.areaStatisticsWriter())
.faultTolerant()
.skipLimit(100)
.skip(Exception.class)
.build();
}
@Bean
@StepScope
public ItemReader<VesselData> areaVesselDataReader(
@Value("#{jobParameters['startTime']}") String startTimeStr,
@Value("#{jobParameters['endTime']}") String endTimeStr) {
return new ItemReader<VesselData>() {
private ItemReader<VesselData> delegate;
private boolean initialized = false;
@Override
public VesselData read() throws Exception {
if (!initialized) {
LocalDateTime startTime = startTimeStr != null ? LocalDateTime.parse(startTimeStr) : null;
LocalDateTime endTime = endTimeStr != null ? LocalDateTime.parse(endTimeStr) : null;
// 기존 reader가 남아 있으면 close
if (delegate != null) {
try {
((org.springframework.batch.item.ItemStream) delegate).close();
} catch (Exception e) {
log.debug("Failed to close previous reader: {}", e.getMessage());
}
}
// 최신 위치만 사용
delegate = vesselDataReader.vesselLatestPositionReader(startTime, endTime, null);
((org.springframework.batch.item.ItemStream) delegate).open(
org.springframework.batch.core.scope.context.StepSynchronizationManager
.getContext().getStepExecution().getExecutionContext());
initialized = true;
}
VesselData data = delegate.read();
// Reader 종료 시 close
if (data == null && delegate != null) {
try {
((org.springframework.batch.item.ItemStream) delegate).close();
delegate = null;
initialized = false;
} catch (Exception e) {
log.debug("Failed to close reader on completion: {}", e.getMessage());
}
}
return data;
}
};
}
@Bean
@StepScope
public ItemReader<List<VesselData>> slaveAreaBatchVesselDataReader(
@Value("#{stepExecutionContext['startTime']}") String startTime,
@Value("#{stepExecutionContext['endTime']}") String endTime,
@Value("#{stepExecutionContext['partition']}") String partition) {
return new ItemReader<List<VesselData>>() {
private ItemReader<VesselData> delegate = vesselDataReader.vesselDataPagingReader(
startTime != null ? LocalDateTime.parse(startTime) : null,
endTime != null ? LocalDateTime.parse(endTime) : null,
partition
);
@Override
public List<VesselData> read() throws Exception {
List<VesselData> batch = new java.util.ArrayList<>();
for (int i = 0; i < areaBatchSize; i++) {
VesselData item = delegate.read();
if (item == null) {
break;
}
batch.add(item);
}
return batch.isEmpty() ? null : batch;
}
};
}
@Bean
public StepExecutionListener areaStatisticsStepListener() {
return new StepExecutionListener() {
@Override
public ExitStatus afterStep(StepExecution stepExecution) {
// 누적된 데이터를 DB에 저장
@SuppressWarnings("unchecked")
List<AreaStatistics> statistics = (List<AreaStatistics>)
stepExecution.getExecutionContext().get("areaStatistics");
if (statistics != null && !statistics.isEmpty()) {
try {
upsertWriter.areaStatisticsWriter().write(
new Chunk<>(List.of(statistics))
);
log.info("Successfully wrote {} area statistics", statistics.size());
} catch (Exception e) {
log.error("Failed to write area statistics", e);
throw new RuntimeException(e);
}
}
return stepExecution.getExitStatus();
}
};
}
}

파일 보기

@ -1,6 +1,7 @@
package gc.mda.signal_batch.batch.job;
import gc.mda.signal_batch.batch.listener.JobCompletionListener;
import gc.mda.signal_batch.batch.reader.HourlyTrackCache;
import gc.mda.signal_batch.global.websocket.service.DailyTrackCacheManager;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
@ -17,9 +18,11 @@ import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import java.time.LocalDateTime;
@Slf4j
@Configuration
@Profile("!query") // query 프로파일에서는 배치 작업 비활성화
@Profile("!query")
@RequiredArgsConstructor
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
public class DailyAggregationJobConfig {
@ -28,6 +31,7 @@ public class DailyAggregationJobConfig {
private final DailyAggregationStepConfig dailyAggregationStepConfig;
private final JobCompletionListener jobCompletionListener;
private final DailyTrackCacheManager dailyTrackCacheManager;
private final HourlyTrackCache hourlyTrackCache;
@Bean
public Job dailyAggregationJob() {
@ -53,14 +57,26 @@ public class DailyAggregationJobConfig {
@Override
public void afterJob(JobExecution jobExecution) {
if (jobExecution.getStatus().isUnsuccessful()) {
log.warn("Daily aggregation job failed, skipping cache refresh");
log.warn("[CACHE-MONITOR] DailyJob FAILED — L2/L3 cleanup 건너뜀, status={}",
jobExecution.getStatus());
return;
}
try {
log.info("Daily aggregation job completed, refreshing daily track cache");
log.info("[CACHE-MONITOR] DailyJob 완료 → L3 refresh 시작, L2 size={}",
hourlyTrackCache.size());
dailyTrackCacheManager.refreshAfterDailyJob();
// hourly 캐시에서 어제 범위 제거
String startTime = jobExecution.getJobParameters().getString("startTime");
String endTime = jobExecution.getJobParameters().getString("endTime");
LocalDateTime start = LocalDateTime.parse(startTime);
LocalDateTime end = LocalDateTime.parse(endTime);
long l2Before = hourlyTrackCache.size();
hourlyTrackCache.removeRange(start, end);
log.info("[CACHE-MONITOR] DailyJob → L2 cleanup [{}, {}): L2 before={}, after={}, L2 stats=[{}]",
start, end, l2Before, hourlyTrackCache.size(), hourlyTrackCache.getStats());
} catch (Exception e) {
log.error("Failed to refresh daily track cache after job: {}", e.getMessage());
log.error("[CACHE-MONITOR] DailyJob 캐시 갱신/정리 실패: {}", e.getMessage());
}
}
};
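
Editor's note: the removeRange calls above imply a time-bucketed cache with half-open-range eviction. HourlyTrackCache and FiveMinTrackCache are defined elsewhere in this PR; below is a minimal sketch of that contract, assuming entries are keyed by time bucket in a ConcurrentSkipListMap (the real key layout, per-vessel grouping, and getStats() are omitted).

import java.time.LocalDateTime;
import java.util.List;
import java.util.concurrent.ConcurrentSkipListMap;

public class TimeBucketedTrackCacheSketch<T> {

    // bucket start time -> tracks aggregated for that bucket
    private final ConcurrentSkipListMap<LocalDateTime, List<T>> buckets =
            new ConcurrentSkipListMap<>();

    public long size() {
        return buckets.values().stream().mapToLong(List::size).sum();
    }

    // Evicts the half-open interval [start, end), matching the "[{}, {})" log format above.
    public void removeRange(LocalDateTime start, LocalDateTime end) {
        buckets.subMap(start, true, end, false).clear();
    }
}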

파일 보기

@ -117,10 +117,10 @@ public class DailyAggregationStepConfig {
LocalDateTime end = LocalDateTime.parse(endTime);
String sql = """
SELECT DISTINCT sig_src_cd, target_id, date_trunc('day', time_bucket) as day_bucket
SELECT DISTINCT mmsi, date_trunc('day', time_bucket) as day_bucket
FROM signal.t_vessel_tracks_hourly
WHERE time_bucket >= ? AND time_bucket < ?
ORDER BY sig_src_cd, target_id, day_bucket
ORDER BY mmsi, day_bucket
""";
return new JdbcCursorItemReaderBuilder<VesselTrack.VesselKey>()
@ -132,8 +132,7 @@ public class DailyAggregationStepConfig {
ps.setTimestamp(2, java.sql.Timestamp.valueOf(end));
})
.rowMapper((rs, rowNum) -> new VesselTrack.VesselKey(
rs.getString("sig_src_cd"),
rs.getString("target_id"),
rs.getString("mmsi"),
rs.getObject("day_bucket", LocalDateTime.class)
))
.build();
@ -226,7 +225,7 @@ public class DailyAggregationStepConfig {
FROM (
SELECT haegu_no, jsonb_array_elements(vessel_list) as vessel_list,
total_distance_nm, avg_speed,
(vessel_list->>'sig_src_cd') || '_' || (vessel_list->>'target_id') as vessel_key
(vessel_list->>'mmsi') as vessel_key
FROM signal.t_grid_tracks_summary_hourly
WHERE haegu_no = ?
AND time_bucket >= ?
@ -313,7 +312,7 @@ public class DailyAggregationStepConfig {
FROM (
SELECT area_id, jsonb_array_elements(vessel_list) as vessel_list,
total_distance_nm, avg_speed,
(vessel_list->>'sig_src_cd') || '_' || (vessel_list->>'target_id') as vessel_key
(vessel_list->>'mmsi') as vessel_key
FROM signal.t_area_tracks_summary_hourly
WHERE area_id = ?
AND time_bucket >= ?

파일 보기

@ -1,9 +1,12 @@
package gc.mda.signal_batch.batch.job;
import gc.mda.signal_batch.batch.listener.JobCompletionListener;
import gc.mda.signal_batch.batch.reader.FiveMinTrackCache;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobExecutionListener;
import org.springframework.batch.core.JobParametersValidator;
import org.springframework.batch.core.job.DefaultJobParametersValidator;
import org.springframework.batch.core.job.builder.JobBuilder;
@ -14,16 +17,20 @@ import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import java.time.LocalDateTime;
@Slf4j
@Configuration
@Profile("!query") // query 프로파일에서는 배치 작업 비활성화
@Profile("!query")
@RequiredArgsConstructor
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
public class HourlyAggregationJobConfig {
private final JobRepository jobRepository;
private final HourlyAggregationStepConfig hourlyAggregationStepConfig;
private final VesselStaticStepConfig vesselStaticStepConfig;
private final JobCompletionListener jobCompletionListener;
private final FiveMinTrackCache fiveMinTrackCache;
@Bean
public Job hourlyAggregationJob() {
@ -31,12 +38,40 @@ public class HourlyAggregationJobConfig {
.incrementer(new RunIdIncrementer())
.validator(hourlyJobParametersValidator())
.listener(jobCompletionListener)
.listener(hourlyFiveMinCleanupListener())
.start(hourlyAggregationStepConfig.mergeHourlyTracksStep())
.next(hourlyAggregationStepConfig.gridHourlySummaryStep())
.next(hourlyAggregationStepConfig.areaHourlySummaryStep())
.next(vesselStaticStepConfig.vesselStaticSyncStep())
.build();
}
@Bean
public JobExecutionListener hourlyFiveMinCleanupListener() {
return new JobExecutionListener() {
@Override
public void afterJob(JobExecution jobExecution) {
if (jobExecution.getStatus().isUnsuccessful()) {
log.info("[CACHE-MONITOR] HourlyJob FAILED — L1 cleanup 건너뜀, status={}",
jobExecution.getStatus());
return;
}
try {
String startTime = jobExecution.getJobParameters().getString("startTime");
String endTime = jobExecution.getJobParameters().getString("endTime");
LocalDateTime start = LocalDateTime.parse(startTime);
LocalDateTime end = LocalDateTime.parse(endTime);
long l1Before = fiveMinTrackCache.size();
fiveMinTrackCache.removeRange(start, end);
log.info("[CACHE-MONITOR] HourlyJob 완료 → L1 cleanup [{}, {}): L1 before={}, after={}, L1 stats=[{}]",
start, end, l1Before, fiveMinTrackCache.size(), fiveMinTrackCache.getStats());
} catch (Exception e) {
log.error("[CACHE-MONITOR] L1 cleanup 실패: {}", e.getMessage());
}
}
};
}
@Bean
public JobParametersValidator hourlyJobParametersValidator() {
DefaultJobParametersValidator validator = new DefaultJobParametersValidator();

파일 보기

@ -1,14 +1,15 @@
package gc.mda.signal_batch.batch.job;
import gc.mda.signal_batch.domain.vessel.model.VesselTrack;
import gc.mda.signal_batch.batch.processor.HourlyTrackProcessor;
import gc.mda.signal_batch.batch.processor.HourlyTrackProcessorWithAbnormalDetection;
import gc.mda.signal_batch.batch.processor.AbnormalTrackDetector;
import gc.mda.signal_batch.batch.processor.AbnormalTrackDetector.AbnormalDetectionResult;
import gc.mda.signal_batch.batch.processor.HourlyTrackMergeProcessor;
import gc.mda.signal_batch.batch.reader.CacheBasedHourlyTrackReader;
import gc.mda.signal_batch.batch.reader.FiveMinTrackCache;
import gc.mda.signal_batch.batch.reader.HourlyTrackCache;
import gc.mda.signal_batch.batch.writer.VesselTrackBulkWriter;
import gc.mda.signal_batch.batch.writer.AbnormalTrackWriter;
import gc.mda.signal_batch.batch.writer.CompositeTrackWriter;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.StepScope;
@ -29,12 +30,11 @@ import org.springframework.transaction.PlatformTransactionManager;
import javax.sql.DataSource;
import java.time.LocalDateTime;
import java.util.ArrayList;
import java.util.List;
@Slf4j
@Configuration
@Profile("!query") // query 프로파일에서는 배치 작업 비활성화
@Profile("!query")
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
public class HourlyAggregationStepConfig {
@ -44,6 +44,8 @@ public class HourlyAggregationStepConfig {
private final VesselTrackBulkWriter vesselTrackBulkWriter;
private final AbnormalTrackWriter abnormalTrackWriter;
private final AbnormalTrackDetector abnormalTrackDetector;
private final FiveMinTrackCache fiveMinTrackCache;
private final HourlyTrackCache hourlyTrackCache;
public HourlyAggregationStepConfig(
JobRepository jobRepository,
@ -51,42 +53,83 @@ public class HourlyAggregationStepConfig {
@Qualifier("queryTransactionManager") PlatformTransactionManager transactionManager,
VesselTrackBulkWriter vesselTrackBulkWriter,
AbnormalTrackWriter abnormalTrackWriter,
AbnormalTrackDetector abnormalTrackDetector) {
AbnormalTrackDetector abnormalTrackDetector,
FiveMinTrackCache fiveMinTrackCache,
HourlyTrackCache hourlyTrackCache) {
this.jobRepository = jobRepository;
this.queryDataSource = queryDataSource;
this.transactionManager = transactionManager;
this.vesselTrackBulkWriter = vesselTrackBulkWriter;
this.abnormalTrackWriter = abnormalTrackWriter;
this.abnormalTrackDetector = abnormalTrackDetector;
this.fiveMinTrackCache = fiveMinTrackCache;
this.hourlyTrackCache = hourlyTrackCache;
}
@Value("${vessel.batch.chunk-size:5000}")
private int chunkSize;
//
// Step 1: 5분 → 시간 병합 (인메모리 캐시 기반)
//
@Bean
public Step mergeHourlyTracksStep() {
// 비정상 궤적 검출은 항상 활성화 (설정 파일로 제어)
boolean detectAbnormal = true;
if (detectAbnormal) {
log.info("Building mergeHourlyTracksStep with abnormal detection enabled");
log.info("Building mergeHourlyTracksStep with cache-based in-memory merge");
HourlyTrackMergeProcessor processor = hourlyTrackMergeProcessor(null);
return new StepBuilder("mergeHourlyTracksStep", jobRepository)
.<VesselTrack.VesselKey, AbnormalDetectionResult>chunk(chunkSize, transactionManager)
.reader(hourlyVesselKeyReader(null, null))
.processor(hourlyTrackProcessorWithAbnormalDetection())
.<List<VesselTrack>, AbnormalDetectionResult>chunk(chunkSize, transactionManager)
.reader(cacheBasedHourlyTrackReader(null, null))
.processor(processor)
.writer(hourlyCompositeTrackWriter())
.build();
} else {
log.info("Building mergeHourlyTracksStep without abnormal detection");
return new StepBuilder("mergeHourlyTracksStep", jobRepository)
.<VesselTrack.VesselKey, VesselTrack>chunk(chunkSize, transactionManager)
.reader(hourlyVesselKeyReader(null, null))
.processor(hourlyTrackItemProcessor())
.writer(hourlyTrackWriter())
.listener(processor)
.build();
}
@Bean
@StepScope
public CacheBasedHourlyTrackReader cacheBasedHourlyTrackReader(
@Value("#{jobParameters['startTime']}") String startTime,
@Value("#{jobParameters['endTime']}") String endTime) {
LocalDateTime start = LocalDateTime.parse(startTime);
LocalDateTime end = LocalDateTime.parse(endTime);
return new CacheBasedHourlyTrackReader(
fiveMinTrackCache,
new JdbcTemplate(queryDataSource),
start, end);
}
@Bean
@StepScope
public HourlyTrackMergeProcessor hourlyTrackMergeProcessor(
@Value("#{jobParameters['timeBucket']}") String timeBucket) {
LocalDateTime hourBucket = LocalDateTime.parse(timeBucket)
.withMinute(0).withSecond(0).withNano(0);
return new HourlyTrackMergeProcessor(
abnormalTrackDetector,
new JdbcTemplate(queryDataSource),
hourBucket);
}
@Bean
public ItemWriter<AbnormalDetectionResult> hourlyCompositeTrackWriter() {
abnormalTrackWriter.setJobName("hourlyAggregationJob");
return new CompositeTrackWriter(
vesselTrackBulkWriter,
abnormalTrackWriter,
"hourly",
hourlyTrackCache
);
}
//
// Step 2: Grid Hourly Summary
//
@Bean
public Step gridHourlySummaryStep() {
return new StepBuilder("gridHourlySummaryStep", jobRepository)
@ -97,69 +140,6 @@ public class HourlyAggregationStepConfig {
.build();
}
@Bean
public Step areaHourlySummaryStep() {
return new StepBuilder("areaHourlySummaryStep", jobRepository)
.<String, HourlyAreaSummary>chunk(100, transactionManager)
.reader(hourlyAreaReader(null, null))
.processor(hourlyAreaProcessor())
.writer(hourlyAreaWriter(null, null))
.build();
}
@Bean
@StepScope
public JdbcCursorItemReader<VesselTrack.VesselKey> hourlyVesselKeyReader(
@Value("#{jobParameters['startTime']}") String startTime,
@Value("#{jobParameters['endTime']}") String endTime) {
LocalDateTime start = LocalDateTime.parse(startTime);
LocalDateTime end = LocalDateTime.parse(endTime);
String sql = """
SELECT DISTINCT sig_src_cd, target_id, date_trunc('hour', time_bucket) as hour_bucket
FROM signal.t_vessel_tracks_5min
WHERE time_bucket >= ? AND time_bucket < ?
ORDER BY sig_src_cd, target_id, hour_bucket
""";
return new JdbcCursorItemReaderBuilder<VesselTrack.VesselKey>()
.name("hourlyVesselKeyReader")
.dataSource(queryDataSource)
.sql(sql)
.preparedStatementSetter(ps -> {
ps.setTimestamp(1, java.sql.Timestamp.valueOf(start));
ps.setTimestamp(2, java.sql.Timestamp.valueOf(end));
})
.rowMapper((rs, rowNum) -> new VesselTrack.VesselKey(
rs.getString("sig_src_cd"),
rs.getString("target_id"),
rs.getObject("hour_bucket", LocalDateTime.class)
))
.build();
}
@Bean
public ItemProcessor<VesselTrack.VesselKey, VesselTrack> hourlyTrackItemProcessor() {
return new HourlyTrackProcessor(queryDataSource, new JdbcTemplate(queryDataSource));
}
@Bean
public ItemWriter<VesselTrack> hourlyTrackWriter() {
return items -> {
List<VesselTrack> tracks = new ArrayList<>();
for (VesselTrack track : items) {
if (track != null) {
tracks.add(track);
}
}
if (!tracks.isEmpty()) {
vesselTrackBulkWriter.writeHourlyTracks(tracks);
}
};
}
// Grid summary reader
@Bean
@StepScope
public JdbcCursorItemReader<Integer> hourlyGridReader(
@ -190,13 +170,10 @@ public class HourlyAggregationStepConfig {
@Bean
public ItemProcessor<Integer, HourlyGridSummary> hourlyGridProcessor() {
return new ItemProcessor<Integer, HourlyGridSummary>() {
@Override
public HourlyGridSummary process(Integer haeguNo) throws Exception {
return haeguNo -> {
HourlyGridSummary summary = new HourlyGridSummary();
summary.haeguNo = haeguNo;
return summary;
}
};
}
@ -222,12 +199,11 @@ public class HourlyAggregationStepConfig {
SELECT
haegu_no,
?::timestamp as time_bucket,
COUNT(DISTINCT sig_src_cd || '_' || target_id) as total_vessels,
COUNT(DISTINCT mmsi) as total_vessels,
SUM(distance_nm) as total_distance_nm,
AVG(avg_speed) as avg_speed,
jsonb_agg(DISTINCT jsonb_build_object(
'sig_src_cd', sig_src_cd,
'target_id', target_id,
'mmsi', mmsi,
'distance_nm', distance_nm,
'avg_speed', avg_speed
)) as vessel_list,
@ -250,7 +226,20 @@ public class HourlyAggregationStepConfig {
};
}
// Area summary reader
//
// Step 3: Area Hourly Summary
//
@Bean
public Step areaHourlySummaryStep() {
return new StepBuilder("areaHourlySummaryStep", jobRepository)
.<String, HourlyAreaSummary>chunk(100, transactionManager)
.reader(hourlyAreaReader(null, null))
.processor(hourlyAreaProcessor())
.writer(hourlyAreaWriter(null, null))
.build();
}
@Bean
@StepScope
public JdbcCursorItemReader<String> hourlyAreaReader(
@ -281,13 +270,10 @@ public class HourlyAggregationStepConfig {
@Bean
public ItemProcessor<String, HourlyAreaSummary> hourlyAreaProcessor() {
return new ItemProcessor<String, HourlyAreaSummary>() {
@Override
public HourlyAreaSummary process(String areaId) throws Exception {
return areaId -> {
HourlyAreaSummary summary = new HourlyAreaSummary();
summary.areaId = areaId;
return summary;
}
};
}
@ -313,12 +299,11 @@ public class HourlyAggregationStepConfig {
SELECT
area_id,
?::timestamp as time_bucket,
COUNT(DISTINCT sig_src_cd || '_' || target_id) as total_vessels,
COUNT(DISTINCT mmsi) as total_vessels,
SUM(distance_nm) as total_distance_nm,
AVG(avg_speed) as avg_speed,
jsonb_agg(DISTINCT jsonb_build_object(
'sig_src_cd', sig_src_cd,
'target_id', target_id,
'mmsi', mmsi,
'distance_nm', distance_nm,
'avg_speed', avg_speed
)) as vessel_list,
@ -341,27 +326,6 @@ public class HourlyAggregationStepConfig {
};
}
// 비정상 궤적 검출 관련 정의
@Bean
public ItemProcessor<VesselTrack.VesselKey, AbnormalDetectionResult> hourlyTrackProcessorWithAbnormalDetection() {
return new HourlyTrackProcessorWithAbnormalDetection(
hourlyTrackItemProcessor(),
abnormalTrackDetector,
queryDataSource
);
}
@Bean
public ItemWriter<AbnormalDetectionResult> hourlyCompositeTrackWriter() {
// Job 이름 직접 설정
abnormalTrackWriter.setJobName("hourlyAggregationJob");
return new CompositeTrackWriter(
vesselTrackBulkWriter,
abnormalTrackWriter,
"hourly"
);
}
// Summary 클래스들
public static class HourlyGridSummary {
public Integer haeguNo;

파일 보기

@ -1,178 +0,0 @@
package gc.mda.signal_batch.batch.job;
import gc.mda.signal_batch.domain.vessel.model.VesselData;
import gc.mda.signal_batch.domain.vessel.model.VesselLatestPosition;
import gc.mda.signal_batch.batch.processor.LatestPositionProcessor;
import gc.mda.signal_batch.batch.reader.InMemoryVesselDataReader;
import gc.mda.signal_batch.batch.reader.PartitionedReader;
import gc.mda.signal_batch.batch.reader.VesselDataReader;
import gc.mda.signal_batch.batch.writer.UpsertWriter;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.core.partition.support.TaskExecutorPartitionHandler;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.step.builder.StepBuilder;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.batch.item.ItemReader;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.core.task.TaskExecutor;
import org.springframework.retry.RetryPolicy;
import org.springframework.retry.backoff.BackOffPolicy;
import org.springframework.retry.backoff.ExponentialBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.transaction.PlatformTransactionManager;
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.util.HashMap;
import java.util.Map;
@Slf4j
@Configuration
@Profile("!query") // query 프로파일에서는 배치 작업 비활성화
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
public class LatestPositionStepConfig {
private final JobRepository jobRepository;
private final PlatformTransactionManager queryTransactionManager;
private final LatestPositionProcessor latestPositionProcessor;
private final UpsertWriter upsertWriter;
private final PartitionedReader partitionedReader;
private final ApplicationContext applicationContext;
private final TaskExecutor batchTaskExecutor;
private final TaskExecutor partitionTaskExecutor;
public LatestPositionStepConfig(
JobRepository jobRepository,
@Qualifier("queryTransactionManager") PlatformTransactionManager queryTransactionManager,
LatestPositionProcessor latestPositionProcessor,
UpsertWriter upsertWriter,
PartitionedReader partitionedReader,
ApplicationContext applicationContext,
@Qualifier("batchTaskExecutor") TaskExecutor batchTaskExecutor,
@Qualifier("partitionTaskExecutor") TaskExecutor partitionTaskExecutor) {
this.jobRepository = jobRepository;
this.queryTransactionManager = queryTransactionManager;
this.latestPositionProcessor = latestPositionProcessor;
this.upsertWriter = upsertWriter;
this.partitionedReader = partitionedReader;
this.applicationContext = applicationContext;
this.batchTaskExecutor = batchTaskExecutor;
this.partitionTaskExecutor = partitionTaskExecutor;
}
@Bean
public Step updateLatestPositionStep() {
// InMemoryVesselDataReader를 ApplicationContext에서 가져옴
InMemoryVesselDataReader inMemoryReader = applicationContext.getBean(InMemoryVesselDataReader.class);
return new StepBuilder("updateLatestPositionStep", jobRepository)
.<VesselData, VesselLatestPosition>chunk(10000, queryTransactionManager)
.reader(inMemoryReader) // 메모리 기반 Reader 사용
.processor(latestPositionProcessor.processor())
.writer(upsertWriter.latestPositionWriter())
.faultTolerant()
.retryLimit(3)
.retry(org.springframework.dao.CannotAcquireLockException.class)
.skipLimit(1000)
.skip(org.springframework.dao.EmptyResultDataAccessException.class)
.skip(Exception.class)
.build();
}
// 메모리 기반 Reader 사용으로 제거
// @Bean
// @StepScope
// public ItemReader<VesselData> defaultVesselDataReader() { ... }
@Bean
public Step partitionedLatestPositionStep() {
return new StepBuilder("partitionedLatestPositionStep", jobRepository)
.partitioner("latestPositionPartitioner", dayPartitioner(null))
.partitionHandler(latestPositionPartitionHandler())
.build();
}
@Bean
public TaskExecutorPartitionHandler latestPositionPartitionHandler() {
TaskExecutorPartitionHandler handler = new TaskExecutorPartitionHandler();
handler.setTaskExecutor(partitionTaskExecutor);
handler.setStep(latestPositionSlaveStep());
handler.setGridSize(24);
return handler;
}
@Bean
public Step latestPositionSlaveStep() {
return new StepBuilder("latestPositionSlaveStep", jobRepository)
.<VesselData, VesselLatestPosition>chunk(3000, queryTransactionManager)
.reader(slaveVesselDataReader(null, null, null))
.processor(slaveLatestPositionProcessor())
.writer(upsertWriter.latestPositionWriter())
.faultTolerant()
.retryPolicy(retryPolicy())
.backOffPolicy(exponentialBackOffPolicy())
.skipLimit(50)
.skip(Exception.class)
.noRollback(org.springframework.dao.DuplicateKeyException.class)
.build();
}
@Bean
@StepScope
public ItemReader<VesselData> slaveVesselDataReader(
@Value("#{stepExecutionContext['startTime']}") String startTime,
@Value("#{stepExecutionContext['endTime']}") String endTime,
@Value("#{stepExecutionContext['partition']}") String partition) {
// ApplicationContext에서 VesselDataReader를 가져와서 사용
VesselDataReader reader = applicationContext.getBean(VesselDataReader.class);
return reader.vesselLatestPositionReader(
LocalDateTime.parse(startTime),
LocalDateTime.parse(endTime),
partition
);
}
@Bean
@StepScope
public ItemProcessor<VesselData, VesselLatestPosition> slaveLatestPositionProcessor() {
return latestPositionProcessor.processor();
}
@Bean
@StepScope
public org.springframework.batch.core.partition.support.Partitioner dayPartitioner(
@Value("#{jobParameters['processingDate']}") String processingDateStr) {
LocalDate processingDate = processingDateStr != null ? LocalDate.parse(processingDateStr) : null;
return partitionedReader.dayPartitioner(processingDate);
}
@Bean
public RetryPolicy retryPolicy() {
Map<Class<? extends Throwable>, Boolean> retryableExceptions = new HashMap<>();
retryableExceptions.put(org.springframework.dao.CannotAcquireLockException.class, true);
retryableExceptions.put(org.springframework.dao.DataAccessException.class, true);
SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy(3, retryableExceptions);
return retryPolicy;
}
@Bean
public BackOffPolicy exponentialBackOffPolicy() {
ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
backOffPolicy.setInitialInterval(1000); // 1초
backOffPolicy.setMaxInterval(10000); // 최대 10초
backOffPolicy.setMultiplier(2.0); // 2배씩 증가
return backOffPolicy;
}
}
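
Editor's note: in the slave step above, SimpleRetryPolicy (3 attempts) plus ExponentialBackOffPolicy (1s initial, x2, 10s cap) means at most two waits between the three attempts, 1s then 2s; the 10s cap only matters at higher attempt limits. A self-contained sketch of the same semantics using RetryTemplate, where upsert() is a placeholder for the real write:

import java.util.Map;
import org.springframework.dao.CannotAcquireLockException;
import org.springframework.retry.backoff.ExponentialBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

public class RetryDemo {
    public static void main(String[] args) {
        RetryTemplate template = new RetryTemplate();
        template.setRetryPolicy(new SimpleRetryPolicy(3,
                Map.of(CannotAcquireLockException.class, true)));

        ExponentialBackOffPolicy backOff = new ExponentialBackOffPolicy();
        backOff.setInitialInterval(1000);  // first wait: 1s
        backOff.setMultiplier(2.0);        // then 2s, 4s, ...
        backOff.setMaxInterval(10000);     // never wait longer than 10s
        template.setBackOffPolicy(backOff);

        // execute() retries the callback on CannotAcquireLockException,
        // sleeping per the back-off policy between attempts.
        Integer rows = template.execute(ctx -> upsert());
        System.out.println("affected rows: " + rows);
    }

    private static Integer upsert() {
        return 1; // placeholder for the real UPSERT
    }
}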

파일 보기

@ -1,350 +0,0 @@
package gc.mda.signal_batch.batch.job;
import gc.mda.signal_batch.batch.processor.AccumulatingTileProcessor;
import gc.mda.signal_batch.domain.gis.model.TileStatistics;
import gc.mda.signal_batch.domain.vessel.model.VesselData;
import gc.mda.signal_batch.batch.processor.TileAggregationProcessor;
import gc.mda.signal_batch.batch.reader.InMemoryVesselDataReader;
import gc.mda.signal_batch.batch.reader.PartitionedReader;
import gc.mda.signal_batch.batch.reader.VesselDataReader;
import gc.mda.signal_batch.batch.writer.OptimizedBulkInsertWriter;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.core.partition.support.TaskExecutorPartitionHandler;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.step.builder.StepBuilder;
import org.springframework.batch.item.Chunk;
import org.springframework.batch.item.ItemWriter;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.support.CompositeItemProcessor;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.core.task.TaskExecutor;
import org.springframework.transaction.PlatformTransactionManager;
import java.time.LocalDateTime;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
@Slf4j
@Configuration
@Profile("!query") // query 프로파일에서는 배치 작업 비활성화
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
public class TileAggregationStepConfig {
private final JobRepository jobRepository;
private final PlatformTransactionManager queryTransactionManager;
private final VesselDataReader vesselDataReader;
private final TileAggregationProcessor tileAggregationProcessor;
private final AccumulatingTileProcessor accumulatingTileProcessor;
private final OptimizedBulkInsertWriter optimizedBulkInsertWriter;
private final PartitionedReader partitionedReader;
private final ApplicationContext applicationContext;
private final TaskExecutor batchTaskExecutor;
private final TaskExecutor partitionTaskExecutor;
public TileAggregationStepConfig(
JobRepository jobRepository,
@Qualifier("queryTransactionManager") PlatformTransactionManager queryTransactionManager,
VesselDataReader vesselDataReader,
TileAggregationProcessor tileAggregationProcessor,
AccumulatingTileProcessor accumulatingTileProcessor,
OptimizedBulkInsertWriter optimizedBulkInsertWriter,
PartitionedReader partitionedReader,
ApplicationContext applicationContext,
@Qualifier("batchTaskExecutor") TaskExecutor batchTaskExecutor,
@Qualifier("partitionTaskExecutor") TaskExecutor partitionTaskExecutor) {
this.jobRepository = jobRepository;
this.queryTransactionManager = queryTransactionManager;
this.vesselDataReader = vesselDataReader;
this.tileAggregationProcessor = tileAggregationProcessor;
this.accumulatingTileProcessor = accumulatingTileProcessor;
this.optimizedBulkInsertWriter = optimizedBulkInsertWriter;
this.partitionedReader = partitionedReader;
this.applicationContext = applicationContext;
this.batchTaskExecutor = batchTaskExecutor;
this.partitionTaskExecutor = partitionTaskExecutor;
}
@Bean
public Step aggregateTileStatisticsStep() {
// InMemoryVesselDataReader를 ApplicationContext에서 가져옴
InMemoryVesselDataReader inMemoryReader = applicationContext.getBean(InMemoryVesselDataReader.class);
return new StepBuilder("aggregateTileStatisticsStep", jobRepository)
.<VesselData, TileStatistics>chunk(50000, queryTransactionManager)
.reader(inMemoryReader) // 메모리 기반 Reader 사용
.processor(accumulatingTileProcessor)
.writer(new AccumulatedTileWriter())
.listener(tileAggregationStepListener())
.faultTolerant()
.skipLimit(1000)
.skip(Exception.class)
.build();
}
@Bean
@StepScope
public ItemReader<VesselData> tileDataReader(
@Value("#{jobParameters['startTime']}") String startTimeStr,
@Value("#{jobParameters['endTime']}") String endTimeStr) {
return new ItemReader<VesselData>() {
private ItemReader<VesselData> delegate;
private boolean initialized = false;
@Override
public VesselData read() throws Exception {
if (!initialized) {
LocalDateTime startTime = startTimeStr != null ? LocalDateTime.parse(startTimeStr) : null;
LocalDateTime endTime = endTimeStr != null ? LocalDateTime.parse(endTimeStr) : null;
log.info("Creating tileDataReader with startTime: {}, endTime: {}", startTime, endTime);
// 기존 reader가 남아 있으면 close
if (delegate != null) {
try {
((org.springframework.batch.item.ItemStream) delegate).close();
} catch (Exception e) {
log.debug("Failed to close previous reader: {}", e.getMessage());
}
}
// 최신 위치만 사용
delegate = vesselDataReader.vesselLatestPositionReader(startTime, endTime, null);
((org.springframework.batch.item.ItemStream) delegate).open(
org.springframework.batch.core.scope.context.StepSynchronizationManager
.getContext().getStepExecution().getExecutionContext());
initialized = true;
}
VesselData data = delegate.read();
// Reader 종료 시 close
if (data == null && delegate != null) {
try {
((org.springframework.batch.item.ItemStream) delegate).close();
delegate = null;
initialized = false;
} catch (Exception e) {
log.debug("Failed to close reader on completion: {}", e.getMessage());
}
}
return data;
}
};
}
@Bean
public Step partitionedTileAggregationStep() {
return new StepBuilder("partitionedTileAggregationStep", jobRepository)
.partitioner("tileAggregationPartitioner", partitionedReader.dayPartitioner(null))
.partitionHandler(tileAggregationPartitionHandler())
.build();
}
@Bean
public TaskExecutorPartitionHandler tileAggregationPartitionHandler() {
TaskExecutorPartitionHandler handler = new TaskExecutorPartitionHandler();
handler.setTaskExecutor(partitionTaskExecutor);
handler.setStep(tileAggregationSlaveStep());
handler.setGridSize(24);
return handler;
}
@Bean
public Step tileAggregationSlaveStep() {
return new StepBuilder("tileAggregationSlaveStep", jobRepository)
.<List<VesselData>, List<TileStatistics>>chunk(50, queryTransactionManager)
.reader(slaveTileBatchVesselDataReader(null, null, null))
.processor(slaveTileProcessor(null, null))
.writer(optimizedBulkInsertWriter.tileStatisticsBulkWriter())
.faultTolerant()
.skipLimit(100)
.skip(Exception.class)
.build();
}
@Bean
@StepScope
public ItemReader<List<VesselData>> tileBatchVesselDataReader(
@Value("#{jobParameters['startTime']}") String startTimeStr,
@Value("#{jobParameters['endTime']}") String endTimeStr) {
LocalDateTime startTime = startTimeStr != null ? LocalDateTime.parse(startTimeStr) : null;
LocalDateTime endTime = endTimeStr != null ? LocalDateTime.parse(endTimeStr) : null;
return new ItemReader<List<VesselData>>() {
private ItemReader<VesselData> delegate = vesselDataReader.vesselDataPagingReader(startTime, endTime, null);
@Override
public List<VesselData> read() throws Exception {
List<VesselData> batch = new java.util.ArrayList<>();
for (int i = 0; i < 1000; i++) {
VesselData item = delegate.read();
if (item == null) {
break;
}
batch.add(item);
}
return batch.isEmpty() ? null : batch;
}
};
}
@Bean
@StepScope
public ItemReader<List<VesselData>> slaveTileBatchVesselDataReader(
@Value("#{stepExecutionContext['startTime']}") String startTime,
@Value("#{stepExecutionContext['endTime']}") String endTime,
@Value("#{stepExecutionContext['partition']}") String partition) {
return new ItemReader<List<VesselData>>() {
private ItemReader<VesselData> delegate = vesselDataReader.vesselDataPagingReader(
startTime != null ? LocalDateTime.parse(startTime) : null,
endTime != null ? LocalDateTime.parse(endTime) : null,
partition
);
@Override
public List<VesselData> read() throws Exception {
List<VesselData> batch = new java.util.ArrayList<>();
for (int i = 0; i < 1000; i++) {
VesselData item = delegate.read();
if (item == null) {
break;
}
batch.add(item);
}
return batch.isEmpty() ? null : batch;
}
};
}
@Bean
@StepScope
public ItemProcessor<List<VesselData>, List<TileStatistics>> slaveTileProcessor(
@Value("#{jobParameters['tileLevel']}") Integer tileLevel,
@Value("#{jobParameters['timeBucketMinutes']}") Integer timeBucketMinutes) {
final int bucketMinutes = (timeBucketMinutes != null) ? timeBucketMinutes : 5;
// 여러 레벨 처리를 위한 복합 프로세서
if (tileLevel == null) {
CompositeItemProcessor<List<VesselData>, List<TileStatistics>> compositeProcessor =
new CompositeItemProcessor<>();
compositeProcessor.setDelegates(Arrays.asList(
tileAggregationProcessor.batchProcessor(0, bucketMinutes),
tileAggregationProcessor.batchProcessor(1, bucketMinutes),
tileAggregationProcessor.batchProcessor(2, bucketMinutes)
));
return compositeProcessor;
} else {
return tileAggregationProcessor.batchProcessor(tileLevel, bucketMinutes);
}
}
@Bean
@StepScope
public ItemProcessor<VesselData, List<TileStatistics>> batchTileProcessor(
@Value("#{jobParameters['tileLevel']}") Integer tileLevel,
@Value("#{jobParameters['timeBucketMinutes']}") Integer timeBucketMinutes) {
final int level = (tileLevel != null) ? tileLevel : 1;
final int bucketMinutes = (timeBucketMinutes != null) ? timeBucketMinutes : 5;
return new ItemProcessor<VesselData, List<TileStatistics>>() {
private final List<VesselData> buffer = new ArrayList<>(1000);
@Override
public List<TileStatistics> process(VesselData item) throws Exception {
if (item == null || !item.isValidPosition()) {
return null;
}
buffer.add(item);
// 버퍼가 차면 처리
if (buffer.size() >= 1000) {
List<TileStatistics> result = tileAggregationProcessor
.batchProcessor(level, bucketMinutes)
.process(new ArrayList<>(buffer));
buffer.clear();
return result;
}
return null;
}
};
}
/**
* 누적된 결과를 한 번에 처리하는 Writer
*/
private class AccumulatedTileWriter implements ItemWriter<TileStatistics> {
@Override
public void write(Chunk<? extends TileStatistics> chunk) throws Exception {
// 대부분의 아이템은 null일 것임 (processor에서 null 반환)
// 실제 데이터는 Step 종료 시 처리됨
log.debug("AccumulatedTileWriter called with {} items", chunk.size());
}
}
/**
* Step 종료 시 누적된 데이터를 처리하는 리스너
*/
@Bean
@StepScope
public org.springframework.batch.core.StepExecutionListener tileAggregationStepListener() {
return new org.springframework.batch.core.StepExecutionListener() {
@Override
public void beforeStep(org.springframework.batch.core.StepExecution stepExecution) {
// beforeStep에서는 특별한 처리 없음
}
@Override
public org.springframework.batch.core.ExitStatus afterStep(org.springframework.batch.core.StepExecution stepExecution) {
log.info("[TileAggregationStepListener] afterStep called");
try {
// AccumulatingTileProcessor에서 직접 결과 가져오기
List<TileStatistics> accumulatedTiles = accumulatingTileProcessor.getAccumulatedResults();
log.info("[TileAggregationStepListener] Retrieved {} tiles from processor",
accumulatedTiles != null ? accumulatedTiles.size() : 0);
if (accumulatedTiles != null && !accumulatedTiles.isEmpty()) {
log.info("Writing {} accumulated tiles to database", accumulatedTiles.size());
// Bulk Writer를 사용하여 한 번에 저장
ItemWriter<List<TileStatistics>> writer = optimizedBulkInsertWriter.tileStatisticsBulkWriter();
Chunk<List<TileStatistics>> chunk = new Chunk<>();
chunk.add(accumulatedTiles);
writer.write(chunk);
log.info("Successfully wrote all accumulated tiles");
stepExecution.setWriteCount(accumulatedTiles.size());
} else {
log.warn("[TileAggregationStepListener] No tiles to write!");
}
return stepExecution.getExitStatus();
} catch (Exception e) {
log.error("Failed to write accumulated tiles", e);
return org.springframework.batch.core.ExitStatus.FAILED;
}
}
};
}
}
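
Editor's note: the no-op AccumulatedTileWriter above only makes sense together with the accumulate-then-flush contract of AccumulatingTileProcessor, whose source is not in this hunk. A sketch of the assumed contract follows; the no-arg TileStatistics constructor, the accumulate() method, and the key derivation are illustrative, not taken from this diff.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.batch.item.ItemProcessor;

// Assumed contract: process() always returns null (so the chunk writer is a no-op)
// and the merged result is pulled once via getAccumulatedResults() in afterStep().
public class AccumulatingTileProcessorSketch implements ItemProcessor<VesselData, TileStatistics> {

    private final Map<Long, TileStatistics> byTile = new ConcurrentHashMap<>();

    @Override
    public TileStatistics process(VesselData item) {
        if (item == null || !item.isValidPosition()) {
            return null;
        }
        long tileKey = computeTileKey(item);
        byTile.computeIfAbsent(tileKey, k -> new TileStatistics()).accumulate(item);
        return null; // nothing reaches the writer
    }

    public List<TileStatistics> getAccumulatedResults() {
        return new ArrayList<>(byTile.values());
    }

    private long computeTileKey(VesselData item) {
        return 0L; // placeholder: real key derives from tile level + lon/lat
    }
}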

파일 보기

@ -1,78 +0,0 @@
package gc.mda.signal_batch.batch.job;
import gc.mda.signal_batch.global.util.SharedDataJobListener;
import gc.mda.signal_batch.global.util.VesselDataHolder;
import gc.mda.signal_batch.batch.listener.JobCompletionListener;
import gc.mda.signal_batch.batch.listener.PerformanceOptimizationListener;
import gc.mda.signal_batch.batch.reader.InMemoryVesselDataReader;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParametersValidator;
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.core.job.DefaultJobParametersValidator;
import org.springframework.batch.core.job.builder.JobBuilder;
import org.springframework.batch.core.launch.support.RunIdIncrementer;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
@Slf4j
@Configuration
@Profile("!query") // query 프로파일에서는 배치 작업 비활성화
@RequiredArgsConstructor
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
public class VesselAggregationJobConfig {
private final JobRepository jobRepository;
private final LatestPositionStepConfig latestPositionStepConfig;
private final TileAggregationStepConfig tileAggregationStepConfig;
private final AreaStatisticsStepConfig areaStatisticsStepConfig;
private final JobCompletionListener jobCompletionListener;
private final SharedDataJobListener sharedDataJobListener;
private final VesselDataHolder vesselDataHolder;
private final PerformanceOptimizationListener performanceOptimizationListener;
@Bean
public Job vesselAggregationJob() {
return new JobBuilder("vesselAggregationJob", jobRepository)
.incrementer(new RunIdIncrementer())
.validator(jobParametersValidator())
.listener(jobCompletionListener)
.listener(sharedDataJobListener) // 데이터 로드 리스너 추가
.listener(performanceOptimizationListener) // 성능 최적화 리스너 추가
.start(latestPositionStepConfig.updateLatestPositionStep())
.next(tileAggregationStepConfig.aggregateTileStatisticsStep())
.next(areaStatisticsStepConfig.aggregateAreaStatisticsStep())
.build();
}
@Bean
@StepScope
public InMemoryVesselDataReader inMemoryVesselDataReader() {
return new InMemoryVesselDataReader(vesselDataHolder);
}
@Bean
public Job vesselDailyPositionJob() {
return new JobBuilder("vesselDailyPositionJob", jobRepository)
.incrementer(new RunIdIncrementer())
.listener(jobCompletionListener)
.start(latestPositionStepConfig.partitionedLatestPositionStep())
.next(tileAggregationStepConfig.partitionedTileAggregationStep())
.next(areaStatisticsStepConfig.partitionedAreaStatisticsStep())
.build();
}
@Bean
public JobParametersValidator jobParametersValidator() {
DefaultJobParametersValidator validator = new DefaultJobParametersValidator();
validator.setRequiredKeys(new String[]{"startTime", "endTime"});
validator.setOptionalKeys(new String[]{"executionTime", "processingDate",
"tileLevel", "partitionCount"});
return validator;
}
}

파일 보기

@ -29,10 +29,6 @@ public class VesselBatchScheduler {
@Qualifier("asyncJobLauncher")
private JobLauncher jobLauncher;
@Autowired
@Qualifier("vesselAggregationJob")
private Job vesselAggregationJob;
@Autowired
@Qualifier("vesselTrackAggregationJob")
private Job vesselTrackAggregationJob;
@ -45,55 +41,41 @@ public class VesselBatchScheduler {
@Qualifier("dailyAggregationJob")
private Job dailyAggregationJob;
@Autowired(required = false)
@Qualifier("aisTargetImportJob")
private Job aisTargetImportJob;
@Value("${vessel.batch.scheduler.enabled:true}")
private boolean schedulerEnabled;
@Value("${vessel.batch.scheduler.incremental.delay-minutes:2}")
private int incrementalDelayMinutes;
@Value("${vessel.batch.abnormal-detection.enabled:true}")
private boolean abnormalDetectionEnabled;
/**
* 5분 단위 증분 처리 (3분 지연으로 데이터 수집 대기)
* 5분마다 실행 (0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55분)
* S&P AIS API 수집 (매 1분, 15초 시점)
* 캐시에 최신 위치 저장 → 5분 집계 Job에서 활용
*/
@Scheduled(cron = "0 3,8,13,18,23,28,33,38,43,48,53,58 * * * *")
public void runIncrementalAggregation() {
if (!schedulerEnabled) {
log.debug("Scheduler is disabled");
@Scheduled(cron = "15 * * * * *")
public void runAisTargetImport() {
if (!schedulerEnabled || aisTargetImportJob == null) {
return;
}
try {
// 3분 전 데이터를 처리 (데이터 수집 지연 고려)
LocalDateTime now = LocalDateTime.now();
LocalDateTime endTime = now.minusMinutes(incrementalDelayMinutes);
LocalDateTime startTime = endTime.minusMinutes(5);
log.info("Starting incremental aggregation for period: {} to {}", startTime, endTime);
JobParameters params = new JobParametersBuilder()
.addString("startTime", startTime.withNano(0).toString())
.addString("endTime", endTime.withNano(0).toString())
.addString("jobType", "INCREMENTAL")
.addString("timeBucketMinutes", "5") // 5분 단위 집계
// executionTime 제거 - startTime/endTime만으로 고유성 보장
.addString("executionTime", now.toString())
.toJobParameters();
JobExecution execution = jobLauncher.run(vesselAggregationJob, params);
log.info("Incremental aggregation started with execution ID: {}", execution.getId());
JobExecution execution = jobLauncher.run(aisTargetImportJob, params);
log.debug("[AIS Import] 실행 ID: {}", execution.getId());
} catch (JobExecutionAlreadyRunningException e) {
log.warn("Previous incremental job is still running, skipping this execution");
log.warn("[AIS Import] 이전 Job 실행 중, 스킵");
} catch (Exception e) {
log.error("Failed to start incremental aggregation", e);
// 중복 오류인 경우 경고로만 처리
if (e.getMessage().contains("중복된 키") || e.getMessage().contains("duplicate key")) {
log.warn("Duplicate key detected, job may have already processed this time bucket");
}
log.error("[AIS Import] Job 실행 실패", e);
}
}
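// Editor's note: the hunk above interleaves the removed runIncrementalAggregation with the
// new runAisTargetImport, which makes the surviving method hard to read. The reconstruction
// below is assembled from the visible added lines only (not authoritative); it reuses this
// class's fields and imports, and assumes only executionTime is needed for JobInstance
// uniqueness, since the reader derives its own window from app.ais-api.since-seconds.
//
// @Scheduled(cron = "15 * * * * *") // second 15 of every minute
// public void runAisTargetImport() {
//     if (!schedulerEnabled || aisTargetImportJob == null) {
//         return;
//     }
//     try {
//         JobParameters params = new JobParametersBuilder()
//                 .addString("executionTime", LocalDateTime.now().toString())
//                 .toJobParameters();
//         JobExecution execution = jobLauncher.run(aisTargetImportJob, params);
//         log.debug("[AIS Import] 실행 ID: {}", execution.getId());
//     } catch (JobExecutionAlreadyRunningException e) {
//         log.warn("[AIS Import] 이전 Job 실행 중, 스킵");
//     } catch (Exception e) {
//         log.error("[AIS Import] Job 실행 실패", e);
//     }
// }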
//
/**
* 5분 단위 궤적 집계 처리 (4분 지연으로 위치 집계 이후 실행)
@ -118,7 +100,7 @@ public class VesselBatchScheduler {
try {
// 4분 전 데이터를 처리 (위치 집계 완료 후)
LocalDateTime now = LocalDateTime.now();
LocalDateTime endTime = now.minusMinutes(incrementalDelayMinutes + 1); // 3+1=4분 지연
LocalDateTime endTime = now.minusMinutes(4); // 4분 지연 (캐시 기반이므로 고정)
LocalDateTime startTime = endTime.minusMinutes(5);
// 5분 버킷 계산
@ -180,7 +162,7 @@ public class VesselBatchScheduler {
JobParameters params = new JobParametersBuilder()
.addString("startTime", startTime.toString())
.addString("endTime", endTime.toString())
.addString("timeBucket", "hourly")
.addString("timeBucket", startTime.toString())
.addString("executionTime", now.toString())
.addString("enableAbnormalDetection", String.valueOf(abnormalDetectionEnabled))
.toJobParameters();

파일 보기

@ -1,194 +0,0 @@
package gc.mda.signal_batch.batch.job;
import gc.mda.signal_batch.domain.vessel.dto.RecentVesselPositionDto;
import gc.mda.signal_batch.domain.vessel.service.VesselLatestPositionCache;
import gc.mda.signal_batch.global.util.ShipKindCodeConverter;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Profile;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.RowMapper;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import java.math.BigDecimal;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.time.LocalDateTime;
import java.util.List;
/**
* 선박 최신 위치 캐시 갱신 스케줄러
*
* 실행 주기: 1분마다 (매분 0초)
* 데이터 소스: Collect DB (sig_test 테이블)
* 처리 방식: 읽기 전용 (DB에 쓰기 없음, 캐시만 업데이트)
*
* 동작 흐름:
* 1. 매분 0초에 실행
* 2. 최근 2분치 데이터를 DB에서 조회 (수집 지연 고려)
* 3. DISTINCT ON으로 선박별 최신 위치만 추출
* 4. 캐시에 업데이트
*
* 기존 배치와의 관계:
* - 기존 5분 배치는 그대로 유지 (DB 저장)
* - 스케줄러는 캐시만 관리 (읽기 전용)
* - 충돌 없음
*/
@Slf4j
@Component
@Profile("!query") // query 프로파일에서는 캐시 갱신 스케줄러 비활성화
@RequiredArgsConstructor
@ConditionalOnProperty(name = "vessel.batch.cache.latest-position.enabled", havingValue = "true", matchIfMissing = false)
public class VesselPositionCacheRefreshScheduler {
@Qualifier("collectJdbcTemplate")
private final JdbcTemplate collectJdbcTemplate;
private final VesselLatestPositionCache cache;
@Value("${vessel.batch.cache.latest-position.refresh-interval-minutes:2}")
private int refreshIntervalMinutes;
private volatile boolean isRunning = false;
/**
* 1분마다 캐시 갱신
* 매분 0초에 실행 (예: 10:00:00, 10:01:00, 10:02:00...)
*/
@Scheduled(cron = "0 * * * * *")
public void refreshCache() {
// 동시 실행 방지
if (isRunning) {
log.warn("Previous cache refresh is still running, skipping this execution");
return;
}
isRunning = true;
long startTime = System.currentTimeMillis();
try {
// 최근 N분치 데이터 조회 (수집 지연 고려)
List<RecentVesselPositionDto> positions = fetchLatestPositions();
if (positions.isEmpty()) {
log.warn("No vessel positions found in last {} minutes", refreshIntervalMinutes);
return;
}
// 캐시 업데이트
cache.putAll(positions);
long duration = System.currentTimeMillis() - startTime;
log.info("Cache refresh completed in {}ms (fetched {} positions from DB)",
duration, positions.size());
// 캐시 통계 로깅 (5분마다만)
if (LocalDateTime.now().getMinute() % 5 == 0) {
logCacheStats();
}
} catch (Exception e) {
log.error("Failed to refresh cache", e);
} finally {
isRunning = false;
}
}
/**
* DB에서 최신 위치 데이터 조회
*/
private List<RecentVesselPositionDto> fetchLatestPositions() {
LocalDateTime endTime = LocalDateTime.now();
LocalDateTime startTime = endTime.minusMinutes(refreshIntervalMinutes);
String sql = """
SELECT DISTINCT ON (sig_src_cd, target_id)
sig_src_cd,
target_id,
lon,
lat,
sog,
cog,
ship_nm,
ship_ty,
message_time as last_update
FROM signal.sig_test
WHERE message_time >= ? AND message_time < ?
AND sig_src_cd != '000005'
AND length(target_id) > 5
AND lat BETWEEN -90 AND 90
AND lon BETWEEN -180 AND 180
ORDER BY sig_src_cd, target_id, message_time DESC
""";
try {
return collectJdbcTemplate.query(sql,
new Object[]{Timestamp.valueOf(startTime), Timestamp.valueOf(endTime)},
new VesselPositionRowMapper());
} catch (Exception e) {
log.error("Failed to fetch positions from DB", e);
return List.of();
}
}
/**
* 캐시 통계 로깅
*/
private void logCacheStats() {
try {
VesselLatestPositionCache.CacheStats stats = cache.getStats();
log.info("Cache Stats - Size: {}, HitRate: {}%, MissRate: {}%, Hits: {}, Misses: {}",
stats.currentSize(),
String.format("%.2f", stats.hitRate()),
String.format("%.2f", stats.missRate()),
stats.hitCount(),
stats.missCount());
} catch (Exception e) {
log.warn("Failed to get cache stats", e);
}
}
/**
* RowMapper 구현
*/
private static class VesselPositionRowMapper implements RowMapper<RecentVesselPositionDto> {
@Override
public RecentVesselPositionDto mapRow(ResultSet rs, int rowNum) throws SQLException {
String sigSrcCd = rs.getString("sig_src_cd");
String targetId = rs.getString("target_id");
String shipTy = rs.getString("ship_ty");
// shipKindCode 계산
String shipKindCode = ShipKindCodeConverter.getShipKindCode(sigSrcCd, shipTy);
// nationalCode 계산
String nationalCode;
if ("000001".equals(sigSrcCd) && targetId != null && targetId.length() >= 3) {
nationalCode = targetId.substring(0, 3);
} else {
nationalCode = "440"; // 기본값
}
return RecentVesselPositionDto.builder()
.sigSrcCd(sigSrcCd)
.targetId(targetId)
.lon(rs.getDouble("lon"))
.lat(rs.getDouble("lat"))
.sog(rs.getBigDecimal("sog"))
.cog(rs.getBigDecimal("cog"))
.shipNm(rs.getString("ship_nm"))
.shipTy(shipTy)
.shipKindCode(shipKindCode)
.nationalCode(nationalCode)
.lastUpdate(rs.getTimestamp("last_update") != null ?
rs.getTimestamp("last_update").toLocalDateTime() : null)
.build();
}
}
}

파일 보기

@ -0,0 +1,267 @@
package gc.mda.signal_batch.batch.job;
import gc.mda.signal_batch.batch.reader.AisTargetCacheManager;
import gc.mda.signal_batch.domain.vessel.model.AisTargetEntity;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.step.builder.StepBuilder;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.transaction.PlatformTransactionManager;
import javax.sql.DataSource;
import java.sql.Timestamp;
import java.time.LocalDateTime;
import java.util.*;
/**
 * HourlyJob 편승: 정적 정보 COALESCE + CDC 방식 t_vessel_static INSERT
 *
 * 전략:
 * 1. COALESCE: 캐시의 직전 1시간 데이터를 필드별 lastNonEmpty로 조합
 * 2. CDC: 이전 저장 레코드와 비교, 변경 시에만 INSERT
 *
 * 조회: WHERE mmsi=? AND time_bucket <= ? ORDER BY time_bucket DESC LIMIT 1
 */
@Slf4j
@Configuration
@Profile("!query")
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
public class VesselStaticStepConfig {
private final JobRepository jobRepository;
private final DataSource queryDataSource;
private final PlatformTransactionManager transactionManager;
private final AisTargetCacheManager cacheManager;
public VesselStaticStepConfig(
JobRepository jobRepository,
@Qualifier("queryDataSource") DataSource queryDataSource,
@Qualifier("queryTransactionManager") PlatformTransactionManager transactionManager,
AisTargetCacheManager cacheManager) {
this.jobRepository = jobRepository;
this.queryDataSource = queryDataSource;
this.transactionManager = transactionManager;
this.cacheManager = cacheManager;
}
@Bean
public Step vesselStaticSyncStep() {
return new StepBuilder("vesselStaticSyncStep", jobRepository)
.tasklet((contribution, chunkContext) -> {
long stepStart = System.currentTimeMillis();
// 1. 캐시에서 전체 데이터 조회 → MMSI별 그룹화
Collection<AisTargetEntity> allEntities = cacheManager.getAllValues();
if (allEntities.isEmpty()) {
log.debug("캐시에 데이터 없음 — t_vessel_static 동기화 스킵");
return org.springframework.batch.repeat.RepeatStatus.FINISHED;
}
// 시간 버킷: 현재 시각의 정각
LocalDateTime hourBucket = LocalDateTime.now()
.withMinute(0).withSecond(0).withNano(0);
// MMSI별 최신 데이터 (필드별 COALESCE)
Map<String, AisTargetEntity> coalesced = coalesceByMmsi(allEntities);
JdbcTemplate jdbcTemplate = new JdbcTemplate(queryDataSource);
Timestamp hourBucketTs = Timestamp.valueOf(hourBucket);
// 2. CDC: bulk SELECT로 이전 레코드 전체 조회 (N+1 → 1회)
Map<String, Map<String, Object>> prevRecords = bulkFetchPreviousRecords(
jdbcTemplate, hourBucketTs);
log.info("t_vessel_static CDC 비교 시작 — 현재: {} 선박, 이전: {} 레코드",
coalesced.size(), prevRecords.size());
// 3. 인메모리 비교 후 변경 시에만 INSERT
String insertSql = """
INSERT INTO signal.t_vessel_static (
mmsi, time_bucket, imo, name, callsign,
vessel_type, extra_info, length, width, draught,
destination, eta, status, signal_kind_code, class_type
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT (mmsi, time_bucket) DO UPDATE SET
imo = EXCLUDED.imo,
name = EXCLUDED.name,
callsign = EXCLUDED.callsign,
vessel_type = EXCLUDED.vessel_type,
extra_info = EXCLUDED.extra_info,
length = EXCLUDED.length,
width = EXCLUDED.width,
draught = EXCLUDED.draught,
destination = EXCLUDED.destination,
eta = EXCLUDED.eta,
status = EXCLUDED.status,
signal_kind_code = EXCLUDED.signal_kind_code,
class_type = EXCLUDED.class_type
""";
int inserted = 0;
int skipped = 0;
List<Object[]> batchArgs = new ArrayList<>();
for (Map.Entry<String, AisTargetEntity> entry : coalesced.entrySet()) {
String mmsi = entry.getKey();
AisTargetEntity current = entry.getValue();
Map<String, Object> prev = prevRecords.get(mmsi);
boolean changed = (prev == null) || hasStaticInfoChanged(current, prev);
if (changed) {
Timestamp etaTs = current.getEta() != null
? Timestamp.from(current.getEta().toInstant())
: null;
batchArgs.add(new Object[] {
mmsi, hourBucketTs,
current.getImo(), current.getName(), current.getCallsign(),
current.getVesselType(), current.getExtraInfo(),
current.getLength(), current.getWidth(), current.getDraught(),
current.getDestination(), etaTs, current.getStatus(),
current.getSignalKindCode(), current.getClassType()
});
inserted++;
} else {
skipped++;
}
}
if (!batchArgs.isEmpty()) {
jdbcTemplate.batchUpdate(insertSql, batchArgs);
}
long elapsed = System.currentTimeMillis() - stepStart;
log.info("t_vessel_static 동기화 완료: 총 {} 선박, INSERT {} 건, CDC 스킵 {} 건 ({}ms)",
coalesced.size(), inserted, skipped, elapsed);
return org.springframework.batch.repeat.RepeatStatus.FINISHED;
}, transactionManager)
.build();
}
/**
* Bulk-fetch the previous record for every MMSI with one DISTINCT ON (mmsi) query.
* Optimizes N+1 individual SELECTs into a single bulk SELECT.
*/
private Map<String, Map<String, Object>> bulkFetchPreviousRecords(
JdbcTemplate jdbcTemplate, Timestamp hourBucketTs) {
String sql = """
SELECT DISTINCT ON (mmsi)
mmsi, imo, name, callsign, vessel_type, extra_info,
length, width, draught, destination, status,
signal_kind_code, class_type
FROM signal.t_vessel_static
WHERE time_bucket <= ?
ORDER BY mmsi, time_bucket DESC
""";
Map<String, Map<String, Object>> result = new HashMap<>();
jdbcTemplate.query(sql, rs -> {
Map<String, Object> row = new HashMap<>();
row.put("imo", rs.getObject("imo"));
row.put("name", rs.getString("name"));
row.put("callsign", rs.getString("callsign"));
row.put("vessel_type", rs.getString("vessel_type"));
row.put("extra_info", rs.getString("extra_info"));
row.put("length", rs.getObject("length"));
row.put("width", rs.getObject("width"));
row.put("draught", rs.getObject("draught"));
row.put("destination", rs.getString("destination"));
row.put("status", rs.getString("status"));
row.put("signal_kind_code", rs.getString("signal_kind_code"));
row.put("class_type", rs.getString("class_type"));
result.put(rs.getString("mmsi"), row);
}, hourBucketTs);
return result;
}
/**
* Per-field COALESCE by MMSI: combines the last non-empty value of each field.
*/
private Map<String, AisTargetEntity> coalesceByMmsi(Collection<AisTargetEntity> entities) {
Map<String, AisTargetEntity> result = new LinkedHashMap<>();
for (AisTargetEntity entity : entities) {
if (entity.getMmsi() == null) continue;
result.merge(entity.getMmsi(), entity, (existing, incoming) -> {
// Newest timestamp wins; per field, prefer the incoming non-empty value
return AisTargetEntity.builder()
.mmsi(existing.getMmsi())
.imo(coalesce(incoming.getImo(), existing.getImo()))
.name(coalesceStr(incoming.getName(), existing.getName()))
.callsign(coalesceStr(incoming.getCallsign(), existing.getCallsign()))
.vesselType(coalesceStr(incoming.getVesselType(), existing.getVesselType()))
.extraInfo(coalesceStr(incoming.getExtraInfo(), existing.getExtraInfo()))
.length(coalesce(incoming.getLength(), existing.getLength()))
.width(coalesce(incoming.getWidth(), existing.getWidth()))
.draught(coalesce(incoming.getDraught(), existing.getDraught()))
.destination(coalesceStr(incoming.getDestination(), existing.getDestination()))
.eta(coalesce(incoming.getEta(), existing.getEta()))
.status(coalesceStr(incoming.getStatus(), existing.getStatus()))
.signalKindCode(coalesceStr(incoming.getSignalKindCode(), existing.getSignalKindCode()))
.classType(coalesceStr(incoming.getClassType(), existing.getClassType()))
.messageTimestamp(coalesce(incoming.getMessageTimestamp(), existing.getMessageTimestamp()))
.build();
});
}
return result;
}
/**
* CDC: compare static info against the previous record for changes.
*/
private boolean hasStaticInfoChanged(AisTargetEntity current, Map<String, Object> prev) {
return !Objects.equals(current.getImo(), toLong(prev.get("imo")))
|| !Objects.equals(current.getName(), prev.get("name"))
|| !Objects.equals(current.getCallsign(), prev.get("callsign"))
|| !Objects.equals(current.getVesselType(), prev.get("vessel_type"))
|| !Objects.equals(current.getExtraInfo(), prev.get("extra_info"))
|| !Objects.equals(current.getLength(), toInt(prev.get("length")))
|| !Objects.equals(current.getWidth(), toInt(prev.get("width")))
|| !Objects.equals(current.getDraught(), toDouble(prev.get("draught")))
|| !Objects.equals(current.getDestination(), prev.get("destination"))
|| !Objects.equals(current.getStatus(), prev.get("status"))
|| !Objects.equals(current.getSignalKindCode(), prev.get("signal_kind_code"))
|| !Objects.equals(current.getClassType(), prev.get("class_type"));
}
private <T> T coalesce(T a, T b) {
return a != null ? a : b;
}
private String coalesceStr(String a, String b) {
return (a != null && !a.isBlank()) ? a : b;
}
private Long toLong(Object val) {
if (val == null) return null;
if (val instanceof Long l) return l;
if (val instanceof Number n) return n.longValue();
return null;
}
private Integer toInt(Object val) {
if (val == null) return null;
if (val instanceof Integer i) return i;
if (val instanceof Number n) return n.intValue();
return null;
}
private Double toDouble(Object val) {
if (val == null) return null;
if (val instanceof Double d) return d;
if (val instanceof Number n) return n.doubleValue();
return null;
}
}
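A minimal, self-contained sketch of the per-field COALESCE plus CDC-skip idea above. The `Vessel` record, its field set, and the sample values are hypothetical stand-ins for the project's `AisTargetEntity`, meant only to show the merge and skip decisions:

```java
import java.util.*;

public class CoalesceCdcSketch {
    // Hypothetical, deliberately tiny stand-in for AisTargetEntity.
    record Vessel(String mmsi, String name, String destination) {}

    // Prefer the newer value when it is non-empty, otherwise keep the older one.
    static String coalesceStr(String incoming, String existing) {
        return (incoming != null && !incoming.isBlank()) ? incoming : existing;
    }

    public static void main(String[] args) {
        // Two cache entries for the same MMSI, newest last (illustrative values).
        List<Vessel> updates = List.of(
                new Vessel("440000001", "EVER GLORY", null),
                new Vessel("440000001", null, "BUSAN"));

        Map<String, Vessel> coalesced = new LinkedHashMap<>();
        for (Vessel v : updates) {
            coalesced.merge(v.mmsi(), v, (existing, incoming) -> new Vessel(
                    existing.mmsi(),
                    coalesceStr(incoming.name(), existing.name()),
                    coalesceStr(incoming.destination(), existing.destination())));
        }
        // name survives from the first update, destination from the second
        System.out.println(coalesced.get("440000001"));
        // -> Vessel[mmsi=440000001, name=EVER GLORY, destination=BUSAN]

        // CDC: only INSERT when the coalesced record differs from the stored one.
        Vessel previous = new Vessel("440000001", "EVER GLORY", "BUSAN");
        boolean changed = !Objects.equals(coalesced.get("440000001"), previous);
        System.out.println(changed ? "INSERT new row" : "skip (unchanged)"); // skip (unchanged)
    }
}
```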

파일 보기

@ -1,6 +1,6 @@
package gc.mda.signal_batch.batch.job;
import gc.mda.signal_batch.global.util.VesselTrackDataJobListener;
import gc.mda.signal_batch.batch.listener.CacheBasedTrackJobListener;
import gc.mda.signal_batch.batch.listener.JobCompletionListener;
import gc.mda.signal_batch.batch.listener.PerformanceOptimizationListener;
import lombok.RequiredArgsConstructor;
@ -25,8 +25,9 @@ public class VesselTrackAggregationJobConfig {
private final JobRepository jobRepository;
private final VesselTrackStepConfig vesselTrackStepConfig;
private final AisPositionSyncStepConfig aisPositionSyncStepConfig;
private final JobCompletionListener jobCompletionListener;
private final VesselTrackDataJobListener vesselTrackDataJobListener;
private final CacheBasedTrackJobListener cacheBasedTrackJobListener;
private final PerformanceOptimizationListener performanceOptimizationListener;
@Bean
@ -35,11 +36,12 @@ public class VesselTrackAggregationJobConfig {
.incrementer(new RunIdIncrementer())
.validator(trackJobParametersValidator())
.listener(jobCompletionListener)
.listener(vesselTrackDataJobListener)
.listener(cacheBasedTrackJobListener)
.listener(performanceOptimizationListener) // add the performance optimization listener
.start(vesselTrackStepConfig.vesselTrackStep())
.next(vesselTrackStepConfig.gridTrackSummaryStep())
.next(vesselTrackStepConfig.areaTrackSummaryStep())
.next(aisPositionSyncStepConfig.aisPositionSyncStep())
.build();
}

파일 보기

@ -7,8 +7,9 @@ import gc.mda.signal_batch.domain.vessel.service.VesselPreviousBucketCache;
import gc.mda.signal_batch.batch.processor.VesselTrackProcessor;
import gc.mda.signal_batch.batch.processor.AbnormalTrackDetector;
import gc.mda.signal_batch.batch.processor.AbnormalTrackDetector.AbnormalDetectionResult;
import gc.mda.signal_batch.batch.reader.InMemoryVesselTrackDataReader;
import gc.mda.signal_batch.global.util.VesselTrackDataHolder;
import gc.mda.signal_batch.batch.reader.AisTargetCacheManager;
import gc.mda.signal_batch.batch.reader.CacheBasedVesselTrackDataReader;
import gc.mda.signal_batch.batch.reader.FiveMinTrackCache;
import gc.mda.signal_batch.global.util.TrackClippingUtils;
import gc.mda.signal_batch.batch.writer.VesselTrackBulkWriter;
import gc.mda.signal_batch.batch.writer.AbnormalTrackWriter;
@ -34,11 +35,14 @@ import org.springframework.context.annotation.Profile;
import javax.sql.DataSource;
import java.sql.Timestamp;
import java.time.Duration;
import java.time.LocalDateTime;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;
import jakarta.annotation.PostConstruct;
@ -53,37 +57,40 @@ public class VesselTrackStepConfig {
private final PlatformTransactionManager transactionManager;
private final DataSource queryDataSource;
private final VesselTrackProcessor vesselTrackProcessor;
private final VesselTrackDataHolder vesselTrackDataHolder;
private final AisTargetCacheManager aisTargetCacheManager;
private final VesselTrackBulkWriter vesselTrackBulkWriter;
private final TrackClippingUtils trackClippingUtils;
private final AbnormalTrackDetector abnormalTrackDetector;
private final AbnormalTrackWriter abnormalTrackWriter;
private final VesselPreviousBucketCache previousBucketCache;
private final FiveMinTrackCache fiveMinTrackCache;
// End positions for the bucket currently being processed (used for cache updates)
private final Map<String, VesselBucketPositionDto> currentBucketEndPositions = new HashMap<>();
private final Map<String, VesselBucketPositionDto> currentBucketEndPositions = new ConcurrentHashMap<>();
public VesselTrackStepConfig(
JobRepository jobRepository,
PlatformTransactionManager transactionManager,
@Qualifier("queryDataSource") DataSource queryDataSource,
VesselTrackProcessor vesselTrackProcessor,
VesselTrackDataHolder vesselTrackDataHolder,
AisTargetCacheManager aisTargetCacheManager,
VesselTrackBulkWriter vesselTrackBulkWriter,
TrackClippingUtils trackClippingUtils,
AbnormalTrackDetector abnormalTrackDetector,
AbnormalTrackWriter abnormalTrackWriter,
VesselPreviousBucketCache previousBucketCache) {
VesselPreviousBucketCache previousBucketCache,
FiveMinTrackCache fiveMinTrackCache) {
this.jobRepository = jobRepository;
this.transactionManager = transactionManager;
this.queryDataSource = queryDataSource;
this.vesselTrackProcessor = vesselTrackProcessor;
this.vesselTrackDataHolder = vesselTrackDataHolder;
this.aisTargetCacheManager = aisTargetCacheManager;
this.vesselTrackBulkWriter = vesselTrackBulkWriter;
this.trackClippingUtils = trackClippingUtils;
this.abnormalTrackDetector = abnormalTrackDetector;
this.abnormalTrackWriter = abnormalTrackWriter;
this.previousBucketCache = previousBucketCache;
this.fiveMinTrackCache = fiveMinTrackCache;
}
@Value("${vessel.batch.chunk-size:1000}")
@ -108,8 +115,8 @@ public class VesselTrackStepConfig {
@Bean
@StepScope
public InMemoryVesselTrackDataReader trackDataReader() {
return new InMemoryVesselTrackDataReader(vesselTrackDataHolder, chunkSize);
public CacheBasedVesselTrackDataReader trackDataReader() {
return new CacheBasedVesselTrackDataReader(aisTargetCacheManager);
}
@Bean
@ -124,7 +131,7 @@ public class VesselTrackStepConfig {
// 2. Look up previous-bucket positions (cache + DB fallback)
List<String> vesselKeys = tracks.stream()
.map(track -> track.getSigSrcCd() + ":" + track.getTargetId())
.map(VesselTrack::getMmsi)
.distinct()
.collect(Collectors.toList());
@ -133,15 +140,21 @@ public class VesselTrackStepConfig {
// 3. Hardened abnormal-track filtering (within-bucket checks + bucket-jump detection)
List<VesselTrack> filteredTracks = new ArrayList<>();
LocalDateTime staleCutoff = LocalDateTime.now().toLocalDate().atStartOfDay();
for (VesselTrack track : tracks) {
// Detected stale data is diverted to abnormal tracks (dropped from normal aggregation)
if (track.getTimeBucket() != null && track.getTimeBucket().isBefore(staleCutoff)) {
saveStaleAbnormalTrack(track);
continue;
}
boolean isAbnormal = false;
String abnormalReason = "";
// Vessel/aircraft distinction
boolean isAircraft = "000019".equals(track.getSigSrcCd());
double speedLimit = isAircraft ? 300.0 : 100.0; // aircraft 300, vessels 100
double distanceLimit = isAircraft ? 30.0 : 10.0; // aircraft 30 nm, vessels 10 nm
// The S&P AIS API is vessel-only; no aircraft distinction needed
double speedLimit = 100.0;
double distanceLimit = 10.0;
// Check the within-bucket average speed
if (track.getAvgSpeed() != null && track.getAvgSpeed().doubleValue() >= speedLimit) {
@ -155,9 +168,9 @@ public class VesselTrackStepConfig {
abnormalReason = "within_bucket_distance";
}
// Bucket-to-bucket jump detection (NEW!)
// Bucket-to-bucket jump detection
if (!isAbnormal && track.getStartPosition() != null) {
String vesselKey = track.getSigSrcCd() + ":" + track.getTargetId();
String vesselKey = track.getMmsi();
VesselBucketPositionDto prevPosition = previousPositions.get(vesselKey);
if (prevPosition != null) {
@ -166,10 +179,9 @@ public class VesselTrackStepConfig {
track.getStartPosition().getLat(), track.getStartPosition().getLon()
);
// Satellite AIS gets a 2-hour window, regular signals 15 minutes
boolean isSatellite = "000016".equals(track.getSigSrcCd());
double maxGapMinutes = isSatellite ? 120.0 : 15.0;
double expectedMaxDistance = isAircraft ? (maxGapMinutes / 60.0 * 300.0) : (maxGapMinutes / 60.0 * 50.0);
// S&P AIS API: satellite vs terrestrial is indistinguishable, so allow a conservative 30-minute gap
double maxGapMinutes = 30.0;
double expectedMaxDistance = maxGapMinutes / 60.0 * 50.0;
if (jumpDistance > expectedMaxDistance) {
isAbnormal = true;
@ -196,10 +208,8 @@ public class VesselTrackStepConfig {
// Store the end positions of normal tracks (used for cache updates)
if (track.getEndPosition() != null) {
String vesselKey = track.getSigSrcCd() + ":" + track.getTargetId();
currentBucketEndPositions.put(vesselKey, VesselBucketPositionDto.builder()
.sigSrcCd(track.getSigSrcCd())
.targetId(track.getTargetId())
currentBucketEndPositions.put(track.getMmsi(), VesselBucketPositionDto.builder()
.mmsi(track.getMmsi())
.endLon(track.getEndPosition().getLon())
.endLat(track.getEndPosition().getLat())
.endTime(track.getEndPosition().getTime())
@ -232,15 +242,14 @@ public class VesselTrackStepConfig {
abnormalTrackWriter.setJobName("vesselTrackAggregationJob");
List<AbnormalTrackDetector.AbnormalSegment> segments = new ArrayList<>();
Map<String, Object> details = new HashMap<>();
Map<String, Object> details = new ConcurrentHashMap<>();
details.put("avgSpeed", track.getAvgSpeed());
details.put("distanceNm", track.getDistanceNm());
details.put("timeBucket", track.getTimeBucket());
// Vessel/aircraft distinction
boolean isAircraft = "000019".equals(track.getSigSrcCd());
double speedLimit = isAircraft ? 300.0 : 100.0;
double distanceLimit = isAircraft ? 30.0 : 10.0;
// S&P AIS API는 선박 전용
double speedLimit = 100.0;
double distanceLimit = 10.0;
// Decide the abnormal type
String abnormalType = "abnormal_5min";
@ -283,6 +292,70 @@ public class VesselTrackStepConfig {
}
}
/**
* Diverts stale data (time_bucket before today) into the abnormal tracks.
* - time_bucket: overridden to the current 5-minute bucket (guarantees the partition exists)
* - abnormal_type: stale_timestamp
* - details: original time_bucket, delay (minutes/hours), speed/distance
*/
private void saveStaleAbnormalTrack(VesselTrack track) {
LocalDateTime now = LocalDateTime.now();
LocalDateTime currentBucket = now.withSecond(0).withNano(0)
.minusMinutes(now.getMinute() % 5);
LocalDateTime originalTimeBucket = track.getTimeBucket();
long delayMinutes = Duration.between(originalTimeBucket, now).toMinutes();
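// Illustrative: now = 10:17:42 -> currentBucket = 10:15:00; an original
// time_bucket of yesterday 23:55 gives delayMinutes = 622 (about 10 hours).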
VesselTrack staleTrack = VesselTrack.builder()
.mmsi(track.getMmsi())
.timeBucket(currentBucket)
.trackGeom(track.getTrackGeom())
.distanceNm(track.getDistanceNm())
.avgSpeed(track.getAvgSpeed())
.maxSpeed(track.getMaxSpeed())
.pointCount(track.getPointCount())
.startPosition(track.getStartPosition())
.endPosition(track.getEndPosition())
.build();
log.info("Stale 데이터 비정상 전환: MMSI={}, 원본={}, 현재={}, 지연={}분",
track.getMmsi(), originalTimeBucket, currentBucket, delayMinutes);
Map<String, Object> details = new HashMap<>();
details.put("originalTimeBucket", originalTimeBucket.toString());
details.put("currentTimeBucket", currentBucket.toString());
details.put("delayMinutes", delayMinutes);
details.put("delayHours", delayMinutes / 60);
if (track.getAvgSpeed() != null) details.put("avgSpeed", track.getAvgSpeed());
if (track.getDistanceNm() != null) details.put("distanceNm", track.getDistanceNm());
if (track.getPointCount() != null) details.put("pointCount", track.getPointCount());
List<AbnormalTrackDetector.AbnormalSegment> segments = List.of(
AbnormalTrackDetector.AbnormalSegment.builder()
.type("stale_timestamp")
.startIndex(0)
.endIndex(track.getPointCount() != null
? Math.max(track.getPointCount() - 1, 0) : 0)
.actualValue(delayMinutes)
.threshold(0)
.description(String.format("Stale 데이터: 원본 %s, 지연 %d분 (%d시간)",
originalTimeBucket, delayMinutes, delayMinutes / 60))
.details(details)
.build());
AbnormalDetectionResult result = AbnormalDetectionResult.builder()
.originalTrack(staleTrack)
.correctedTrack(null)
.abnormalSegments(segments)
.hasAbnormalities(true)
.build();
try {
abnormalTrackWriter.write(new Chunk<>(List.of(result)));
} catch (Exception e) {
log.error("Stale 비정상 궤적 저장 실패: MMSI={}", track.getMmsi(), e);
}
}
// CompositeItemWriter writes to 3 tables at once
@Bean
@StepScope
@ -304,7 +377,18 @@ public class VesselTrackStepConfig {
// 1. Persist to the DB through the existing writer
vesselTrackBulkWriter.write(chunk);
// 2. Update the cache (current-bucket end positions)
// 2. Store in FiveMinTrackCache (for in-memory hourly merging)
int cachedCount = 0;
for (List<VesselTrack> trackGroup : chunk.getItems()) {
fiveMinTrackCache.putAll(trackGroup);
cachedCount += trackGroup.size();
}
if (cachedCount > 0) {
log.debug("FiveMinTrackCache 저장: {} 건 (총 캐시: {} 건)",
cachedCount, fiveMinTrackCache.size());
}
// 3. Update the previous-bucket end-position cache
if (!currentBucketEndPositions.isEmpty()) {
List<VesselBucketPositionDto> positions = new ArrayList<>(currentBucketEndPositions.values());
previousBucketCache.putAll(positions);
@ -339,17 +423,16 @@ public class VesselTrackStepConfig {
String sql = """
INSERT INTO signal.t_grid_vessel_tracks (
haegu_no, sig_src_cd, target_id, time_bucket,
haegu_no, mmsi, time_bucket,
distance_nm, avg_speed, point_count, entry_time, exit_time
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT (haegu_no, sig_src_cd, target_id, time_bucket) DO NOTHING
) VALUES (?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT (haegu_no, mmsi, time_bucket) DO NOTHING
""";
List<Object[]> args = allClippedTracks.stream()
.map(track -> new Object[] {
track.getHaeguNo(),
track.getSigSrcCd(),
track.getTargetId(),
track.getMmsi(),
Timestamp.valueOf(track.getTimeBucket()),
track.getDistanceNm(),
track.getAvgSpeed(),
@ -385,17 +468,16 @@ public class VesselTrackStepConfig {
String sql = """
INSERT INTO signal.t_area_vessel_tracks (
area_id, sig_src_cd, target_id, time_bucket,
area_id, mmsi, time_bucket,
distance_nm, avg_speed, point_count, metrics
) VALUES (?, ?, ?, ?, ?, ?, ?, ?::jsonb)
ON CONFLICT (area_id, sig_src_cd, target_id, time_bucket) DO NOTHING
) VALUES (?, ?, ?, ?, ?, ?, ?::jsonb)
ON CONFLICT (area_id, mmsi, time_bucket) DO NOTHING
""";
List<Object[]> args = allClippedTracks.stream()
.map(track -> new Object[] {
track.getAreaId(),
track.getSigSrcCd(),
track.getTargetId(),
track.getMmsi(),
Timestamp.valueOf(track.getTimeBucket()),
track.getDistanceNm(),
track.getAvgSpeed(),
@ -422,12 +504,11 @@ public class VesselTrackStepConfig {
SELECT
haegu_no,
time_bucket,
COUNT(DISTINCT CONCAT(sig_src_cd, '_', target_id)) as total_vessels,
COUNT(DISTINCT mmsi) as total_vessels,
SUM(distance_nm) as total_distance_nm,
AVG(avg_speed) as avg_speed,
jsonb_agg(jsonb_build_object(
'sig_src_cd', sig_src_cd,
'target_id', target_id,
'mmsi', mmsi,
'distance_nm', distance_nm,
'avg_speed', avg_speed
)) as vessel_list
@ -466,12 +547,11 @@ public class VesselTrackStepConfig {
SELECT
area_id,
time_bucket,
COUNT(DISTINCT CONCAT(sig_src_cd, '_', target_id)) as total_vessels,
COUNT(DISTINCT mmsi) as total_vessels,
SUM(distance_nm) as total_distance_nm,
AVG(avg_speed) as avg_speed,
jsonb_agg(jsonb_build_object(
'sig_src_cd', sig_src_cd,
'target_id', target_id,
'mmsi', mmsi,
'distance_nm', distance_nm,
'avg_speed', avg_speed
)) as vessel_list
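A self-contained sketch of the bucket-jump test earlier in this step config, where a track is flagged when the jump from the previous bucket's end position exceeds `maxGapMinutes / 60.0 * 50.0` nautical miles. The haversine helper and the coordinates are illustrative assumptions, not the project's own distance utility; the 30-minute gap, 50 kn figure, and Earth radius constant match the code in this commit:

```java
public class BucketJumpSketch {
    // Same Earth radius constant the detector uses, in nautical miles.
    static final double EARTH_RADIUS_NM = 3440.065;

    // Standard haversine great-circle distance in nm (not the project's util).
    static double haversineNm(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_NM * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        double maxGapMinutes = 30.0;                               // conservative S&P AIS gap
        double expectedMaxDistance = maxGapMinutes / 60.0 * 50.0;  // 25 nm at 50 kn

        // Previous bucket ended near Busan; current bucket starts ~62 nm away (illustrative).
        double jumpDistance = haversineNm(35.10, 129.04, 36.10, 129.40);
        System.out.printf("jump=%.1f nm, limit=%.1f nm, abnormal=%b%n",
                jumpDistance, expectedMaxDistance, jumpDistance > expectedMaxDistance);
    }
}
```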

파일 보기

@ -0,0 +1,52 @@
package gc.mda.signal_batch.batch.listener;
import gc.mda.signal_batch.domain.gis.cache.AreaBoundaryCache;
import gc.mda.signal_batch.domain.vessel.service.VesselPreviousBucketCache;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobExecutionListener;
import org.springframework.batch.core.annotation.AfterJob;
import org.springframework.batch.core.annotation.BeforeJob;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.stereotype.Component;
/**
* Cache-based track job listener.
*
* Replaces the old VesselTrackDataJobListener:
* - drops the collectDB data load (superseded by AisTargetCacheManager)
* - keeps the Area/Haegu boundary cache refresh
* - keeps the previous-bucket cache fallback-flag reset
*/
@Slf4j
@Component
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
@RequiredArgsConstructor
public class CacheBasedTrackJobListener implements JobExecutionListener {
private final AreaBoundaryCache areaBoundaryCache;
private final VesselPreviousBucketCache previousBucketCache;
@BeforeJob
public void beforeJob(JobExecution jobExecution) {
// Refresh the Area/Haegu boundary cache
areaBoundaryCache.refresh();
log.info("Refreshed area boundary cache");
// Reset the previous-bucket cache fallback flag
previousBucketCache.resetFallbackFlag();
log.info("Reset previous bucket cache fallback flag");
log.info("Cache-based track job started: startTime={}, endTime={}",
jobExecution.getJobParameters().getString("startTime"),
jobExecution.getJobParameters().getString("endTime"));
}
@AfterJob
public void afterJob(JobExecution jobExecution) {
// Log DB-lookup statistics
previousBucketCache.logJobStatistics();
log.debug("Cache-based track job completed");
}
}

파일 보기

@ -29,12 +29,11 @@ public class AbnormalTrackDetector {
// Physical limits (set very leniently)
@SuppressWarnings("unused")
private static final double VESSEL_PHYSICAL_LIMIT_KNOTS = 100.0; // physical limit for vessels
@SuppressWarnings("unused")
private static final double AIRCRAFT_PHYSICAL_LIMIT_KNOTS = 600.0; // physical limit for aircraft
// Aircraft physical limit: unused since the S&P AIS API migration (vessel-only)
// Thresholds that catch only blatant abnormalities
private static final double VESSEL_ABNORMAL_SPEED_KNOTS = 500.0; // abnormal speed for vessels (very lenient)
private static final double AIRCRAFT_ABNORMAL_SPEED_KNOTS = 800.0; // abnormal speed for aircraft
// Aircraft abnormal speed: unused since the S&P AIS API migration (vessel-only)
// Distance thresholds by duration (square-root scaling applied)
private static final double BASE_DISTANCE_5MIN_NM = 20.0; // baseline distance per 5 minutes (doubled)
@ -46,7 +45,7 @@ public class AbnormalTrackDetector {
private static final long MIN_GAP_FOR_RELAXED_CHECK = 30; // gaps of 30+ minutes get a relaxed check
private static final double EARTH_RADIUS_NM = 3440.065;
private static final String AIRCRAFT_SIG_SRC_CD = "000019";
// The S&P AIS API is vessel-only; no aircraft distinction needed
@Data
@Builder
@ -130,9 +129,8 @@ public class AbnormalTrackDetector {
return buildNormalResult(track);
}
// Hourly/Daily: exclude with vessel/aircraft-specific limits
boolean isAircraft = AIRCRAFT_SIG_SRC_CD.equals(track.getSigSrcCd());
double speedLimit = isAircraft ? 300.0 : 100.0; // aircraft 300, vessels 100
// The S&P AIS API is vessel-only; apply the vessel speed limit
double speedLimit = 100.0;
boolean shouldExclude = abnormalSegments.stream()
.anyMatch(seg -> seg.getActualValue() > speedLimit);
@ -185,8 +183,7 @@ public class AbnormalTrackDetector {
private List<AbnormalSegment> checkAggregatedMetricsLenient(VesselTrack track) {
List<AbnormalSegment> abnormalSegments = new ArrayList<>();
boolean isAircraft = AIRCRAFT_SIG_SRC_CD.equals(track.getSigSrcCd());
double speedLimit = isAircraft ? AIRCRAFT_ABNORMAL_SPEED_KNOTS : VESSEL_ABNORMAL_SPEED_KNOTS;
double speedLimit = VESSEL_ABNORMAL_SPEED_KNOTS;
// Only flag average speeds that are clearly abnormal
if (track.getAvgSpeed() != null && track.getAvgSpeed().doubleValue() > speedLimit) {
@ -259,8 +256,7 @@ public class AbnormalTrackDetector {
double timeScale = Math.sqrt(durationMinutes / 5.0);
double distanceThreshold = BASE_DISTANCE_5MIN_NM * timeScale * 3.0; // 3x slack
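// Worked example (illustrative one-hour duration): timeScale = sqrt(60 / 5) = sqrt(12) ~ 3.46,
// so distanceThreshold = 20.0 * 3.46 * 3.0 ~ 207.8 nm.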
boolean isAircraft = AIRCRAFT_SIG_SRC_CD.equals(currentTrack.getSigSrcCd());
double speedLimit = isAircraft ? AIRCRAFT_ABNORMAL_SPEED_KNOTS : VESSEL_ABNORMAL_SPEED_KNOTS;
double speedLimit = VESSEL_ABNORMAL_SPEED_KNOTS;
// Only flag very obvious abnormalities
if (impliedSpeed > speedLimit && distance > distanceThreshold) {
@ -345,9 +341,8 @@ public class AbnormalTrackDetector {
double impliedSpeed = (distance * 60.0) / durationMinutes;
// Hourly/Daily: handle vessels and aircraft separately
boolean isAircraft = AIRCRAFT_SIG_SRC_CD.equals(currentTrack.getSigSrcCd());
double speedLimit = isAircraft ? 300.0 : 100.0;
// The S&P AIS API is vessel-only; no aircraft distinction needed
double speedLimit = 100.0;
if (impliedSpeed > speedLimit) {
Map<String, Object> details = new HashMap<>();

파일 보기

@ -1,190 +0,0 @@
package gc.mda.signal_batch.batch.processor;
import gc.mda.signal_batch.domain.vessel.model.VesselData;
import gc.mda.signal_batch.batch.processor.AreaStatisticsProcessor.AreaStatistics;
import gc.mda.signal_batch.batch.processor.AreaStatisticsProcessor.VesselMovement;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.annotation.AfterStep;
import org.springframework.batch.core.annotation.BeforeStep;
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.stereotype.Component;
import java.math.BigDecimal;
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;
/**
* Accumulating processor for area statistics.
* Accumulates all data in memory, then aggregates in one pass when the step ends.
*/
@Slf4j
@Component
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
@StepScope
@RequiredArgsConstructor
public class AccumulatingAreaProcessor implements ItemProcessor<VesselData, AreaStatistics> {
private final AreaStatisticsProcessor areaStatisticsProcessor;
@Value("#{jobParameters['timeBucketMinutes']}")
private Integer timeBucketMinutes;
// Accumulate vessel data per area_id + time_bucket
private final Map<String, List<VesselData>> dataAccumulator = new ConcurrentHashMap<>();
// Processing statistics
private long processedCount = 0;
private long skippedCount = 0;
@BeforeStep
public void beforeStep(StepExecution stepExecution) {
int bucketMinutes = (timeBucketMinutes != null) ? timeBucketMinutes : 5;
log.info("AccumulatingAreaProcessor initialized with timeBucket: {} minutes", bucketMinutes);
dataAccumulator.clear();
processedCount = 0;
skippedCount = 0;
}
@Override
public AreaStatistics process(VesselData item) throws Exception {
if (!item.isValidPosition()) {
skippedCount++;
return null;
}
// Find the containing areas in memory
List<String> areaIds = areaStatisticsProcessor.findAreasForPointInMemory(
item.getLat(), item.getLon()
);
if (areaIds.isEmpty()) {
return null;
}
// Compute the time bucket
int bucketSize = timeBucketMinutes != null ? timeBucketMinutes : 5;
LocalDateTime bucket = item.getMessageTime()
.truncatedTo(ChronoUnit.MINUTES)
.withMinute((item.getMessageTime().getMinute() / bucketSize) * bucketSize);
// Accumulate the data for each area
for (String areaId : areaIds) {
String key = areaId + "||" + bucket.toString(); // delimiter changed
dataAccumulator.computeIfAbsent(key, k -> new ArrayList<>()).add(item);
}
processedCount++;
// Return null to suppress per-item output
return null;
}
@AfterStep
public void afterStep(StepExecution stepExecution) {
log.info("Processing accumulated data for {} area-timebucket combinations",
dataAccumulator.size());
log.info("Processed: {}, Skipped: {}", processedCount, skippedCount);
if (dataAccumulator.isEmpty()) {
return;
}
// Compute statistics from the accumulated data
List<AreaStatistics> allStatistics = new ArrayList<>();
dataAccumulator.forEach((key, vessels) -> {
String[] parts = key.split("\\|\\|", 2); // uses the || delimiter
if (parts.length != 2) {
log.error("Invalid key format: {}", key);
return;
}
String areaId = parts[0];
LocalDateTime timeBucket = LocalDateTime.parse(parts[1]);
AreaStatistics stats = new AreaStatistics(areaId, timeBucket);
Map<String, VesselMovement> vesselMovements = new HashMap<>();
// Compute movement info per vessel
Map<String, List<VesselData>> vesselGroups = new HashMap<>();
for (VesselData vessel : vessels) {
vesselGroups.computeIfAbsent(vessel.getVesselKey(), k -> new ArrayList<>())
.add(vessel);
}
vesselGroups.forEach((vesselKey, vesselDataList) -> {
// Sort chronologically
vesselDataList.sort(Comparator.comparing(VesselData::getMessageTime));
VesselMovement movement = new VesselMovement();
movement.setVesselKey(vesselKey);
movement.setEnterTime(vesselDataList.get(0).getMessageTime());
movement.setExitTime(vesselDataList.get(vesselDataList.size() - 1).getMessageTime());
movement.setPointCount(vesselDataList.size());
// Compute the average speed
double totalSpeed = 0;
int speedCount = 0;
for (VesselData vd : vesselDataList) {
if (vd.getSog() != null) {
totalSpeed += vd.getSog().doubleValue();
speedCount++;
}
}
if (speedCount > 0) {
movement.setAvgSpeed(BigDecimal.valueOf(totalSpeed / speedCount)
.setScale(2, BigDecimal.ROUND_HALF_UP));
} else {
movement.setAvgSpeed(BigDecimal.ZERO);
}
// Stationary vs transit (staying 10+ minutes counts as stationary)
long stayMinutes = ChronoUnit.MINUTES.between(
movement.getEnterTime(), movement.getExitTime()
);
if (stayMinutes > 10) {
stats.getStationaryVessels().put(vesselKey, movement);
} else {
stats.getTransitVessels().put(vesselKey, movement);
}
vesselMovements.put(vesselKey, movement);
});
// Finalize the statistics
stats.setVesselCount(vesselMovements.size());
stats.setInCount(vesselMovements.size()); // vessels that entered
stats.setOutCount(0); // logic needs improvement later
// Overall average speed
List<BigDecimal> allSpeeds = new ArrayList<>();
vesselMovements.values().stream()
.map(VesselMovement::getAvgSpeed)
.filter(Objects::nonNull)
.forEach(allSpeeds::add);
if (!allSpeeds.isEmpty()) {
BigDecimal totalSpeed = allSpeeds.stream()
.reduce(BigDecimal.ZERO, BigDecimal::add);
stats.setAvgSog(totalSpeed.divide(
BigDecimal.valueOf(allSpeeds.size()), 2, BigDecimal.ROUND_HALF_UP));
} else {
stats.setAvgSog(BigDecimal.ZERO);
}
allStatistics.add(stats);
});
// Store the results in the StepExecution context
stepExecution.getExecutionContext().put("areaStatistics", allStatistics);
log.info("Calculated statistics for {} areas", allStatistics.size());
}
}

파일 보기

@ -1,206 +0,0 @@
package gc.mda.signal_batch.batch.processor;
import gc.mda.signal_batch.domain.gis.model.TileStatistics;
import gc.mda.signal_batch.domain.vessel.model.VesselData;
import gc.mda.signal_batch.global.util.HaeguGeoUtils;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.annotation.AfterStep;
import org.springframework.batch.core.annotation.BeforeStep;
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.stereotype.Component;
import java.math.BigDecimal;
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;
/**
* Processor that accumulates all data before aggregating.
* Accumulates every record in memory while the step runs, then emits once when the step completes.
*/
@Slf4j
@Component
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
@StepScope
@RequiredArgsConstructor
public class AccumulatingTileProcessor implements ItemProcessor<VesselData, TileStatistics> {
private final HaeguGeoUtils geoUtils;
@Value("#{jobParameters['tileLevel']}")
private Integer tileLevel;
@Value("#{jobParameters['timeBucketMinutes']}")
private Integer timeBucketMinutes;
// Accumulator for whole-run aggregation
private final Map<String, TileStatistics> accumulator = new ConcurrentHashMap<>();
// Track processed records
private long processedCount = 0;
private long skippedCount = 0;
@BeforeStep
public void beforeStep(StepExecution stepExecution) {
int level = (tileLevel != null) ? tileLevel : 1;
int bucketMinutes = (timeBucketMinutes != null) ? timeBucketMinutes : 5;
log.info("Starting AccumulatingTileProcessor - tileLevel: {}, timeBucket: {} minutes",
level, bucketMinutes);
// Reset state
accumulator.clear();
processedCount = 0;
skippedCount = 0;
}
@Override
public TileStatistics process(VesselData item) throws Exception {
if (item == null || !item.isValidPosition()) {
skippedCount++;
return null;
}
processedCount++;
int level = (tileLevel != null) ? tileLevel : 1;
int bucketMinutes = (timeBucketMinutes != null) ? timeBucketMinutes : 5;
LocalDateTime bucket = item.getMessageTime()
.truncatedTo(ChronoUnit.MINUTES)
.withMinute((item.getMessageTime().getMinute() / bucketMinutes) * bucketMinutes);
// Level 0 (large haegu grid) processing
if (level >= 0) {
processLevel0(item, bucket);
}
// Level 1 (small haegu grid) processing
if (level >= 1) {
processLevel1(item, bucket);
}
// Log progress every 10,000 records
if (processedCount % 10000 == 0) {
log.debug("Processed {} records, accumulated {} tiles",
processedCount, accumulator.size());
}
// Return null - the actual output happens in AfterStep
return null;
}
private void processLevel0(VesselData item, LocalDateTime bucket) {
HaeguGeoUtils.HaeguTileInfo level0Info = geoUtils.getHaeguTileInfo(
item.getLat(), item.getLon(), 0
);
if (level0Info != null) {
String key = generateKey(level0Info.tileId, 0, bucket);
accumulator.compute(key, (k, existing) -> {
if (existing == null) {
existing = TileStatistics.builder()
.tileId(level0Info.tileId)
.tileLevel(0)
.timeBucket(bucket)
.uniqueVessels(new HashMap<>())
.totalPoints(0L)
.avgSog(BigDecimal.ZERO)
.maxSog(BigDecimal.ZERO)
.build();
}
existing.addVesselData(item);
return existing;
});
}
}
private void processLevel1(VesselData item, LocalDateTime bucket) {
HaeguGeoUtils.HaeguTileInfo level1Info = geoUtils.getHaeguTileInfo(
item.getLat(), item.getLon(), 1
);
if (level1Info != null && level1Info.sohaeguNo != null) {
String key = generateKey(level1Info.tileId, 1, bucket);
accumulator.compute(key, (k, existing) -> {
if (existing == null) {
existing = TileStatistics.builder()
.tileId(level1Info.tileId)
.tileLevel(1)
.timeBucket(bucket)
.uniqueVessels(new HashMap<>())
.totalPoints(0L)
.avgSog(BigDecimal.ZERO)
.maxSog(BigDecimal.ZERO)
.build();
}
existing.addVesselData(item);
return existing;
});
}
}
private String generateKey(String tileId, int tileLevel, LocalDateTime timeBucket) {
return String.format("%s|%d|%s", tileId, tileLevel, timeBucket);
}
@AfterStep
public void afterStep(StepExecution stepExecution) {
log.info("AccumulatingTileProcessor completed - processed: {}, skipped: {}, tiles: {}",
processedCount, skippedCount, accumulator.size());
// Compute densities
accumulator.values().forEach(this::calculateDensity);
// Store metrics
stepExecution.getExecutionContext().putLong("totalProcessed", processedCount);
stepExecution.getExecutionContext().putLong("totalSkipped", skippedCount);
stepExecution.getExecutionContext().putInt("totalTiles", accumulator.size());
// Writing to the DB from here would be wrong - a StepListener must handle it
log.info("Accumulated {} tiles ready for writing", accumulator.size());
}
private void calculateDensity(TileStatistics stats) {
if (stats.getVesselCount() == null || stats.getVesselCount() == 0) {
stats.setVesselDensity(BigDecimal.ZERO);
return;
}
double tileArea = geoUtils.getTileArea(stats.getTileId());
if (tileArea > 0) {
BigDecimal density = BigDecimal.valueOf(stats.getVesselCount())
.divide(BigDecimal.valueOf(tileArea), 6, BigDecimal.ROUND_HALF_UP);
stats.setVesselDensity(density);
} else {
stats.setVesselDensity(BigDecimal.ZERO);
}
}
/**
* Returns the accumulated results (for tests).
*/
public List<TileStatistics> getAccumulatedResults() {
log.info("[AccumulatingTileProcessor] getAccumulatedResults called - size: {}", accumulator.size());
return new ArrayList<>(accumulator.values());
}
/**
* Clears the accumulated data.
*/
public void clear() {
accumulator.clear();
processedCount = 0;
skippedCount = 0;
}
}

파일 보기

@ -0,0 +1,85 @@
package gc.mda.signal_batch.batch.processor;
import gc.mda.signal_batch.domain.vessel.dto.AisTargetDto;
import gc.mda.signal_batch.domain.vessel.model.AisTargetEntity;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.stereotype.Component;
import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;
/**
* Processor that converts AIS target DTOs to entities.
*
* - Parses timestamps (ISO 8601)
* - Filters out invalid records (MMSI, Lat, Lon required)
* - gc-signal-batch treats mmsi as a String
*/
@Slf4j
@Component
public class AisTargetDataProcessor implements ItemProcessor<AisTargetDto, AisTargetEntity> {
private static final DateTimeFormatter ISO_FORMATTER = DateTimeFormatter.ISO_DATE_TIME;
@Override
public AisTargetEntity process(AisTargetDto dto) {
// Validation: MMSI and position are required
if (dto.getMmsi() == null || dto.getMmsi().isBlank()
|| dto.getLat() == null || dto.getLon() == null) {
log.debug("유효하지 않은 데이터 스킵 - MMSI: {}, Lat: {}, Lon: {}",
dto.getMmsi(), dto.getLat(), dto.getLon());
return null;
}
// Parse messageTimestamp
OffsetDateTime messageTimestamp = parseTimestamp(dto.getMessageTimestamp());
if (messageTimestamp == null) {
log.debug("MessageTimestamp 파싱 실패 - MMSI: {}, Timestamp: {}",
dto.getMmsi(), dto.getMessageTimestamp());
return null;
}
return AisTargetEntity.builder()
.mmsi(dto.getMmsi())
.imo(dto.getImo())
.name(dto.getName())
.callsign(dto.getCallsign())
.vesselType(dto.getVesselType())
.extraInfo(dto.getExtraInfo())
.lat(dto.getLat())
.lon(dto.getLon())
.heading(dto.getHeading())
.sog(dto.getSog())
.cog(dto.getCog())
.rot(dto.getRot())
.length(dto.getLength())
.width(dto.getWidth())
.draught(dto.getDraught())
.destination(dto.getDestination())
.eta(parseEta(dto.getEta()))
.status(dto.getStatus())
.messageTimestamp(messageTimestamp)
.build();
}
private OffsetDateTime parseTimestamp(String timestamp) {
if (timestamp == null || timestamp.isEmpty()) {
return null;
}
try {
return OffsetDateTime.parse(timestamp, ISO_FORMATTER);
} catch (DateTimeParseException e) {
log.trace("타임스탬프 파싱 실패: {}", timestamp);
return null;
}
}
private OffsetDateTime parseEta(String eta) {
if (eta == null || eta.isEmpty() || "9999-12-31T23:59:59Z".equals(eta)) {
return null;
}
return parseTimestamp(eta);
}
}
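A small standalone sketch of the timestamp handling above, assuming, as the processor does, that timestamps arrive as ISO 8601 strings and that `9999-12-31T23:59:59Z` is the "no ETA" sentinel. The sample values are illustrative, not real API output:

```java
import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;

public class AisTimestampExample {
    // Same sentinel the processor above maps to "no ETA".
    static final String NO_ETA_SENTINEL = "9999-12-31T23:59:59Z";

    static OffsetDateTime parseEta(String eta) {
        if (eta == null || eta.isEmpty() || NO_ETA_SENTINEL.equals(eta)) {
            return null; // empty values and the sentinel both map to null
        }
        return OffsetDateTime.parse(eta, DateTimeFormatter.ISO_DATE_TIME);
    }

    public static void main(String[] args) {
        System.out.println(parseEta("2026-02-19T05:20:00Z")); // 2026-02-19T05:20Z
        System.out.println(parseEta(NO_ETA_SENTINEL));        // null
    }
}
```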

파일 보기

@ -1,333 +0,0 @@
package gc.mda.signal_batch.batch.processor;
import gc.mda.signal_batch.domain.vessel.model.VesselData;
import gc.mda.signal_batch.global.util.DataSourceLogger;
import lombok.Data;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.locationtech.jts.geom.*;
import org.locationtech.jts.io.WKTReader;
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;
import javax.sql.DataSource;
import jakarta.annotation.PostConstruct;
import java.math.BigDecimal;
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;
@Slf4j
@Component
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
@RequiredArgsConstructor
public class AreaStatisticsProcessor {
@Qualifier("queryJdbcTemplate")
private final JdbcTemplate queryJdbcTemplate;
@Qualifier("queryDataSource")
private final DataSource queryDataSource;
// Cache area info in memory
private final Map<String, AreaInfo> areaCache = new ConcurrentHashMap<>();
private final List<AreaInfo> areaList = new ArrayList<>();
// JTS objects
private final GeometryFactory geometryFactory = new GeometryFactory(new PrecisionModel(), 4326);
private final WKTReader wktReader = new WKTReader(geometryFactory);
@PostConstruct
public void init() {
log.info("========== AreaStatisticsProcessor Initialization ==========");
DataSourceLogger.logJdbcTemplateInfo("AreaStatisticsProcessor", queryJdbcTemplate);
// Verify the t_areas table exists
boolean tableExists = DataSourceLogger.checkTableExists(
"AreaStatisticsProcessor", queryJdbcTemplate, "signal", "t_areas"
);
if (!tableExists) {
log.error("CRITICAL: Table signal.t_areas does not exist in query database!");
log.error("Please run: scripts/sql/create-query-db-schema.sql on the query database");
} else {
// Load area info during initialization
loadAreas();
}
log.info("========== End of Initialization ==========");
}
@Data
public static class AreaInfo {
private String areaId;
private String areaName;
private String areaType;
private String geomWkt;
private Geometry geometry; // JTS Geometry object
private Envelope envelope; // Bounding Box for quick filtering
}
@Data
public static class AreaStatistics implements java.io.Serializable {
private String areaId;
private LocalDateTime timeBucket;
private Integer vesselCount;
private Integer inCount;
private Integer outCount;
private Map<String, VesselMovement> transitVessels;
private Map<String, VesselMovement> stationaryVessels;
private BigDecimal avgSog;
private LocalDateTime createdAt;
public AreaStatistics(String areaId, LocalDateTime timeBucket) {
this.areaId = areaId;
this.timeBucket = timeBucket;
this.vesselCount = 0;
this.inCount = 0;
this.outCount = 0;
this.transitVessels = new HashMap<>();
this.stationaryVessels = new HashMap<>();
this.avgSog = BigDecimal.ZERO;
}
}
@Data
public static class VesselMovement implements java.io.Serializable {
private String vesselKey;
private LocalDateTime enterTime;
private LocalDateTime exitTime;
private BigDecimal avgSpeed;
private Integer pointCount;
}
@StepScope
public ItemProcessor<List<VesselData>, List<AreaStatistics>> batchProcessor() {
return batchProcessor(null);
}
@StepScope
public ItemProcessor<List<VesselData>, List<AreaStatistics>> batchProcessor(
@Value("#{jobParameters['timeBucketMinutes']}") Integer bucketMinutes) {
return items -> {
// Return an empty result when no areas are loaded
if (areaList.isEmpty()) {
log.warn("No areas loaded, skipping area statistics processing");
return new ArrayList<>();
}
Map<String, AreaStatistics> statsMap = new HashMap<>();
Map<String, Map<String, VesselMovement>> vesselTracker = new HashMap<>();
for (VesselData item : items) {
if (!item.isValidPosition()) {
continue;
}
// Find the containing areas in memory (no DB query!)
List<String> areaIds = findAreasForPointInMemory(item.getLat(), item.getLon());
int bucketSize = bucketMinutes != null ? bucketMinutes : 5; // changed to 5-minute buckets
LocalDateTime bucket = item.getMessageTime()
.truncatedTo(ChronoUnit.MINUTES)
.withMinute((item.getMessageTime().getMinute() / bucketSize) * bucketSize);
for (String areaId : areaIds) {
String statsKey = areaId + "_" + bucket.toString();
AreaStatistics stats = statsMap.computeIfAbsent(statsKey,
k -> new AreaStatistics(areaId, bucket)
);
// Track vessel movement
String vesselKey = item.getVesselKey();
Map<String, VesselMovement> areaVessels = vesselTracker.computeIfAbsent(
areaId, k -> new HashMap<>()
);
VesselMovement movement = areaVessels.computeIfAbsent(vesselKey,
k -> {
VesselMovement vm = new VesselMovement();
vm.setVesselKey(vesselKey);
vm.setEnterTime(item.getMessageTime());
vm.setPointCount(0);
vm.setAvgSpeed(BigDecimal.ZERO);
stats.setInCount(stats.getInCount() + 1);
return vm;
}
);
movement.setExitTime(item.getMessageTime());
movement.setPointCount(movement.getPointCount() + 1);
// Compute the average speed
if (item.getSog() != null) {
BigDecimal currentTotal = movement.getAvgSpeed()
.multiply(BigDecimal.valueOf(movement.getPointCount() - 1));
movement.setAvgSpeed(
currentTotal.add(item.getSog())
.divide(BigDecimal.valueOf(movement.getPointCount()), 2, BigDecimal.ROUND_HALF_UP)
);
}
// Stationary vs transit (staying 10+ minutes counts as stationary)
long stayMinutes = ChronoUnit.MINUTES.between(
movement.getEnterTime(), movement.getExitTime()
);
if (stayMinutes > 10) {
stats.getStationaryVessels().put(vesselKey, movement);
} else {
stats.getTransitVessels().put(vesselKey, movement);
}
}
}
// Finalize the statistics
statsMap.values().forEach(stats -> {
stats.setVesselCount(
stats.getTransitVessels().size() + stats.getStationaryVessels().size()
);
// Compute the average speed
List<BigDecimal> allSpeeds = new ArrayList<>();
stats.getTransitVessels().values().stream()
.map(VesselMovement::getAvgSpeed)
.filter(Objects::nonNull)
.forEach(allSpeeds::add);
stats.getStationaryVessels().values().stream()
.map(VesselMovement::getAvgSpeed)
.filter(Objects::nonNull)
.forEach(allSpeeds::add);
if (!allSpeeds.isEmpty()) {
BigDecimal totalSpeed = allSpeeds.stream()
.reduce(BigDecimal.ZERO, BigDecimal::add);
stats.setAvgSog(
totalSpeed.divide(BigDecimal.valueOf(allSpeeds.size()), 2, BigDecimal.ROUND_HALF_UP)
);
}
});
return new ArrayList<>(statsMap.values());
};
}
private void loadAreas() {
log.info("Loading areas from query database...");
DataSourceLogger.logJdbcTemplateInfo("AreaStatisticsProcessor.loadAreas", queryJdbcTemplate);
String sql = "SELECT area_id, area_name, area_type, public.ST_AsText(area_geom) as geom_wkt FROM signal.t_areas";
try {
boolean exists = DataSourceLogger.checkTableExists(
"AreaStatisticsProcessor.loadAreas", queryJdbcTemplate, "signal", "t_areas"
);
if (exists) {
List<AreaInfo> areas = queryJdbcTemplate.query(sql, (rs, rowNum) -> {
AreaInfo area = new AreaInfo();
area.setAreaId(rs.getString("area_id"));
area.setAreaName(rs.getString("area_name"));
area.setAreaType(rs.getString("area_type"));
area.setGeomWkt(rs.getString("geom_wkt"));
// Convert the WKT to a JTS Geometry
try {
Geometry geom = wktReader.read(area.getGeomWkt());
area.setGeometry(geom);
area.setEnvelope(geom.getEnvelopeInternal());
} catch (Exception e) {
log.error("Failed to parse WKT for area {}: {}", area.getAreaId(), e.getMessage());
}
return area;
});
areas.forEach(area -> {
areaCache.put(area.getAreaId(), area);
areaList.add(area);
});
log.info("Successfully loaded {} areas into memory cache", areas.size());
log.info("Area types: {}", areas.stream()
.collect(java.util.stream.Collectors.groupingBy(
AreaInfo::getAreaType,
java.util.stream.Collectors.counting()
)));
} else {
log.error("Cannot load areas - table signal.t_areas does not exist!");
}
} catch (Exception e) {
log.error("Failed to load areas", e);
}
}
/**
* Finds the areas containing a point, entirely in memory (no DB query!).
*/
public List<String> findAreasForPointInMemory(double lat, double lon) {
// Create a JTS Point
Point point = geometryFactory.createPoint(new Coordinate(lon, lat));
return areaList.parallelStream()
.filter(area -> area.getGeometry() != null)
.filter(area -> area.getEnvelope().contains(lon, lat))
.filter(area -> {
try {
return area.getGeometry().contains(point);
} catch (Exception e) {
return false;
}
})
.map(AreaInfo::getAreaId)
.collect(Collectors.toList());
// List<String> areaIds = new ArrayList<>();
// // Check contains against every area
// for (AreaInfo area : areaList) {
// if (area.getGeometry() == null) {
// continue;
// }
//
// // 1. Fast filtering by envelope (bounding box)
// if (!area.getEnvelope().contains(lon, lat)) {
// continue;
// }
//
// // 2. Exact contains check
// try {
// if (area.getGeometry().contains(point)) {
// areaIds.add(area.getAreaId());
// }
// } catch (Exception e) {
// log.debug("Error checking contains for area {}: {}", area.getAreaId(), e.getMessage());
// }
// }
//
// return areaIds;
}
/**
* Cache status lookup (for debugging/monitoring).
*/
public Map<String, Object> getCacheStats() {
Map<String, Object> stats = new HashMap<>();
stats.put("loadedAreas", areaList.size());
stats.put("areaTypes", areaList.stream()
.collect(java.util.stream.Collectors.groupingBy(
AreaInfo::getAreaType,
java.util.stream.Collectors.counting()
)));
return stats;
}
}

파일 보기

@ -46,8 +46,8 @@ public abstract class BaseTrackProcessorWithAbnormalDetection implements ItemPro
AbnormalDetectionResult result = abnormalTrackDetector.detectBucketTransitionOnly(track, previousTrack);
if (result.hasAbnormalities()) {
log.debug("Abnormal track detected for vessel {}/{} at {}: {}",
track.getSigSrcCd(), track.getTargetId(), track.getTimeBucket(),
log.debug("Abnormal track detected for vessel {} at {}: {}",
track.getMmsi(), track.getTimeBucket(),
result.getAbnormalSegments().size());
}
@ -60,12 +60,11 @@ public abstract class BaseTrackProcessorWithAbnormalDetection implements ItemPro
protected VesselTrack getPreviousBucketLastTrack(VesselTrack.VesselKey vesselKey) {
try {
String sql = """
SELECT sig_src_cd, target_id, time_bucket,
SELECT mmsi, time_bucket,
end_position,
public.ST_AsText(public.ST_LineSubstring(track_geom, 0.9, 1.0)) as last_segment
FROM %s
WHERE sig_src_cd = ?
AND target_id = ?
WHERE mmsi = ?
AND time_bucket >= ?
AND time_bucket < ?
ORDER BY time_bucket DESC
@ -83,14 +82,13 @@ public abstract class BaseTrackProcessorWithAbnormalDetection implements ItemPro
return jdbcTemplate.queryForObject(sql,
(rs, rowNum) -> {
return VesselTrack.builder()
.sigSrcCd(rs.getString("sig_src_cd"))
.targetId(rs.getString("target_id"))
.mmsi(rs.getString("mmsi"))
.timeBucket(rs.getTimestamp("time_bucket").toLocalDateTime())
.trackGeom(rs.getString("last_segment"))
.endPosition(parseEndPosition(rs.getString("end_position")))
.build();
},
vesselKey.getSigSrcCd(), vesselKey.getTargetId(), previousBucketTimestamp, currentBucketTimestamp
vesselKey.getMmsi(), previousBucketTimestamp, currentBucketTimestamp
);
} catch (Exception e) {
log.debug("No previous bucket track found for vessel {}", vesselKey);

파일 보기

@ -39,8 +39,7 @@ public class DailyTrackProcessor implements ItemProcessor<VesselTrack.VesselKey,
WITH ordered_tracks AS (
SELECT *
FROM signal.t_vessel_tracks_hourly
WHERE sig_src_cd = ?
AND target_id = ?
WHERE mmsi = ?
AND time_bucket >= ?
AND time_bucket < ?
AND track_geom IS NOT NULL
@ -49,28 +48,26 @@ public class DailyTrackProcessor implements ItemProcessor<VesselTrack.VesselKey,
),
merged_coords AS (
SELECT
sig_src_cd,
target_id,
mmsi,
string_agg(
substring(public.ST_AsText(track_geom) from 'M \\((.+)\\)'),
','
ORDER BY time_bucket
) FILTER (WHERE track_geom IS NOT NULL) as all_coords
FROM ordered_tracks
GROUP BY sig_src_cd, target_id
GROUP BY mmsi
),
merged_tracks AS (
SELECT
mc.sig_src_cd,
mc.target_id,
mc.mmsi,
TO_TIMESTAMP(?, 'YYYY-MM-DD HH24:MI:SS') as time_bucket,
public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')') as merged_geom,
(SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
(SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
(SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
(SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
(SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
(SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
(SELECT MAX(max_speed) FROM ordered_tracks WHERE mmsi = mc.mmsi) as max_speed,
(SELECT SUM(point_count) FROM ordered_tracks WHERE mmsi = mc.mmsi) as total_points,
(SELECT MIN(time_bucket) FROM ordered_tracks WHERE mmsi = mc.mmsi) as start_time,
(SELECT MAX(time_bucket) FROM ordered_tracks WHERE mmsi = mc.mmsi) as end_time,
(SELECT start_position FROM ordered_tracks WHERE mmsi = mc.mmsi ORDER BY time_bucket LIMIT 1) as start_pos,
(SELECT end_position FROM ordered_tracks WHERE mmsi = mc.mmsi ORDER BY time_bucket DESC LIMIT 1) as end_pos
FROM merged_coords mc
),
calculated_tracks AS (
@ -89,8 +86,7 @@ public class DailyTrackProcessor implements ItemProcessor<VesselTrack.VesselKey,
FROM merged_tracks
)
SELECT
sig_src_cd,
target_id,
mmsi,
time_bucket,
merged_geom,
total_distance,
@ -112,13 +108,12 @@ public class DailyTrackProcessor implements ItemProcessor<VesselTrack.VesselKey,
LocalDateTime startTime = dayBucket;
LocalDateTime endTime = dayBucket.plusDays(1);
// Convert to java.sql.Timestamp for proper PostgreSQL type handling
Timestamp startTimestamp = Timestamp.valueOf(startTime);
Timestamp endTimestamp = Timestamp.valueOf(endTime);
Timestamp dayBucketTimestamp = Timestamp.valueOf(dayBucket);
log.debug("DailyTrackProcessor params - sig_src_cd: {}, target_id: {}, startTime: {}, endTime: {}, dayBucket: {}",
vesselKey.getSigSrcCd(), vesselKey.getTargetId(), startTimestamp, endTimestamp, dayBucketTimestamp);
log.debug("DailyTrackProcessor params - mmsi: {}, startTime: {}, endTime: {}, dayBucket: {}",
vesselKey.getMmsi(), startTimestamp, endTimestamp, dayBucketTimestamp);
try {
return jdbcTemplate.queryForObject(sql,
@ -129,22 +124,21 @@ public class DailyTrackProcessor implements ItemProcessor<VesselTrack.VesselKey,
throw new RuntimeException("Failed to build daily track", e);
}
},
vesselKey.getSigSrcCd(), vesselKey.getTargetId(),
vesselKey.getMmsi(),
startTimestamp, endTimestamp, dayBucketTimestamp
);
} catch (org.springframework.dao.EmptyResultDataAccessException e) {
log.warn("No hourly data found for vessel {} in time range {}-{}, skipping daily aggregation",
vesselKey.getSigSrcCd() + "_" + vesselKey.getTargetId(), startTimestamp, endTimestamp);
vesselKey.getMmsi(), startTimestamp, endTimestamp);
return null;
} catch (Exception e) {
log.error("Failed to process daily track for vessel {}: {}",
vesselKey.getSigSrcCd() + "_" + vesselKey.getTargetId(), e.getMessage(), e);
vesselKey.getMmsi(), e.getMessage(), e);
return null;
}
}
private VesselTrack buildDailyTrack(ResultSet rs, LocalDateTime dayBucket) throws Exception {
// Extract the start/end positions
VesselTrack.TrackPosition startPos = null;
VesselTrack.TrackPosition endPos = null;
@ -154,30 +148,23 @@ public class DailyTrackProcessor implements ItemProcessor<VesselTrack.VesselKey,
if (startPosJson != null) {
startPos = parseTrackPosition(startPosJson);
}
if (endPosJson != null) {
endPos = parseTrackPosition(endPosJson);
}
// The M values were already recomputed in SQL
String dailyLineStringM = rs.getString("geom_text");
// Simplify the daily track (drop points within 20 m, keep at most 30-minute gaps)
String simplifiedLineStringM = TrackSimplificationUtils.simplifyDailyTrack(dailyLineStringM);
// Log simplification stats
if (!dailyLineStringM.equals(simplifiedLineStringM)) {
TrackSimplificationUtils.SimplificationStats stats =
TrackSimplificationUtils.getSimplificationStats(dailyLineStringM, simplifiedLineStringM);
log.debug("일별 궤적 간소화 - vessel: {}/{}, 원본: {}포인트, 간소화: {}포인트 ({}% 감소)",
rs.getString("sig_src_cd"), rs.getString("target_id"),
log.debug("일별 궤적 간소화 - vessel: {}, 원본: {}포인트, 간소화: {}포인트 ({}% 감소)",
rs.getString("mmsi"),
stats.originalPoints, stats.simplifiedPoints, (int)stats.reductionRate);
}
// Only track_geom is used
return VesselTrack.builder()
.sigSrcCd(rs.getString("sig_src_cd"))
.targetId(rs.getString("target_id"))
.mmsi(rs.getString("mmsi"))
.timeBucket(dayBucket)
.trackGeom(simplifiedLineStringM)
.distanceNm(rs.getBigDecimal("total_distance"))

파일 보기

@ -0,0 +1,293 @@
package gc.mda.signal_batch.batch.processor;
import gc.mda.signal_batch.batch.processor.AbnormalTrackDetector.AbnormalDetectionResult;
import gc.mda.signal_batch.domain.vessel.model.VesselTrack;
import gc.mda.signal_batch.global.util.LineStringMUtils;
import gc.mda.signal_batch.global.util.TrackSimplificationUtils;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.ExitStatus;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.StepExecutionListener;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.jdbc.core.JdbcTemplate;
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.sql.Timestamp;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.*;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
/**
* In-memory hourly track merge processor.
*
* Takes a list of 5-minute tracks and:
* 1. Joins the WKT coordinates (plain Java String work)
* 2. Aggregates statistics (distance, speed, pointCount)
* 3. Simplifies the track (TrackSimplificationUtils)
* 4. Detects abnormalities (previous bucket bulk-prefetched once)
*
* Removes the N+1 SQL pattern: at most one DB query (the previous-bucket prefetch for abnormality detection).
*/
@Slf4j
public class HourlyTrackMergeProcessor
implements ItemProcessor<List<VesselTrack>, AbnormalDetectionResult>, StepExecutionListener {
private static final Pattern WKT_COORDS_PATTERN = Pattern.compile("LINESTRING M\\((.+)\\)");
private static final DateTimeFormatter TIMESTAMP_FORMATTER = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
private final AbnormalTrackDetector abnormalTrackDetector;
private final JdbcTemplate queryJdbcTemplate;
private final LocalDateTime hourBucket;
// Lazy-init: previous-bucket data is bulk-prefetched once
private Map<String, VesselTrack> previousBucketCache;
private boolean previousBucketLoaded = false;
// Step-level aggregate counters
private int totalProcessed = 0;
private int mergeFailCount = 0;
private int simplifiedCount = 0;
private int abnormalCount = 0;
private int avgSpeedFailCount = 0;
public HourlyTrackMergeProcessor(
AbnormalTrackDetector abnormalTrackDetector,
JdbcTemplate queryJdbcTemplate,
LocalDateTime hourBucket) {
this.abnormalTrackDetector = abnormalTrackDetector;
this.queryJdbcTemplate = queryJdbcTemplate;
this.hourBucket = hourBucket;
}
@Override
public AbnormalDetectionResult process(List<VesselTrack> fiveMinTracks) throws Exception {
if (fiveMinTracks == null || fiveMinTracks.isEmpty()) {
return null;
}
String mmsi = fiveMinTracks.get(0).getMmsi();
totalProcessed++;
// Step 1: Merge the WKT coordinates
String mergedWkt = mergeTrackGeometries(fiveMinTracks);
if (mergedWkt == null) {
mergeFailCount++;
return null;
}
// Step 2: aggregate statistics
BigDecimal totalDistance = BigDecimal.ZERO;
BigDecimal maxSpeed = BigDecimal.ZERO;
int totalPoints = 0;
for (VesselTrack track : fiveMinTracks) {
if (track.getDistanceNm() != null) {
totalDistance = totalDistance.add(track.getDistanceNm());
}
if (track.getMaxSpeed() != null && track.getMaxSpeed().compareTo(maxSpeed) > 0) {
maxSpeed = track.getMaxSpeed();
}
if (track.getPointCount() != null) {
totalPoints += track.getPointCount();
}
}
// avgSpeed: computed from the M-value (Unix timestamp) time difference
BigDecimal avgSpeed = calculateAvgSpeed(mergedWkt, totalDistance);
VesselTrack.TrackPosition startPos = fiveMinTracks.get(0).getStartPosition();
VesselTrack.TrackPosition endPos = fiveMinTracks.get(fiveMinTracks.size() - 1).getEndPosition();
// Step 3: simplify
String simplifiedWkt = TrackSimplificationUtils.simplifyHourlyTrack(mergedWkt);
int simplifiedPoints = countWktPoints(simplifiedWkt);
if (!mergedWkt.equals(simplifiedWkt)) {
simplifiedCount++;
}
VesselTrack hourlyTrack = VesselTrack.builder()
.mmsi(mmsi)
.timeBucket(hourBucket)
.trackGeom(simplifiedWkt)
.distanceNm(totalDistance)
.avgSpeed(avgSpeed)
.maxSpeed(maxSpeed)
.pointCount(simplifiedPoints > 0 ? simplifiedPoints : totalPoints)
.startPosition(startPos)
.endPosition(endPos)
.build();
// Step 4: abnormal detection (lazy-init, single bulk prefetch)
if (!previousBucketLoaded) {
previousBucketCache = bulkFetchPreviousBucketTracks();
previousBucketLoaded = true;
}
VesselTrack prevTrack = previousBucketCache.get(mmsi);
AbnormalDetectionResult result = abnormalTrackDetector.detectBucketTransitionOnly(hourlyTrack, prevTrack);
if (result.hasAbnormalities()) {
abnormalCount++;
}
return result;
}
/**
* Joins the WKT coordinates of the 5-minute tracks into a single linestring.
*/
private String mergeTrackGeometries(List<VesselTrack> tracks) {
StringBuilder allCoords = new StringBuilder();
for (VesselTrack track : tracks) {
String wkt = track.getTrackGeom();
if (wkt == null || wkt.isEmpty()) continue;
Matcher matcher = WKT_COORDS_PATTERN.matcher(wkt);
if (matcher.find()) {
String coords = matcher.group(1);
if (!coords.isBlank()) {
if (allCoords.length() > 0) {
allCoords.append(", ");
}
allCoords.append(coords);
}
}
}
if (allCoords.length() == 0) {
return null;
}
return "LINESTRING M(" + allCoords + ")";
}
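// Worked example (hypothetical coordinates): merging
//   "LINESTRING M(126.10 35.10 1700000000, 126.20 35.20 1700000150)" and
//   "LINESTRING M(126.30 35.30 1700000300)"
// yields "LINESTRING M(126.10 35.10 1700000000, 126.20 35.20 1700000150, 126.30 35.30 1700000300)".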
/**
* Computes the average speed from M values (Unix timestamps).
*/
private BigDecimal calculateAvgSpeed(String wkt, BigDecimal totalDistance) {
try {
Matcher matcher = WKT_COORDS_PATTERN.matcher(wkt);
if (!matcher.find()) return BigDecimal.ZERO;
String coords = matcher.group(1);
String[] points = coords.split(",");
if (points.length < 2) return BigDecimal.ZERO;
// M value of the first point
String[] firstParts = points[0].trim().split("\\s+");
double firstM = firstParts.length >= 3 ? Double.parseDouble(firstParts[2]) : 0;
// M value of the last point
String[] lastParts = points[points.length - 1].trim().split("\\s+");
double lastM = lastParts.length >= 3 ? Double.parseDouble(lastParts[2]) : 0;
double timeDiffSeconds = lastM - firstM;
if (timeDiffSeconds <= 0) return BigDecimal.ZERO;
double timeDiffHours = timeDiffSeconds / 3600.0;
double avgSpeedVal = totalDistance.doubleValue() / timeDiffHours;
// Cap unrealistic speeds
avgSpeedVal = Math.min(avgSpeedVal, 9999.99);
return BigDecimal.valueOf(avgSpeedVal).setScale(2, RoundingMode.HALF_UP);
} catch (Exception e) {
avgSpeedFailCount++;
return BigDecimal.ZERO;
}
}
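// Worked example (hypothetical values): first M = 1700000000, last M = 1700001800
// gives timeDiffSeconds = 1800 (0.5 h); with totalDistance = 6.0 NM,
// avgSpeed = 6.0 / 0.5 = 12.00 kn (capped at 9999.99).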
@Override
public ExitStatus afterStep(StepExecution stepExecution) {
log.debug("Hourly 병합 처리 집계 — 총: {}, 병합실패: {}, 간소화: {}, 비정상: {}, avgSpeed실패: {}",
totalProcessed, mergeFailCount, simplifiedCount, abnormalCount, avgSpeedFailCount);
return null;
}
private int countWktPoints(String wkt) {
if (wkt == null || !wkt.startsWith("LINESTRING M")) return 0;
try {
String coords = wkt.substring("LINESTRING M(".length(), wkt.length() - 1);
return coords.split(",").length;
} catch (Exception e) {
return 0;
}
}
/**
* Bulk-prefetches, per MMSI, the last 5-minute track of the previous hour for abnormal detection.
*/
private Map<String, VesselTrack> bulkFetchPreviousBucketTracks() {
LocalDateTime prevStart = hourBucket.minusHours(1);
LocalDateTime prevEnd = hourBucket;
String sql = """
SELECT DISTINCT ON (mmsi)
mmsi, time_bucket, end_position,
public.ST_AsText(public.ST_LineSubstring(track_geom, 0.9, 1.0)) as last_segment
FROM signal.t_vessel_tracks_5min
WHERE time_bucket >= ? AND time_bucket < ?
AND track_geom IS NOT NULL
ORDER BY mmsi, time_bucket DESC
""";
Map<String, VesselTrack> result = new HashMap<>();
int[] parseFailCount = {0};
try {
queryJdbcTemplate.query(sql,
ps -> {
ps.setTimestamp(1, Timestamp.valueOf(prevStart));
ps.setTimestamp(2, Timestamp.valueOf(prevEnd));
},
rs -> {
String mmsi = rs.getString("mmsi");
VesselTrack.TrackPosition endPos = parseEndPosition(rs.getString("end_position"));
if (endPos == null && rs.getString("end_position") != null) {
parseFailCount[0]++;
}
VesselTrack track = VesselTrack.builder()
.mmsi(mmsi)
.timeBucket(rs.getTimestamp("time_bucket").toLocalDateTime())
.trackGeom(rs.getString("last_segment"))
.endPosition(endPos)
.build();
result.put(mmsi, track);
});
log.info("이전 버킷 트랙 prefetch 완료: {} 선박 (기간: {} ~ {})",
result.size(), prevStart, prevEnd);
if (parseFailCount[0] > 0) {
log.debug("end_position 파싱 실패: {} 건", parseFailCount[0]);
}
} catch (Exception e) {
log.warn("이전 버킷 트랙 prefetch 실패 (첫 실행일 수 있음): {}", e.getMessage());
}
return result;
}
private VesselTrack.TrackPosition parseEndPosition(String json) {
if (json == null) return null;
try {
String lat = LineStringMUtils.extractJsonValue(json, "lat");
String lon = LineStringMUtils.extractJsonValue(json, "lon");
String time = LineStringMUtils.extractJsonValue(json, "time");
String sog = LineStringMUtils.extractJsonValue(json, "sog");
return VesselTrack.TrackPosition.builder()
.lat(lat != null ? Double.parseDouble(lat) : null)
.lon(lon != null ? Double.parseDouble(lon) : null)
.time(time != null ? LocalDateTime.parse(time, TIMESTAMP_FORMATTER) : null)
.sog(sog != null ? new BigDecimal(sog) : null)
.build();
} catch (Exception e) {
return null;
}
}
}
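The lazy bulk prefetch above is the heart of the N+1 fix: the first `process()` call loads the whole previous bucket in one query, and every later call is an in-memory map hit. A minimal standalone sketch of the pattern (hypothetical `PrefetchedLookup`, not one of the project's classes):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Lazy bulk-prefetch: one bulk query on first use instead of one query per item.
final class PrefetchedLookup<V> {
    private final Supplier<Map<String, V>> bulkLoader; // e.g. wraps a single SQL query
    private Map<String, V> cache;                      // null until first lookup

    PrefetchedLookup(Supplier<Map<String, V>> bulkLoader) {
        this.bulkLoader = bulkLoader;
    }

    V get(String key) {
        if (cache == null) {
            cache = new HashMap<>(bulkLoader.get());   // the only round trip
        }
        return cache.get(key);
    }
}
```

In `HourlyTrackMergeProcessor`, `previousBucketLoaded`/`previousBucketCache` and `bulkFetchPreviousBucketTracks()` play exactly these roles, keyed by MMSI.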

파일 보기

@ -1,207 +0,0 @@
package gc.mda.signal_batch.batch.processor;
import gc.mda.signal_batch.domain.vessel.model.VesselTrack;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.jdbc.core.JdbcTemplate;
import java.math.BigDecimal;
import java.sql.ResultSet;
import java.sql.Timestamp;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import gc.mda.signal_batch.global.util.LineStringMUtils;
import gc.mda.signal_batch.global.util.TrackSimplificationUtils;
import javax.sql.DataSource;
@Slf4j
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
@RequiredArgsConstructor
public class HourlyTrackProcessor implements ItemProcessor<VesselTrack.VesselKey, VesselTrack> {
private final DataSource queryDataSource;
private final JdbcTemplate jdbcTemplate;
private static final DateTimeFormatter TIMESTAMP_FORMATTER = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
@Override
public VesselTrack process(VesselTrack.VesselKey vesselKey) throws Exception {
LocalDateTime hourBucket = vesselKey.getTimeBucket()
.withMinute(0)
.withSecond(0)
.withNano(0);
String sql = """
WITH ordered_tracks AS (
SELECT *
FROM signal.t_vessel_tracks_5min
WHERE sig_src_cd = ?
AND target_id = ?
AND time_bucket >= ?
AND time_bucket < ?
AND track_geom IS NOT NULL
AND public.ST_NPoints(track_geom) > 0
ORDER BY time_bucket
),
merged_coords AS (
SELECT
sig_src_cd,
target_id,
string_agg(
substring(public.ST_AsText(track_geom) from 'M \\((.+)\\)'),
','
ORDER BY time_bucket
) FILTER (WHERE track_geom IS NOT NULL) as all_coords
FROM ordered_tracks
GROUP BY sig_src_cd, target_id
),
merged_tracks AS (
SELECT
mc.sig_src_cd,
mc.target_id,
TO_TIMESTAMP(?, 'YYYY-MM-DD HH24:MI:SS') as time_bucket,
public.ST_GeomFromText('LINESTRING M(' || mc.all_coords || ')') as merged_geom,
(SELECT MAX(max_speed) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as max_speed,
(SELECT SUM(point_count) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as total_points,
(SELECT MIN(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as start_time,
(SELECT MAX(time_bucket) FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id) as end_time,
(SELECT start_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket LIMIT 1) as start_pos,
(SELECT end_position FROM ordered_tracks WHERE sig_src_cd = mc.sig_src_cd AND target_id = mc.target_id ORDER BY time_bucket DESC LIMIT 1) as end_pos
FROM merged_coords mc
),
calculated_tracks AS (
SELECT
*,
public.ST_Length(merged_geom::geography) / 1852.0 as total_distance,
CASE
WHEN public.ST_NPoints(merged_geom) > 0 THEN
public.ST_M(public.ST_PointN(merged_geom, public.ST_NPoints(merged_geom))) -
public.ST_M(public.ST_PointN(merged_geom, 1))
ELSE
EXTRACT(EPOCH FROM
TO_TIMESTAMP(end_pos->>'time', 'YYYY-MM-DD HH24:MI:SS') - TO_TIMESTAMP(start_pos->>'time', 'YYYY-MM-DD HH24:MI:SS')
)
END as time_diff_seconds
FROM merged_tracks
)
SELECT
sig_src_cd,
target_id,
time_bucket,
merged_geom,
total_distance,
CASE
WHEN time_diff_seconds > 0 THEN
CAST(LEAST((total_distance / (time_diff_seconds / 3600.0)), 9999.99) AS numeric(6,2))
ELSE 0
END as avg_speed,
max_speed,
total_points,
start_time,
end_time,
start_pos,
end_pos,
public.ST_AsText(merged_geom) as geom_text
FROM calculated_tracks
""";
LocalDateTime startTime = hourBucket;
LocalDateTime endTime = hourBucket.plusHours(1);
// Convert to java.sql.Timestamp for proper PostgreSQL type handling
Timestamp startTimestamp = Timestamp.valueOf(startTime);
Timestamp endTimestamp = Timestamp.valueOf(endTime);
Timestamp hourBucketTimestamp = Timestamp.valueOf(hourBucket);
log.debug("HourlyTrackProcessor params - sig_src_cd: {}, target_id: {}, startTime: {}, endTime: {}, hourBucket: {}",
vesselKey.getSigSrcCd(), vesselKey.getTargetId(), startTimestamp, endTimestamp, hourBucketTimestamp);
try {
return jdbcTemplate.queryForObject(sql,
(rs, rowNum) -> {
try {
return buildHourlyTrack(rs, hourBucket);
} catch (Exception e) {
throw new RuntimeException("Failed to build hourly track", e);
}
},
vesselKey.getSigSrcCd(), vesselKey.getTargetId(),
startTimestamp, endTimestamp, hourBucketTimestamp
);
} catch (org.springframework.dao.EmptyResultDataAccessException e) {
log.warn("No 5min data found for vessel {} in time range {}-{}, skipping hourly aggregation",
vesselKey.getSigSrcCd() + "_" + vesselKey.getTargetId(), startTimestamp, endTimestamp);
return null;
} catch (Exception e) {
log.error("Failed to process hourly track for vessel {}: {}",
vesselKey.getSigSrcCd() + "_" + vesselKey.getTargetId(), e.getMessage(), e);
return null;
}
}
private VesselTrack buildHourlyTrack(ResultSet rs, LocalDateTime hourBucket) throws Exception {
// Extract start/end positions
VesselTrack.TrackPosition startPos = null;
VesselTrack.TrackPosition endPos = null;
String startPosJson = rs.getString("start_pos");
String endPosJson = rs.getString("end_pos");
if (startPosJson != null) {
startPos = parseTrackPosition(startPosJson);
}
if (endPosJson != null) {
endPos = parseTrackPosition(endPosJson);
}
// M값은 이미 SQL에서 재계산됨
String hourlyLineStringM = rs.getString("geom_text");
// Simplify near-stationary points (drop points within 10 m, at most 10-minute gaps)
String simplifiedLineStringM = TrackSimplificationUtils.simplifyHourlyTrack(hourlyLineStringM);
// Log simplification statistics
if (!hourlyLineStringM.equals(simplifiedLineStringM)) {
TrackSimplificationUtils.SimplificationStats stats =
TrackSimplificationUtils.getSimplificationStats(hourlyLineStringM, simplifiedLineStringM);
log.debug("시간별 궤적 간소화 - vessel: {}/{}, 원본: {}포인트, 간소화: {}포인트 ({}% 감소)",
rs.getString("sig_src_cd"), rs.getString("target_id"),
stats.originalPoints, stats.simplifiedPoints, (int)stats.reductionRate);
}
// Use track_geom only
return VesselTrack.builder()
.sigSrcCd(rs.getString("sig_src_cd"))
.targetId(rs.getString("target_id"))
.timeBucket(hourBucket)
.trackGeom(simplifiedLineStringM)
.distanceNm(rs.getBigDecimal("total_distance"))
.avgSpeed(rs.getBigDecimal("avg_speed"))
.maxSpeed(rs.getBigDecimal("max_speed"))
.pointCount(rs.getInt("total_points"))
.startPosition(startPos)
.endPosition(endPos)
.build();
}
private VesselTrack.TrackPosition parseTrackPosition(String json) {
try {
String latStr = LineStringMUtils.extractJsonValue(json, "lat");
String lonStr = LineStringMUtils.extractJsonValue(json, "lon");
String timeStr = LineStringMUtils.extractJsonValue(json, "time");
String sogStr = LineStringMUtils.extractJsonValue(json, "sog");
return VesselTrack.TrackPosition.builder()
.lat(latStr != null ? Double.parseDouble(latStr) : null)
.lon(lonStr != null ? Double.parseDouble(lonStr) : null)
.time(timeStr != null ? LocalDateTime.parse(timeStr, TIMESTAMP_FORMATTER) : null)
.sog(sogStr != null ? new BigDecimal(sogStr) : null)
.build();
} catch (Exception e) {
log.error("Failed to parse track position: {}", json, e);
return null;
}
}
}

파일 보기

@ -1,38 +0,0 @@
package gc.mda.signal_batch.batch.processor;
import gc.mda.signal_batch.domain.vessel.model.VesselTrack;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import javax.sql.DataSource;
import java.time.LocalDateTime;
/**
* Hourly track processor with abnormal-track detection.
*/
@Slf4j
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
public class HourlyTrackProcessorWithAbnormalDetection extends BaseTrackProcessorWithAbnormalDetection {
public HourlyTrackProcessorWithAbnormalDetection(
ItemProcessor<VesselTrack.VesselKey, VesselTrack> hourlyTrackProcessor,
AbnormalTrackDetector abnormalTrackDetector,
DataSource queryDataSource) {
super(hourlyTrackProcessor, abnormalTrackDetector, queryDataSource);
}
@Override
protected String getPreviousTrackTableName() {
return "signal.t_vessel_tracks_5min";
}
@Override
protected LocalDateTime getNormalizedBucket(LocalDateTime timeBucket) {
return timeBucket.withMinute(0).withSecond(0).withNano(0);
}
@Override
protected LocalDateTime getPreviousBucket(LocalDateTime currentBucket) {
return currentBucket.minusHours(1);
}
}

파일 보기

@ -1,60 +0,0 @@
package gc.mda.signal_batch.batch.processor;
import gc.mda.signal_batch.domain.vessel.model.VesselData;
import gc.mda.signal_batch.domain.vessel.model.VesselLatestPosition;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.stereotype.Component;
import java.time.LocalDateTime;
import java.util.concurrent.ConcurrentHashMap;
@Slf4j
@Component
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
public class LatestPositionProcessor {
@StepScope
public ItemProcessor<VesselData, VesselLatestPosition> processor() {
// Keep only the latest position per vessel within the chunk
ConcurrentHashMap<String, VesselLatestPosition> latestMap = new ConcurrentHashMap<>();
return item -> {
if (!item.isValidPosition()) {
log.debug("Invalid position for vessel: {}", item.getVesselKey());
return null;
}
String key = item.getVesselKey();
VesselLatestPosition current = VesselLatestPosition.fromVesselData(item);
VesselLatestPosition existing = latestMap.get(key);
if (existing == null || current.getLastUpdate().isAfter(existing.getLastUpdate())) {
latestMap.put(key, current);
return current;
}
return null;
};
}
@StepScope
public ItemProcessor<VesselData, VesselLatestPosition> filteringProcessor(
LocalDateTime cutoffTime) {
return item -> {
// Process only data at or after the cutoff time
if (item.getMessageTime().isBefore(cutoffTime)) {
return null;
}
if (!item.isValidPosition()) {
return null;
}
return VesselLatestPosition.fromVesselData(item);
};
}
}

파일 보기

@ -1,291 +0,0 @@
package gc.mda.signal_batch.batch.processor;
import gc.mda.signal_batch.domain.gis.model.TileStatistics;
import gc.mda.signal_batch.domain.vessel.model.VesselData;
import gc.mda.signal_batch.global.util.HaeguGeoUtils;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;
import java.util.*;
@Slf4j
@Configuration
@ConditionalOnProperty(name = "vessel.batch.scheduler.enabled", havingValue = "true", matchIfMissing = true)
@RequiredArgsConstructor
public class TileAggregationProcessor {
private final HaeguGeoUtils geoUtils;
/**
* Creates a batch processor for the given tile level and time bucket.
*/
public ItemProcessor<List<VesselData>, List<TileStatistics>> batchProcessor(
int tileLevel, int timeBucketMinutes) {
return items -> {
if (items == null || items.isEmpty()) {
return null;
}
Map<String, TileStatistics> tileMap = new HashMap<>();
for (VesselData item : items) {
if (!item.isValidPosition()) {
continue;
}
LocalDateTime bucket = item.getMessageTime()
.truncatedTo(ChronoUnit.MINUTES)
.withMinute((item.getMessageTime().getMinute() / timeBucketMinutes) * timeBucketMinutes);
// Process according to the requested level
if (tileLevel >= 0) {
// Handle Level 0 (daehaegu, large fishing zone)
HaeguGeoUtils.HaeguTileInfo level0Info = geoUtils.getHaeguTileInfo(
item.getLat(), item.getLon(), 0
);
if (level0Info != null) {
String haeguKey = level0Info.tileId + "_" + bucket.toString();
TileStatistics haeguStats = tileMap.computeIfAbsent(haeguKey,
k -> TileStatistics.builder()
.tileId(level0Info.tileId)
.tileLevel(0)
.timeBucket(bucket)
.uniqueVessels(new HashMap<>())
.totalPoints(0L)
.avgSog(BigDecimal.ZERO)
.maxSog(BigDecimal.ZERO)
.build()
);
haeguStats.addVesselData(item);
}
}
if (tileLevel >= 1) {
// Handle Level 1 (sohaegu, small fishing zone)
HaeguGeoUtils.HaeguTileInfo level1Info = geoUtils.getHaeguTileInfo(
item.getLat(), item.getLon(), 1
);
if (level1Info != null && level1Info.sohaeguNo != null) {
String subKey = level1Info.tileId + "_" + bucket.toString();
TileStatistics subStats = tileMap.computeIfAbsent(subKey,
k -> TileStatistics.builder()
.tileId(level1Info.tileId)
.tileLevel(1)
.timeBucket(bucket)
.uniqueVessels(new HashMap<>())
.totalPoints(0L)
.avgSog(BigDecimal.ZERO)
.maxSog(BigDecimal.ZERO)
.build()
);
subStats.addVesselData(item);
}
}
}
// Compute density per tile
tileMap.values().forEach(this::calculateDensity);
return new ArrayList<>(tileMap.values());
};
}
@Bean
@StepScope
public ItemProcessor<List<VesselData>, List<TileStatistics>> tileAggregationBatchProcessor(
@Value("#{jobParameters['timeBucketMinutes']}") Integer timeBucketMinutes) {
final int bucketMinutes = (timeBucketMinutes != null) ? timeBucketMinutes : 5;
return items -> {
if (items == null || items.isEmpty()) {
return null;
}
Map<String, TileStatistics> tileMap = new HashMap<>();
for (VesselData item : items) {
if (!item.isValidPosition()) {
continue;
}
LocalDateTime bucket = item.getMessageTime()
.truncatedTo(ChronoUnit.MINUTES)
.withMinute((item.getMessageTime().getMinute() / bucketMinutes) * bucketMinutes);
// 1. Handle the daehaegu level (Level 0)
HaeguGeoUtils.HaeguTileInfo level0Info = geoUtils.getHaeguTileInfo(
item.getLat(), item.getLon(), 0
);
if (level0Info != null) {
String haeguKey = level0Info.tileId + "_" + bucket.toString();
TileStatistics haeguStats = tileMap.computeIfAbsent(haeguKey,
k -> TileStatistics.builder()
.tileId(level0Info.tileId)
.tileLevel(0) // daehaegu is level 0
.timeBucket(bucket)
.uniqueVessels(new HashMap<>())
.totalPoints(0L)
.avgSog(BigDecimal.ZERO)
.maxSog(BigDecimal.ZERO)
.build()
);
haeguStats.addVesselData(item);
}
// 2. Handle the sohaegu level (Level 1)
HaeguGeoUtils.HaeguTileInfo level1Info = geoUtils.getHaeguTileInfo(
item.getLat(), item.getLon(), 1
);
if (level1Info != null && level1Info.sohaeguNo != null) {
String subKey = level1Info.tileId + "_" + bucket.toString();
TileStatistics subStats = tileMap.computeIfAbsent(subKey,
k -> TileStatistics.builder()
.tileId(level1Info.tileId)
.tileLevel(1) // sohaegu is level 1
.timeBucket(bucket)
.uniqueVessels(new HashMap<>())
.totalPoints(0L)
.avgSog(BigDecimal.ZERO)
.maxSog(BigDecimal.ZERO)
.build()
);
subStats.addVesselData(item);
}
}
// Compute density per tile
tileMap.values().forEach(this::calculateDensity);
return new ArrayList<>(tileMap.values());
};
}
@Bean
@StepScope
public ItemProcessor<VesselData, List<TileStatistics>> singleItemProcessor(
@Value("#{jobParameters['tileLevel']}") Integer tileLevel,
@Value("#{jobParameters['timeBucketMinutes']}") Integer timeBucketMinutes) {
final int bucketMinutes = (timeBucketMinutes != null) ? timeBucketMinutes : 5;
final int maxLevel = (tileLevel != null) ? tileLevel : 1;
Map<String, TileStatistics> accumulator = new HashMap<>();
return item -> {
if (!item.isValidPosition()) {
return null;
}
LocalDateTime bucket = item.getMessageTime()
.truncatedTo(ChronoUnit.MINUTES)
.withMinute((item.getMessageTime().getMinute() / bucketMinutes) * bucketMinutes);
List<TileStatistics> result = new ArrayList<>();
// Level 0 (daehaegu)
if (maxLevel >= 0) {
HaeguGeoUtils.HaeguTileInfo level0Info = geoUtils.getHaeguTileInfo(
item.getLat(), item.getLon(), 0
);
if (level0Info != null) {
String key = level0Info.tileId + "_" + bucket.toString();
TileStatistics stats = accumulator.computeIfAbsent(key,
k -> TileStatistics.builder()
.tileId(level0Info.tileId)
.tileLevel(0)
.timeBucket(bucket)
.uniqueVessels(new HashMap<>())
.totalPoints(0L)
.avgSog(BigDecimal.ZERO)
.maxSog(BigDecimal.ZERO)
.build()
);
stats.addVesselData(item);
// Emit once enough points have accumulated (every 1,000th point)
if (stats.getTotalPoints() % 1000 == 0) {
calculateDensity(stats);
result.add(stats);
}
}
}
// Level 1 (sohaegu)
if (maxLevel >= 1) {
HaeguGeoUtils.HaeguTileInfo level1Info = geoUtils.getHaeguTileInfo(
item.getLat(), item.getLon(), 1
);
if (level1Info != null && level1Info.sohaeguNo != null) {
String key = level1Info.tileId + "_" + bucket.toString();
TileStatistics stats = accumulator.computeIfAbsent(key,
k -> TileStatistics.builder()
.tileId(level1Info.tileId)
.tileLevel(1)
.timeBucket(bucket)
.uniqueVessels(new HashMap<>())
.totalPoints(0L)
.avgSog(BigDecimal.ZERO)
.maxSog(BigDecimal.ZERO)
.build()
);
stats.addVesselData(item);
// Emit once enough points have accumulated (every 1,000th point)
if (stats.getTotalPoints() % 1000 == 0) {
calculateDensity(stats);
result.add(stats);
}
}
}
return result.isEmpty() ? null : result;
};
}
/**
* Computes the vessel density of a tile.
*/
private void calculateDensity(TileStatistics stats) {
if (stats.getVesselCount() == null || stats.getVesselCount() == 0) {
stats.setVesselDensity(BigDecimal.ZERO);
return;
}
// Get the tile area (km²)
double tileArea = geoUtils.getTileArea(stats.getTileId());
if (tileArea > 0) {
// density = vessel count / area
BigDecimal density = BigDecimal.valueOf(stats.getVesselCount())
.divide(BigDecimal.valueOf(tileArea), 6, RoundingMode.HALF_UP);
stats.setVesselDensity(density);
} else {
stats.setVesselDensity(BigDecimal.ZERO);
}
}
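// Worked example (hypothetical values): vesselCount = 12, tileArea = 25.0 km²
// gives density = 12 / 25.0 = 0.480000 vessels/km² (scale 6, HALF_UP).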
}

파일 보기

@ -76,8 +76,7 @@ public class VesselTrackProcessor implements ItemProcessor<List<VesselData>, Lis
.collect(Collectors.toList());
VesselTrack track = VesselTrack.builder()
.sigSrcCd(first.getSigSrcCd())
.targetId(first.getTargetId())
.mmsi(first.getMmsi())
.timeBucket(timeBucket)
.trackPoints(trackPoints)
.pointCount(trackPoints.size())

파일 보기

@ -0,0 +1,246 @@
package gc.mda.signal_batch.batch.reader;
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.RemovalCause;
import com.github.benmanes.caffeine.cache.stats.CacheStats;
import gc.mda.signal_batch.domain.vessel.model.AisTargetEntity;
import jakarta.annotation.PostConstruct;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;
import java.time.OffsetDateTime;
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
/**
* Caffeine cache manager for AIS targets.
*
* key: MMSI (String); supports devices with alphanumeric MMSIs
* value: AisTargetEntity
*
* Behavior:
* - the 1-minute API reader/writer path updates the cache
* - the 5-minute aggregation job takes a cache snapshot and converts it to VesselData
* - an entry is updated only when newer (by messageTimestamp) than the cached one
*
* TTL (per profile):
* - local: 5 min, dev: 60 min, prod/prod-mpr: 120 min
*/
@Slf4j
@Component
public class AisTargetCacheManager {
private Cache<String, AisTargetEntity> cache;
/**
* Track accumulation buffer: each 1-minute API call appends positions, and the 5-minute aggregation drains it.
* Draining is lock-free via an AtomicReference swap.
*/
private final AtomicReference<ConcurrentHashMap<String, List<AisTargetEntity>>> trackBufferRef =
new AtomicReference<>(new ConcurrentHashMap<>());
@Value("${app.cache.ais-target.ttl-minutes:120}")
private long ttlMinutes;
@Value("${app.cache.ais-target.max-size:300000}")
private int maxSize;
@PostConstruct
public void init() {
this.cache = Caffeine.newBuilder()
.maximumSize(maxSize)
.expireAfterWrite(ttlMinutes, TimeUnit.MINUTES)
.recordStats()
.removalListener((String key, AisTargetEntity value, RemovalCause cause) -> {
if (cause != RemovalCause.REPLACED) {
log.trace("캐시 제거 - MMSI: {}, 원인: {}", key, cause);
}
})
.build();
log.info("AIS Target Caffeine 캐시 초기화 - TTL: {}분, 최대 크기: {}", ttlMinutes, maxSize);
}
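// Note: expireAfterWrite counts from the last put, not the last read, so even
// frequently read entries expire unless refreshed; size-based evictions beyond
// maximumSize show up in stats.evictionCount().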
// ==================== Single-item get/update ====================
public Optional<AisTargetEntity> get(String mmsi) {
return Optional.ofNullable(cache.getIfPresent(mmsi));
}
public void put(AisTargetEntity entity) {
if (entity == null || entity.getMmsi() == null) {
return;
}
String mmsi = entity.getMmsi();
AisTargetEntity existing = cache.getIfPresent(mmsi);
if (existing == null || isNewer(entity, existing)) {
cache.put(mmsi, entity);
}
}
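// Note: getIfPresent followed by put is check-then-act rather than atomic; if
// writers ever overlap, cache.asMap().merge(...) would make newest-wins race-free.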
// ==================== Batch get/update ====================
public Map<String, AisTargetEntity> getAll(List<String> mmsiList) {
if (mmsiList == null || mmsiList.isEmpty()) {
return Collections.emptyMap();
}
return cache.getAllPresent(mmsiList);
}
/**
* Stores/updates multiple entries at once.
* An entry is updated only when newer than the cached one.
*/
public void putAll(List<AisTargetEntity> entities) {
if (entities == null || entities.isEmpty()) {
return;
}
int updated = 0;
int skipped = 0;
for (AisTargetEntity entity : entities) {
if (entity == null || entity.getMmsi() == null) {
continue;
}
AisTargetEntity existing = cache.getIfPresent(entity.getMmsi());
if (existing == null || isNewer(entity, existing)) {
cache.put(entity.getMmsi(), entity);
updated++;
} else {
skipped++;
}
}
log.debug("캐시 배치 업데이트 - 입력: {}, 업데이트: {}, 스킵: {}, 현재 크기: {}",
entities.size(), updated, skipped, cache.estimatedSize());
}
// ==================== Cache snapshot (for t_ais_position sync) ====================
/**
* Returns all cached entries (used by AisPositionSyncStep).
*/
public Collection<AisTargetEntity> getAllValues() {
return cache.asMap().values();
}
// ==================== Track accumulation buffer (for 5-min aggregation) ====================
/**
* Accumulates 1-minute API results into the track buffer.
* Builds per-MMSI position history used to generate the LineStringM in the 5-minute aggregation.
*/
public void appendAllForTrack(List<AisTargetEntity> entities) {
if (entities == null || entities.isEmpty()) {
return;
}
ConcurrentHashMap<String, List<AisTargetEntity>> buffer = trackBufferRef.get();
int appended = 0;
for (AisTargetEntity entity : entities) {
if (entity == null || entity.getMmsi() == null
|| entity.getLat() == null || entity.getLon() == null) {
continue;
}
buffer.computeIfAbsent(entity.getMmsi(),
k -> Collections.synchronizedList(new ArrayList<>())).add(entity);
appended++;
}
log.debug("트랙 버퍼 누적: {} 건 (버퍼 내 선박 수: {})", appended, buffer.size());
}
/**
* Drains the track buffer, returning it and swapping in a new one (called by the 5-minute aggregation job).
* The AtomicReference swap keeps draining lock-free against the 1-minute writer.
*
* @return accumulated positions per MMSI (typically ~5 points per MMSI)
*/
public Map<String, List<AisTargetEntity>> drainTrackBuffer() {
ConcurrentHashMap<String, List<AisTargetEntity>> drained =
trackBufferRef.getAndSet(new ConcurrentHashMap<>());
long totalPoints = drained.values().stream().mapToLong(List::size).sum();
log.info("트랙 버퍼 drain: {} 선박, {} 포인트", drained.size(), totalPoints);
return drained;
}
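// getAndSet swaps in the new map atomically; a writer that read the old
// reference just before the swap still appends into the drained map, so the
// hand-off stays lock-free with only a brief read-side race window.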
/**
* Current track-buffer size (for monitoring).
*/
public Map<String, Object> getTrackBufferStats() {
ConcurrentHashMap<String, List<AisTargetEntity>> buffer = trackBufferRef.get();
long totalPoints = buffer.values().stream().mapToLong(List::size).sum();
Map<String, Object> stats = new LinkedHashMap<>();
stats.put("vesselCount", buffer.size());
stats.put("totalPoints", totalPoints);
stats.put("avgPointsPerVessel", buffer.isEmpty() ? 0 : String.format("%.1f", (double) totalPoints / buffer.size()));
return stats;
}
// ==================== Cache management ====================
public void evict(String mmsi) {
cache.invalidate(mmsi);
}
public void clear() {
long size = cache.estimatedSize();
cache.invalidateAll();
log.info("캐시 전체 삭제 - {} 건", size);
}
public long size() {
return cache.estimatedSize();
}
public void cleanup() {
cache.cleanUp();
}
// ==================== Statistics ====================
public Map<String, Object> getStats() {
CacheStats stats = cache.stats();
Map<String, Object> result = new LinkedHashMap<>();
result.put("estimatedSize", cache.estimatedSize());
result.put("maxSize", maxSize);
result.put("ttlMinutes", ttlMinutes);
result.put("hitCount", stats.hitCount());
result.put("missCount", stats.missCount());
result.put("hitRate", String.format("%.2f%%", stats.hitRate() * 100));
result.put("evictionCount", stats.evictionCount());
result.put("utilizationPercent", String.format("%.2f%%", (cache.estimatedSize() * 100.0 / maxSize)));
return result;
}
public CacheStats getCacheStats() {
return cache.stats();
}
// ==================== Private ====================
private boolean isNewer(AisTargetEntity newEntity, AisTargetEntity existing) {
OffsetDateTime newTs = newEntity.getMessageTimestamp();
OffsetDateTime existingTs = existing.getMessageTimestamp();
if (newTs == null) return false;
if (existingTs == null) return true;
return newTs.isAfter(existingTs);
}
}
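A minimal sketch of the drain pattern in isolation (hypothetical `SwapBufferDemo`, not part of the project): writers append into whatever map the reference currently holds, and the drainer atomically replaces it.

```java
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicReference;

final class SwapBufferDemo {
    private final AtomicReference<ConcurrentHashMap<String, List<String>>> ref =
            new AtomicReference<>(new ConcurrentHashMap<>());

    // 1-minute writer side: append into the current buffer.
    void append(String mmsi, String point) {
        ref.get().computeIfAbsent(mmsi,
                k -> Collections.synchronizedList(new ArrayList<>())).add(point);
    }

    // 5-minute aggregation side: atomically swap in an empty buffer.
    Map<String, List<String>> drain() {
        return ref.getAndSet(new ConcurrentHashMap<>());
    }

    public static void main(String[] args) {
        SwapBufferDemo buf = new SwapBufferDemo();
        buf.append("440123456", "126.1 35.1 1700000000");
        buf.append("440123456", "126.2 35.2 1700000060");
        System.out.println(buf.drain()); // {440123456=[126.1 35.1 1700000000, 126.2 35.2 1700000060]}
        System.out.println(buf.drain()); // {} : the buffer was replaced
    }
}
```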

파일 보기

@ -0,0 +1,86 @@
package gc.mda.signal_batch.batch.reader;
import gc.mda.signal_batch.domain.vessel.dto.AisTargetApiResponse;
import gc.mda.signal_batch.domain.vessel.dto.AisTargetDto;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.item.ItemReader;
import org.springframework.web.reactive.function.client.WebClient;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
/**
* S&P Global AIS API reader (Spring Batch ItemReader).
*
* API: POST /AisSvc.svc/AIS/GetTargetsEnhanced
* Request: {"sinceSeconds": "60"}
* Response: ~33,000 records per call
*
* Behavior:
* - the first read() call fetches the full dataset from the API
* - each subsequent read() returns one item at a time (Spring Batch chunk processing)
* - after all items have been returned, read() returns null to end the step
*/
@Slf4j
public class AisTargetDataReader implements ItemReader<AisTargetDto> {
private static final String API_PATH = "/AisSvc.svc/AIS/GetTargetsEnhanced";
private final WebClient webClient;
private final int sinceSeconds;
private Iterator<AisTargetDto> iterator;
private boolean fetched = false;
public AisTargetDataReader(WebClient webClient, int sinceSeconds) {
this.webClient = webClient;
this.sinceSeconds = sinceSeconds;
}
@Override
public AisTargetDto read() {
if (!fetched) {
List<AisTargetDto> data = fetchDataFromApi();
this.iterator = data.iterator();
this.fetched = true;
}
if (iterator != null && iterator.hasNext()) {
return iterator.next();
}
// Step complete: reset state for the next execution
fetched = false;
iterator = null;
return null;
}
private List<AisTargetDto> fetchDataFromApi() {
try {
log.info("[AisTargetDataReader] API 호출 시작: POST {} (sinceSeconds: {})",
API_PATH, sinceSeconds);
AisTargetApiResponse response = webClient.post()
.uri(API_PATH)
.bodyValue(Map.of("sinceSeconds", String.valueOf(sinceSeconds)))
.retrieve()
.bodyToMono(AisTargetApiResponse.class)
.block();
if (response != null && response.getTargetArr() != null) {
List<AisTargetDto> targets = response.getTargetArr();
log.info("[AisTargetDataReader] API 호출 완료: {} 건 조회", targets.size());
return targets;
} else {
log.warn("[AisTargetDataReader] API 응답이 비어있습니다");
return Collections.emptyList();
}
} catch (Exception e) {
log.error("[AisTargetDataReader] API 호출 실패: {}", e.getMessage(), e);
return Collections.emptyList();
}
}
}
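The fetch-once/iterate contract above can be exercised without Spring or WebClient. A small sketch (hypothetical `OneShotReader`, standing in for the API-backed reader):

```java
import java.util.Iterator;
import java.util.List;

// Mirrors AisTargetDataReader's contract: lazy one-shot fetch, then one item per
// read(), then null to signal exhaustion, then state reset for the next run.
final class OneShotReader<T> {
    private final List<T> source;   // stands in for the API response
    private Iterator<T> iterator;
    private boolean fetched = false;

    OneShotReader(List<T> source) { this.source = source; }

    T read() {
        if (!fetched) {
            iterator = source.iterator(); // the "API call" happens once here
            fetched = true;
        }
        if (iterator.hasNext()) {
            return iterator.next();
        }
        fetched = false;                  // reset so the next execution refetches
        iterator = null;
        return null;                      // null tells Spring Batch the step is done
    }

    public static void main(String[] args) {
        OneShotReader<String> reader = new OneShotReader<>(List.of("t1", "t2"));
        System.out.println(reader.read()); // t1
        System.out.println(reader.read()); // t2
        System.out.println(reader.read()); // null
    }
}
```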

Some files were not shown because too many files have changed in this diff