
Commit cb20f03: first commit (0 parents)

15 files changed, +119829 −0 lines

.github/workflows/docker-build.yml

Lines changed: 29 additions & 0 deletions
name: Build and Push Docker Image

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout source code
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Log in to DockerHub
        run: echo "${{ secrets.DOCKER_PASSWORD }}" | docker login -u "${{ secrets.DOCKER_USERNAME }}" --password-stdin

      - name: Build Docker image
        run: docker build -t ${{ secrets.DOCKER_USERNAME }}/phish-api:latest .

      - name: Push Docker image
        run: docker push ${{ secrets.DOCKER_USERNAME }}/phish-api:latest

      - name: Trigger Render Deploy Hook
        run: |
          curl -X POST ${{ secrets.RENDER_DEPLOY_HOOK }}

.gitignore

Lines changed: 5 additions & 0 deletions
drive/
.sample_data/
.config/
__pycache__/
*.pyc

Dockerfile

Lines changed: 19 additions & 0 deletions
# Base image
FROM python:3.10-slim

# Set the working directory
WORKDIR /app

# Install gcc (needed to build some wheels)
RUN apt-get update && apt-get install -y --no-install-recommends gcc && rm -rf /var/lib/apt/lists/*

# Copy and install dependencies
COPY requirements.txt .

RUN pip install --no-cache-dir -r requirements.txt

# Copy the full source tree
COPY . .

# Run FastAPI
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "5000"]
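
Assuming the model pickles are committed next to the Dockerfile (as in this repo), the image can be built and smoke-tested locally with something like the following; the `phish-api:dev` tag is illustrative, not part of the repo:

```shell
# Build the image from the repo root (uses the Dockerfile above)
docker build -t phish-api:dev .

# Run it, publishing the port uvicorn listens on
docker run --rm -p 5000:5000 phish-api:dev

# In another terminal, hit the health-check route defined in main.py
curl http://localhost:5000/
```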

main.py

Lines changed: 72 additions & 0 deletions
from fastapi import FastAPI, HTTPException, Query
import pickle
import pandas as pd

app = FastAPI(title="상권 분석 예측 API")

# Load the model and data once at server start-up
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

merged_model = pd.read_pickle("merged_model.pkl")

# Build the result text for a given district
def generate_result_template(행정동명):
    등급_텍스트 = {0: '하', 1: '중', 2: '상'}
    등급_info = {
        '상': {
            'desc': '매우 높음',
            'precision': '77%',
            'recommendations': ['카페', '헬스장', '미용실']
        },
        '중': {
            'desc': '보통',
            'precision': '62%',
            'recommendations': ['편의점', '분식집', '세탁소']
        },
        '하': {
            'desc': '낮음',
            'precision': '69%',
            'recommendations': ['중고매장', 'PC방', '호프집']
        }
    }

    filtered = merged_model[merged_model['행정동_코드_명'] == 행정동명].copy()
    if filtered.empty:
        return f"❌ '{행정동명}'에 대한 예측 결과가 없습니다."

    filtered['예측_등급'] = filtered['예측_등급'].map(등급_텍스트)
    result_summary = filtered[['서비스_업종_코드_명', '예측_등급']].drop_duplicates()
    top_grade = result_summary['예측_등급'].value_counts().idxmax()
    info = 등급_info[top_grade]
    reco = info['recommendations']

    output = f"""
🔍 '{행정동명}' 상권 분석 결과
예측 모델에 따르면 이 지역은 창업 적합도 등급 '{top_grade}' ({info['desc']})으로 분류되는 업종이 가장 많습니다.
이는 전국 상권 데이터를 바탕으로 유동인구, 매출 흐름, 폐업률 등의 지표를 종합 분석한 결과입니다.

- 모델 전체 정확도는 71%, '{top_grade}' 등급 예측의 정밀도는 약 {info['precision']}입니다.
- 분석 결과를 토대로, 이 지역에서는 다음과 같은 업종이 특히 적합한 업종군으로 추천됩니다:

✅ 추천 업종 TOP 3
1. {reco[0]}
2. {reco[1]}
3. {reco[2]}

📌 추천 업종은 유사 상권에서 높은 생존율과 매출 흐름을 보인 업종을 기반으로 도출됩니다.
"""
    return output

@app.get("/")
def health_check():
    return {"status": "ok", "message": "상권 분석 API가 실행 중입니다."}

@app.get("/predict")
def predict(dong_name: str = Query(..., description="예측할 행정동명 입력 예: 역삼1동")):
    try:
        result_text = generate_result_template(dong_name)
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
    return {"result": result_text}
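
The grade-summary step inside `generate_result_template` (map numeric grades to labels, then take the most frequent label via `value_counts().idxmax()`) can be mirrored without pandas. A minimal sketch with illustrative English labels; the service itself maps 0/1/2 to '하'/'중'/'상':

```python
from collections import Counter

# Illustrative stand-in for 등급_텍스트; the real mapping uses Korean labels.
grade_text = {0: 'low', 1: 'mid', 2: 'high'}

def top_grade(predicted_grades):
    # Map numeric predictions to labels, then take the most common one,
    # mirroring pandas' value_counts().idxmax()
    labels = [grade_text[g] for g in predicted_grades]
    return Counter(labels).most_common(1)[0][0]

print(top_grade([2, 2, 1, 0, 2]))  # → high
```

One difference worth noting: on ties, `Counter.most_common` and pandas' `idxmax` may break them in different orders, which is acceptable for a sketch like this.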

merged_model.pkl

18.4 MB
Binary file not shown.

model.pkl

1.43 MB
Binary file not shown.

requirements.txt

Lines changed: 8 additions & 0 deletions
fastapi
uvicorn
numpy
scikit-learn
pandas
lightgbm
pyngrok
requests

sample_data/README.md

Lines changed: 19 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,19 @@
1+
This directory includes a few sample datasets to get you started.
2+
3+
* `california_housing_data*.csv` is California housing data from the 1990 US
4+
Census; more information is available at:
5+
https://docs.google.com/document/d/e/2PACX-1vRhYtsvc5eOR2FWNCwaBiKL6suIOrxJig8LcSBbmCbyYsayia_DvPOOBlXZ4CAlQ5nlDD8kTaIDRwrN/pub
6+
7+
* `mnist_*.csv` is a small sample of the
8+
[MNIST database](https://en.wikipedia.org/wiki/MNIST_database), which is
9+
described at: http://yann.lecun.com/exdb/mnist/
10+
11+
* `anscombe.json` contains a copy of
12+
[Anscombe's quartet](https://en.wikipedia.org/wiki/Anscombe%27s_quartet); it
13+
was originally described in
14+
15+
Anscombe, F. J. (1973). 'Graphs in Statistical Analysis'. American
16+
Statistician. 27 (1): 17-21. JSTOR 2682899.
17+
18+
and our copy was prepared by the
19+
[vega_datasets library](https://github.com/altair-viz/vega_datasets/blob/4f67bdaad10f45e3549984e17e1b3088c731503d/vega_datasets/_data/anscombe.json).

sample_data/anscombe.json

Lines changed: 49 additions & 0 deletions
[
  {"Series":"I", "X":10.0, "Y":8.04},
  {"Series":"I", "X":8.0, "Y":6.95},
  {"Series":"I", "X":13.0, "Y":7.58},
  {"Series":"I", "X":9.0, "Y":8.81},
  {"Series":"I", "X":11.0, "Y":8.33},
  {"Series":"I", "X":14.0, "Y":9.96},
  {"Series":"I", "X":6.0, "Y":7.24},
  {"Series":"I", "X":4.0, "Y":4.26},
  {"Series":"I", "X":12.0, "Y":10.84},
  {"Series":"I", "X":7.0, "Y":4.81},
  {"Series":"I", "X":5.0, "Y":5.68},

  {"Series":"II", "X":10.0, "Y":9.14},
  {"Series":"II", "X":8.0, "Y":8.14},
  {"Series":"II", "X":13.0, "Y":8.74},
  {"Series":"II", "X":9.0, "Y":8.77},
  {"Series":"II", "X":11.0, "Y":9.26},
  {"Series":"II", "X":14.0, "Y":8.10},
  {"Series":"II", "X":6.0, "Y":6.13},
  {"Series":"II", "X":4.0, "Y":3.10},
  {"Series":"II", "X":12.0, "Y":9.13},
  {"Series":"II", "X":7.0, "Y":7.26},
  {"Series":"II", "X":5.0, "Y":4.74},

  {"Series":"III", "X":10.0, "Y":7.46},
  {"Series":"III", "X":8.0, "Y":6.77},
  {"Series":"III", "X":13.0, "Y":12.74},
  {"Series":"III", "X":9.0, "Y":7.11},
  {"Series":"III", "X":11.0, "Y":7.81},
  {"Series":"III", "X":14.0, "Y":8.84},
  {"Series":"III", "X":6.0, "Y":6.08},
  {"Series":"III", "X":4.0, "Y":5.39},
  {"Series":"III", "X":12.0, "Y":8.15},
  {"Series":"III", "X":7.0, "Y":6.42},
  {"Series":"III", "X":5.0, "Y":5.73},

  {"Series":"IV", "X":8.0, "Y":6.58},
  {"Series":"IV", "X":8.0, "Y":5.76},
  {"Series":"IV", "X":8.0, "Y":7.71},
  {"Series":"IV", "X":8.0, "Y":8.84},
  {"Series":"IV", "X":8.0, "Y":8.47},
  {"Series":"IV", "X":8.0, "Y":7.04},
  {"Series":"IV", "X":8.0, "Y":5.25},
  {"Series":"IV", "X":19.0, "Y":12.50},
  {"Series":"IV", "X":8.0, "Y":5.56},
  {"Series":"IV", "X":8.0, "Y":7.91},
  {"Series":"IV", "X":8.0, "Y":6.89}
]
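
As a quick illustration of why this dataset is notable, the first two series (values copied from the JSON above) share the same X values and nearly identical Y means and variances, despite looking completely different when plotted. A stdlib-only check:

```python
import statistics

# X is identical for series I-III; Y values copied from anscombe.json
x = [10.0, 8.0, 13.0, 9.0, 11.0, 14.0, 6.0, 4.0, 12.0, 7.0, 5.0]
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.81, 5.68]
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]

print(statistics.mean(x))             # 9.0
print(round(statistics.mean(y1), 2))  # 7.5
print(round(statistics.mean(y2), 2))  # 7.5

# Sample variances of Y agree to one decimal place as well
print(round(statistics.variance(y1), 1), round(statistics.variance(y2), 1))
```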
