* Refactor GraphQL client usage in services to improve consistency
  - Replaced direct calls to `getClientWithToken` with a new method `getGraphQLClient` across multiple services, ensuring a unified approach to obtaining the GraphQL client.
  - Updated the `BaseService` to manage the GraphQL client instance, enhancing performance by reusing the client when available.

* Refactor UpdateProfile component to use useClientOnce for initial profile update
  - Replaced useEffect with `useClientOnce` to handle the first-login profile update more efficiently.
  - Removed local state management for `hasUpdated`, simplifying the component logic.
  - Updated localStorage handling to ensure the profile update occurs only on the first login.

* Refactor Apollo Client setup to improve modularity and maintainability
  - Introduced a new `createLink` function to encapsulate the link-creation logic for Apollo Client.
  - Updated `createApolloClient` to utilize the new `createLink` function, enhancing code organization.
  - Simplified the handling of authorization headers by moving it to the link configuration.

* Add cache-proxy application with initial setup and configuration
  - Created a new cache-proxy application using NestJS, including essential files such as the Dockerfile, .gitignore, and .eslintrc.js.
  - Implemented the core application structure with AppModule, ProxyModule, and ProxyController for handling GraphQL requests.
  - Configured caching with Redis and established environment variable management using Zod for validation.
  - Added utility functions for query handling and time management.
  - Included a README.md for project documentation and setup instructions.

* Add cache-proxy service to Docker configurations and update deployment workflow
  - Introduced a new cache-proxy service in both `docker-compose.dev.yml` and `docker-compose.yml`, with a dependency on Redis and integration into the web and bot services.
  - Updated the GitHub Actions workflow to include build and push steps for the cache-proxy image, ensuring it is deployed alongside the web and bot services.
  - Modified environment variable management to accommodate the cache-proxy.
  - Adjusted the GraphQL cached URL to point to the cache-proxy service for improved request handling.

* Add health check endpoint and controller to cache-proxy service
  - Implemented a new HealthController in the cache-proxy application, providing a health check endpoint at `/api/health` that returns a simple status response.
  - Updated the AppModule to register the HealthController within the application.
  - Configured a health check for the cache-proxy service in the Docker Compose file, enabling automated health monitoring.

* Update proxy controller and environment variable for cache-proxy service
  - Changed the route prefix of the ProxyController from `/proxy` to `/api` to align with the new API structure.
  - Updated the default value of the `URL_GRAPHQL_CACHED` environment variable to reflect the new route, ensuring proper integration with the GraphQL service.

* Update cache-proxy configuration for query time-to-live settings
  - Increased the time-to-live for the `GetCustomer`, `GetOrder`, `GetService`, and `GetSlot` queries to 24 hours, improving cache efficiency.
  - Kept the existing 12-hour setting for `GetSubscriptions`.

* Enhance subscription management and configuration settings
  - Added new query time-to-live settings for `GetSlotsOrders`, `GetSubscriptionPrices`, and `GetSubscriptions` to improve the caching strategy.
  - Implemented a `hasTrialSubscription` method in `SubscriptionsService` to check for trial subscriptions based on user history.
  - Updated GraphQL operations to reflect the rename of `getSubscriptionSettings` to `GetSubscriptionSettings`, ensuring consistent naming conventions.
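The per-query time-to-live settings above could be sketched as a simple lookup the cache-proxy consults before writing to Redis. This is a hypothetical sketch: the TTL values come from the changelog, but the `QUERY_TTL_SECONDS` map and `ttlForQuery` helper are assumed names, not the project's actual code.

```typescript
// Hypothetical TTL lookup for the cache-proxy. TTL values follow the
// changelog; the map and helper names are illustrative assumptions.
const QUERY_TTL_SECONDS: Record<string, number> = {
  GetCustomer: 24 * 60 * 60, // 24 hours
  GetOrder: 24 * 60 * 60,
  GetService: 24 * 60 * 60,
  GetSlot: 24 * 60 * 60,
  GetSubscriptions: 12 * 60 * 60, // unchanged at 12 hours
};

// 0 means "do not cache" for operations without an explicit setting.
const DEFAULT_TTL_SECONDS = 0;

// Resolve the Redis TTL for a GraphQL operation by its operation name.
export function ttlForQuery(operationName: string): number {
  return QUERY_TTL_SECONDS[operationName] ?? DEFAULT_TTL_SECONDS;
}
```

Keying the cache by operation name keeps the policy in one place, so a TTL change (like the 12h to 24h bump above) is a one-line config edit.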
* fix build

* Refactor subscription settings naming
  - Updated the naming of `getSubscriptionSettings` to `GetSubscriptionSettings` in the cache-proxy configuration for consistency with other settings.
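The `BaseService` change in the first commit, reusing one GraphQL client instead of constructing one per call, could look roughly like this. The class shape and the `createClient` factory are assumptions for illustration; only the `getGraphQLClient` method name comes from the changelog.

```typescript
// Minimal stand-in for the real GraphQL client interface (assumed shape).
type GraphQLClient = { request: (query: string) => Promise<unknown> };

// Hypothetical sketch of the BaseService refactor: subclasses call
// getGraphQLClient() and the first call lazily creates the client,
// which is then reused for every subsequent call.
export class BaseService {
  private client?: GraphQLClient;

  constructor(private readonly createClient: () => GraphQLClient) {}

  protected getGraphQLClient(): GraphQLClient {
    if (!this.client) {
      this.client = this.createClient();
    }
    return this.client;
  }
}
```

With this shape, swapping `getClientWithToken` call sites for `getGraphQLClient()` centralizes client construction and avoids re-authenticating on every request.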
name: Build & Deploy Web, Bot & Cache Proxy

on:
  push:
    branches:
      - main

jobs:
  build-and-push:
    name: Build and Push to Docker Hub
    runs-on: ubuntu-latest
    outputs:
      web_tag: ${{ steps.vars.outputs.web_tag }}
      bot_tag: ${{ steps.vars.outputs.bot_tag }}
      cache_proxy_tag: ${{ steps.vars.outputs.cache_proxy_tag }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Create fake .env file for build
        run: |
          echo "BOT_TOKEN=fake" > .env
          echo "LOGIN_GRAPHQL=fake" >> .env
          echo "PASSWORD_GRAPHQL=fake" >> .env
          echo "URL_GRAPHQL=http://localhost/graphql" >> .env
          echo "EMAIL_GRAPHQL=fake@example.com" >> .env
          echo "NEXTAUTH_SECRET=fakesecret" >> .env
          echo "BOT_URL=http://localhost:3000" >> .env
          echo "REDIS_PASSWORD=fake" >> .env
          echo "BOT_PROVIDER_TOKEN=fake" >> .env

      - name: Set image tags
        id: vars
        run: |
          echo "web_tag=web-${GITHUB_SHA::7}" >> $GITHUB_OUTPUT
          echo "bot_tag=bot-${GITHUB_SHA::7}" >> $GITHUB_OUTPUT
          echo "cache_proxy_tag=cache-proxy-${GITHUB_SHA::7}" >> $GITHUB_OUTPUT

      - name: Login to Docker Hub
        run: echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u "${{ secrets.DOCKERHUB_USERNAME }}" --password-stdin

      - name: Build web image
        run: |
          docker build -t ${{ secrets.DOCKERHUB_USERNAME }}/zapishis-web:${{ steps.vars.outputs.web_tag }} -f ./apps/web/Dockerfile .

      - name: Push web image to Docker Hub
        run: |
          docker push ${{ secrets.DOCKERHUB_USERNAME }}/zapishis-web:${{ steps.vars.outputs.web_tag }}

      - name: Build bot image
        run: |
          docker build -t ${{ secrets.DOCKERHUB_USERNAME }}/zapishis-bot:${{ steps.vars.outputs.bot_tag }} -f ./apps/bot/Dockerfile .

      - name: Push bot image to Docker Hub
        run: |
          docker push ${{ secrets.DOCKERHUB_USERNAME }}/zapishis-bot:${{ steps.vars.outputs.bot_tag }}

      - name: Build cache-proxy image
        run: |
          docker build -t ${{ secrets.DOCKERHUB_USERNAME }}/zapishis-cache-proxy:${{ steps.vars.outputs.cache_proxy_tag }} -f ./apps/cache-proxy/Dockerfile .

      - name: Push cache-proxy image to Docker Hub
        run: |
          docker push ${{ secrets.DOCKERHUB_USERNAME }}/zapishis-cache-proxy:${{ steps.vars.outputs.cache_proxy_tag }}

  deploy:
    name: Deploy to VPS
    needs: build-and-push
    runs-on: ubuntu-latest
    environment: production
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Setup SSH key
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.VPS_SSH_KEY }}" > ~/.ssh/id_rsa
          chmod 600 ~/.ssh/id_rsa
          ssh-keyscan -p ${{ secrets.VPS_PORT }} -H ${{ secrets.VPS_HOST }} >> ~/.ssh/known_hosts

      - name: Ensure zapishis directory exists on VPS
        run: |
          ssh -i ~/.ssh/id_rsa -p ${{ secrets.VPS_PORT }} -o StrictHostKeyChecking=no ${{ secrets.VPS_USER }}@${{ secrets.VPS_HOST }} "mkdir -p /home/${{ secrets.VPS_USER }}/zapishis"

      - name: Create real .env file for production
        run: |
          echo "BOT_TOKEN=${{ secrets.BOT_TOKEN }}" > .env
          echo "LOGIN_GRAPHQL=${{ secrets.LOGIN_GRAPHQL }}" >> .env
          echo "PASSWORD_GRAPHQL=${{ secrets.PASSWORD_GRAPHQL }}" >> .env
          echo "URL_GRAPHQL=${{ secrets.URL_GRAPHQL }}" >> .env
          echo "EMAIL_GRAPHQL=${{ secrets.EMAIL_GRAPHQL }}" >> .env
          echo "NEXTAUTH_SECRET=${{ secrets.NEXTAUTH_SECRET }}" >> .env
          echo "BOT_URL=${{ secrets.BOT_URL }}" >> .env
          echo "WEB_IMAGE_TAG=${{ needs.build-and-push.outputs.web_tag }}" >> .env
          echo "BOT_IMAGE_TAG=${{ needs.build-and-push.outputs.bot_tag }}" >> .env
          echo "CACHE_PROXY_IMAGE_TAG=${{ needs.build-and-push.outputs.cache_proxy_tag }}" >> .env
          echo "DOCKERHUB_USERNAME=${{ secrets.DOCKERHUB_USERNAME }}" >> .env
          echo "REDIS_PASSWORD=${{ secrets.REDIS_PASSWORD }}" >> .env
          echo "BOT_PROVIDER_TOKEN=${{ secrets.BOT_PROVIDER_TOKEN }}" >> .env

      - name: Copy .env to VPS via SCP
        uses: appleboy/scp-action@master
        with:
          host: ${{ secrets.VPS_HOST }}
          username: ${{ secrets.VPS_USER }}
          key: ${{ secrets.VPS_SSH_KEY }}
          port: ${{ secrets.VPS_PORT }}
          source: '.env'
          target: '/home/${{ secrets.VPS_USER }}/zapishis/'

      - name: Copy docker-compose.yml to VPS via SCP
        uses: appleboy/scp-action@master
        with:
          host: ${{ secrets.VPS_HOST }}
          username: ${{ secrets.VPS_USER }}
          key: ${{ secrets.VPS_SSH_KEY }}
          port: ${{ secrets.VPS_PORT }}
          source: 'docker-compose.yml'
          target: '/home/${{ secrets.VPS_USER }}/zapishis/'

      - name: Login and deploy on VPS
        run: |
          ssh -i ~/.ssh/id_rsa -p ${{ secrets.VPS_PORT }} -o StrictHostKeyChecking=no ${{ secrets.VPS_USER }}@${{ secrets.VPS_HOST }} "
            cd /home/${{ secrets.VPS_USER }}/zapishis && \
            docker login -u ${{ secrets.DOCKERHUB_USERNAME }} -p ${{ secrets.DOCKERHUB_TOKEN }} && \
            docker compose pull && \
            docker compose down && \
            docker compose up -d
          "
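The `Set image tags` step above derives one tag per service from the first seven characters of the commit SHA via the shell expansion `${GITHUB_SHA::7}`. The same derivation, as a hypothetical helper for illustration (the `imageTags` function is not part of the repository):

```typescript
// Mirror of the workflow's "Set image tags" step: each service gets a
// tag of the form "<service>-<short sha>", where the short SHA is the
// first 7 characters of the full commit SHA.
export function imageTags(sha: string): {
  web: string;
  bot: string;
  cacheProxy: string;
} {
  const short = sha.slice(0, 7);
  return {
    web: `web-${short}`,
    bot: `bot-${short}`,
    cacheProxy: `cache-proxy-${short}`,
  };
}
```

Because all three tags share one SHA, the deploy job can pin `WEB_IMAGE_TAG`, `BOT_IMAGE_TAG`, and `CACHE_PROXY_IMAGE_TAG` to images built from the same commit.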