<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Nexitor Blog]]></title><description><![CDATA[The beauty of software development lies in the ability to bring ideas to life]]></description><link>https://blog.nexitor.io/</link><image><url>https://blog.nexitor.io/favicon.png</url><title>Nexitor Blog</title><link>https://blog.nexitor.io/</link></image><generator>Ghost 5.26</generator><lastBuildDate>Fri, 17 Apr 2026 04:35:26 GMT</lastBuildDate><atom:link href="https://blog.nexitor.io/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Evolution of File Handling - What I Have Learned and When to Use S3]]></title><description><![CDATA[<p>Storing files and images has come a long way. I went through every level of storing and serving files in applications myself; here&apos;s what I have learned.</p><p>It&apos;s important to ask yourself: &quot;How are the users in my application going to interact with files?&quot;</p>]]></description><link>https://blog.nexitor.io/when-should-you-use-s3-to-store-files-and-images/</link><guid isPermaLink="false">6536b85aadd3120001a65732</guid><category><![CDATA[file]]></category><category><![CDATA[storage]]></category><category><![CDATA[deployment]]></category><category><![CDATA[Developers]]></category><category><![CDATA[docker]]></category><category><![CDATA[s3]]></category><category><![CDATA[database]]></category><category><![CDATA[how to]]></category><dc:creator><![CDATA[Tarik]]></dc:creator><pubDate>Thu, 16 Nov 2023 00:28:04 GMT</pubDate><media:content url="https://blog.nexitor.io/content/images/2023/11/28043-min.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.nexitor.io/content/images/2023/11/28043-min.jpg" alt="Evolution of File Handling - What I Have Learned and When to Use S3"><p>Storing files and images has come a long way. I went through every level of storing and serving files in applications myself; here&apos;s what I have learned.</p><p>It&apos;s important to ask yourself: &quot;How are the users in my application going to interact with files?&quot; to see which file-handling approach serves you best. We will talk about specific cases later; first, let&apos;s dive into the various methods you can use to handle and provide files to your users, and what their upsides and downsides are.</p><h3 id="1-the-database"><strong>1. The Database</strong></h3><p>When a developer starts out, their first instinct is: &quot;Everything that needs to be persisted has to be in a database.&quot; That is technically correct most of the time, and in this case it actually works. You could convert your files to base64, or store them directly as binary in a database table if the database supports binary persistence. After that you can retrieve those files and serve them to your users.</p><p>The <strong>benefits</strong> of this method are that you don&apos;t have to care much about file handling, and your files are included whenever you back up your database (which I hope you are doing regularly, by the way). It&apos;s also more portable, since you only have to care about your database instead of a database plus a file storage.</p><p>This is, however, also one of the <strong>downsides</strong>. Imagine you are doing a backup once every 24 hours. With 10 GB of files hosted inside your database, your backups take longer, and your backup server hits its limits faster, or you have to pay for more storage. Have you ever tried restoring a very large old database backup to production? It sucks. Additionally, performance lags behind, since you have to pull large amounts of data into your backend. Are your files larger than about 256 KB? Then a file storage is more performant.
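A minimal sketch of the database approach above, using SQLite as a stand-in for whatever database you run (the table and column names are made up for illustration):

```python
import sqlite3

def save_file(conn, name, data):
    # Store the raw bytes in a BLOB column; base64 is not needed
    # when the database supports binary persistence.
    conn.execute("INSERT INTO files (name, data) VALUES (?, ?)", (name, data))
    conn.commit()

def load_file(conn, name):
    # Return the stored bytes, or None if no such file exists.
    row = conn.execute("SELECT data FROM files WHERE name = ?", (name,)).fetchone()
    return row[0] if row else None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (name TEXT PRIMARY KEY, data BLOB)")
save_file(conn, "avatar.png", b"\x89PNG\r\n fake image bytes")
assert load_file(conn, "avatar.png").startswith(b"\x89PNG")
```

For small assets like profile pictures, two functions like these are genuinely all you need.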
Jim Gray wrote about this exact issue in <a href="https://www.microsoft.com/en-us/research/publication/to-blob-or-not-to-blob-large-object-storage-in-a-database-or-a-filesystem/?from=https://research.microsoft.com/apps/pubs/default.aspx?id=64525&amp;type=exact">To BLOB or Not To BLOB</a>.</p><p>If you are only saving users&apos; profile pictures, just use the database; it&apos;s the most convenient option in this case.</p><h3 id="2-the-file-system"><strong>2. The File System</strong></h3><p>Okay, so let&apos;s say you have a system where users can upload files in their dashboards. For this case it&apos;s not wise to use a database, as we have learned above, since we are dealing with many large files. You could instead use your server&apos;s file system, because that&apos;s where files are stored, right? We also learned in our first years how to interact with the file system from our code. So we build an endpoint in our service and two functions to save and retrieve files.</p><p>The <strong>upside</strong> of this method is that we keep a clean database with just pointers to the filenames, and we have solved all of the database&apos;s downsides.</p><p>The <strong>downsides</strong>, however, are a bit more serious. Let&apos;s start with backups: you would have to write some kind of script to pack all files in the storage folder into an archive, like a zip, and move it somewhere safe. Now imagine you want to scale your backend. Usually you are running your service in containers, and for every container to have access to the folder you are storing your images in, you would have to mount or link it into each one. You see where this is going; it takes a fair bit of work to make it all run smoothly.</p><h3 id="3-the-encapsuled-sftp-user"><strong>3. The Encapsulated SFTP User</strong></h3><p>To solve the above-mentioned issues, we are going to use a separate machine to store and retrieve files from.
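Before we move to that machine: the plain file-system method from section 2 boils down to two small functions. Here is a sketch, assuming a made-up local uploads folder:

```python
import os

UPLOAD_DIR = "uploads"  # hypothetical folder on the server's disk

def save_file(filename, data):
    os.makedirs(UPLOAD_DIR, exist_ok=True)
    # basename() strips any directory parts, so a malicious
    # "../../etc/passwd" cannot escape the upload folder.
    path = os.path.join(UPLOAD_DIR, os.path.basename(filename))
    with open(path, "wb") as f:
        f.write(data)
    return path  # the database only stores this pointer

def load_file(filename):
    path = os.path.join(UPLOAD_DIR, os.path.basename(filename))
    with open(path, "rb") as f:
        return f.read()
```

The database row then holds only the returned path, never the bytes themselves.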
We will also go as far as using SFTP instead of FTP, with a dedicated user who has only read and write access to the file-storage folder on that machine, to increase our security. We then write two functions in our backend code that use an SFTP connection to store and retrieve files with that user&apos;s credentials.</p><p>The upsides of this are that backups become easier, since we have a dedicated machine for our files, and, more importantly, we can now scale without any weird issues. We can also run nginx in front of a separate folder, if we want, to serve public files directly via a simple GET.</p><p>The big downside, however, is that it&apos;s simply a great hassle to set up. No one wants to go through all of the steps needed to build the above solution. I did, however, because I started at the very beginning and worked my way through every method back in the day. Scaling and providing fast streams for different zones is also going to be a challenge, though one you will only face once you are playing in the big leagues.</p><h3 id="4-s3-minio"><strong>4. S3 &amp; MinIO</strong></h3><p>Now we are going to play with the big boys. The industry standard is object storage such as S3, MinIO, and other high-performance file-storage services. Usually you just deploy your container and it&apos;s ready to go. In MinIO, for example, you can create a bucket with a custom policy that makes the files inside the bucket publicly accessible while the bucket listing is not. Then you run a web server in front of it and voil&#xE0;, you have a simple public image file server.</p><p>There are plenty of benefits: easy scalability, lots of integrations for your backend, a UI dashboard for easy configuration and browsing of files, and simpler backups. You can self-host your container or use a managed host such as Amazon S3 that does the scaling for you.
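The bucket policy described above can be expressed like this (MinIO understands AWS-style S3 policies; the bucket name public-images is made up). It allows anonymous s3:GetObject on the objects but grants no s3:ListBucket, so individual files are public while the index is not:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": ["*"] },
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::public-images/*"]
    }
  ]
}
```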
It offers easier handling overall and solves all of the issues we ran into above.</p><h3 id="thank-you"><strong>Thank you!</strong></h3><p>You did it: you have learned how the industry handles files today and how we handled them back in the day. I hope you enjoyed reading! Want to read more like this? Just subscribe with your email and you will be notified. I don&apos;t write much, so it won&apos;t spam you, I promise!</p>]]></content:encoded></item><item><title><![CDATA[Automate your Deployments with Docker & Watchtower]]></title><description><![CDATA[Guide: how to construct your pipeline so your Docker applications deploy fully automatically to your servers.]]></description><link>https://blog.nexitor.io/automate-your-deployments-with-docker-and-watchtower/</link><guid isPermaLink="false">63b760d8ce31230001a25a86</guid><category><![CDATA[deployment]]></category><category><![CDATA[docker]]></category><category><![CDATA[watchtower]]></category><category><![CDATA[how to]]></category><category><![CDATA[easy]]></category><category><![CDATA[pipeline]]></category><category><![CDATA[simple]]></category><dc:creator><![CDATA[Tarik]]></dc:creator><pubDate>Fri, 06 Jan 2023 20:54:15 GMT</pubDate><media:content url="https://blog.nexitor.io/content/images/2023/01/467ba2a0a0719c6816f44c0daa3ca08e-min--1--1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.nexitor.io/content/images/2023/01/467ba2a0a0719c6816f44c0daa3ca08e-min--1--1.jpg" alt="Automate your Deployments with Docker &amp; Watchtower"><p>I have had my fair share of weird and overcomplicated, but secure, deployment processes. You don&apos;t want to say it, but sometimes complication makes things secure, <em>no pun intended</em>. Nevertheless, I have found my personal favorite, which I want to share with you, with examples.
It usually goes like this:<br><br><strong>Have a dockerizable application and a private Docker registry running.</strong></p><p>This deployment method is, in my opinion, one of the cleanest and easiest I have ever used. I won&apos;t go into the details of dockerizing your application or running a private Docker registry, as both are well documented. But I will go into the details of setting up your pipeline for GitLab and GitHub (you can use any pipeline you want, of course) and how to set up Watchtower. So let&apos;s get going!</p><h2 id="the-pipeline">The Pipeline</h2><p>You would want your pipeline to do 3 things:</p><ol><li>Build your application (not needed for all applications)</li><li>Dockerize and build your image</li><li>Connect and push the image to your private Docker registry</li></ol><p>For your GitLab project it would look something like this:</p><pre><code class="language-Gitlab Pipeline">image: docker:latest

variables:
  IMAGE_NAME: registry.nexitor.io/my.image:$CI_COMMIT_REF_NAME

services:
  - docker:dind

BuildAndDeploy:
  script:
    - docker login registry.nexitor.io -u $REG_USER -p $REG_PASS
    - docker build -t $IMAGE_NAME .
    - docker push ${IMAGE_NAME}
  only:
    - staging
    - production
</code></pre><p>You can of course adjust the branches on which it deploys; in this case we can run several instances of the image, differentiated by their tags. You can also set the names dynamically, just like we do here with the tag; it&apos;s all up to you. Remember to set your project secrets in order to access them in your pipeline.</p><p>Let&apos;s go into an example with a Node project that you want to build outside of Docker; we will use GitHub Actions for that. This produces a little more pipeline code, since we do all three steps in the pipeline:</p><pre><code class="language-github">name: Build Npm, Build Docker and push

on:
  push:
    branches: [ &quot;master&quot; ]

jobs:

  build_npm:
    runs-on: ubuntu-latest

    strategy:
      matrix:
        node-version: [14.x]

    steps:
    - uses: actions/checkout@v3
    - name: Use Node.js ${{ matrix.node-version }}
      uses: actions/setup-node@v3
      with:
        node-version: ${{ matrix.node-version }}
    - run: npm install    
    - run: npm run build
    -
      name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v2
    - 
      name: Login to private registry
      uses: docker/login-action@v2
      with:
        registry: registry.nexitor.io
        username: ${{ secrets.REGISTRY_USERNAME }}
        password: ${{ secrets.REGISTRY_PASSWORD }}
    -
      name: Build and push
      uses: docker/build-push-action@v3
      with:
        context: .
        file: Dockerfile
        tags: registry.nexitor.io/my.image
        push: true</code></pre><h2 id="watchtower">Watchtower</h2><p>Now that we can automatically build and push our image to our private registry with every git push (or hg push... Mercurial... <em>cough, any Mercurial fans out there?</em>)...</p><p>Anyway, I drifted off a bit there. Now comes the big question: how do we run, or rather replace, the currently running container with the new one?</p><p>That&apos;s where Watchtower comes into the game. It&apos;s a tool that monitors your containers, checks for updated image versions, and restarts (actually replaces) the currently running container with the new image, keeping the same configuration (ports, envs, ...).</p><p>To run a local Watchtower instance, just run the following Docker Compose file:</p><pre><code class="language-Watchtower">version: &apos;3&apos;

services:
  watchtower:
    container_name: watchtower
    image: containrrr/watchtower
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WATCHTOWER_CLEANUP=true
      - WATCHTOWER_LABEL_ENABLE=true
      - WATCHTOWER_INCLUDE_RESTARTING=true
      - REPO_USER=tom_likes_jerry
      - REPO_PASS=hunter123
      - WATCHTOWER_INCLUDE_STOPPED=true
    command: --interval 30</code></pre><p>As you can see, we also want it to connect to our private registry when an image from that host is used; that&apos;s why we pass it the credentials. We also set the check interval to 30, so it checks every running container that has the Watchtower label every 30 seconds for an updated image.</p><p>For Watchtower to watch your running containers, you need to label them as follows:</p><pre><code class="language-Docker">sudo docker run -d --name nexitor.image --label=&quot;com.centurylinklabs.watchtower.enable=true&quot; registry.nexitor.io/my.image</code></pre><p>You can also turn the label requirement off and check every running container, but I do <strong>not</strong> recommend that, as you don&apos;t want random services getting restarted with different versions. To deactivate it, set the corresponding flag <strong>WATCHTOWER_LABEL_ENABLE</strong> to false.</p><h2 id="tadaaa">Tadaaa!</h2><p>Now you have your own fully automated deployment pipeline for your private services. I hope I could help you, and maybe you also gained some knowledge. If you like my writing and want to read more, please consider subscribing, and I will notify you when I release a new post! I love sharing knowledge and discussing topics, so if there are topics you wish to know more about, reach out to me! Thank you so much for reading this far!</p>]]></content:encoded></item></channel></rss>