Blog

  • Persistent tmux sessions

    Persistent tmux sessions

How to start a tmux session after logging in to a server over SSH, so that if the connection is lost you can log back in and return to the same tmux session.

    Short answer (manual):

    1. SSH into the server: ssh user@server
    2. Start a named tmux session: tmux new -s mysession
    3. Work inside tmux. If your SSH connection drops, reconnect and reattach: ssh user@server, then tmux attach -t mysession (or do both in one step with the one-liner shown after this list).
    4. Useful commands:
      • List sessions: tmux ls
      • Detach from inside tmux: Ctrl-b then d
      • Create or attach in one step: tmux new -As mysession
      • Kill a session: tmux kill-session -t mysession
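    A single command that reconnects and attaches (or creates the session if it is gone); the -t flag makes ssh allocate the terminal that tmux needs:

    ssh -t user@server tmux new -As mysession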

    Automatic attach/create on login:

    Put this in your ~/.bash_profile (or ~/.profile) so SSH logins automatically attach to (or create) a session named “main”:

    if [[ $- == *i* ]] && [ -n "$SSH_CONNECTION" ] && [ -z "$TMUX" ]; then
      exec tmux new -As main
    fi
    

    Explanation of the checks:

    • [[ $- == *i* ]] ensures the shell is interactive.
    • [ -n "$SSH_CONNECTION" ] ensures this runs only for SSH logins (not local scripts).
    • [ -z "$TMUX" ] avoids nesting tmux if you’re already inside one.
    • exec replaces your login shell with tmux (so Ctrl-d exits the session as expected).

    Caveats / tips:

    • Install tmux if it’s not present (apt/yum/pacman/etc).
    • Using tmux new -As name is handy: it attaches if the session exists, otherwise creates it.
    • If you want the option not to start tmux on some SSH connections, either remove the auto-start or use a different login method.
    • Sessions persist on the server when your client disconnects; reattaching resumes the same environment (processes continue running unless they depend on a client-held resource).

    That’s all you need to reliably get back to the same tmux session after reconnecting.

    Long answer (auto-resume optional):

    • a recommended automatic login snippet for bash/zsh
    • a safer variant that avoids interfering with scp/rsync or non-interactive sessions
    • a systemd-user approach to keep a tmux session running even when no one is logged in
    • tips, caveats and helpful extras
    1. Automatic attach/create on SSH login — simple solution
    Add this to the end of your ~/.bash_profile (or ~/.profile, or ~/.zprofile for zsh). It will exec tmux so your login shell is replaced by tmux (Ctrl-D will then log you out):
    if [[ $- == *i* ]] && [ -n "$SSH_CONNECTION" ] && [ -z "$TMUX" ]; then
      exec tmux new -As main
    fi
    

    What the checks do:

    • [[ $- == *i* ]] — run only for interactive shells
    • [ -n "$SSH_CONNECTION" ] — run only for SSH sessions (not local)
    • [ -z "$TMUX" ] — don’t start tmux if you’re already inside one
    • exec replaces the shell with tmux (so closing tmux exits the session cleanly)
    2. Safer automatic startup (recommended)
    The simple snippet above is fine for many, but it can break non-interactive SSH uses (scp, rsync, git over SSH) and other tools. Use a slightly stricter check to run only when logging into a TTY and not during scp/rsync:
    # Put this in ~/.bash_profile or ~/.profile
    if [[ $- == *i* ]] && [ -n "$SSH_CONNECTION" ] && [ -z "$TMUX" ] && [ -t 1 ]; then
      # Choose a session name; optionally include hostname
      session="main-$(hostname -s)"
      exec tmux new -As "$session"
    fi
    

    Notes:

    • [ -t 1 ] ensures stdout is a TTY (avoids running for scp/rsync/git that use no TTY)
    • Using hostname in the session name lets you use the same dotfiles across multiple servers and keep sessions distinct
    3. Variant: offer choice instead of automatically replacing shell
    If you prefer to be prompted (or want an option to bypass tmux), you can use this approach:
    if [[ $- == *i* ]] && [ -n "$SSH_CONNECTION" ] && [ -z "$TMUX" ] && [ -t 1 ]; then
      session="main-$(hostname -s)"
      echo "Attaching to tmux session '$session' (press Ctrl-C to skip)"
      sleep 1
      tmux new -As "$session"
    fi
    

    This runs tmux without exec: pressing Ctrl-C during the one-second pause skips tmux entirely, and detaching from (or exiting) tmux later drops you back to the normal shell.

    4. systemd user service (start tmux at boot / independent of login)
    If you want a tmux session to exist even when nobody is logged in (useful for long-running daemons you manage interactively), create a systemd --user unit that starts a tmux session for your user on boot. Example (~/.config/systemd/user/tmux@.service):
    [Unit]
    Description=tmux session %i
    
    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/usr/bin/env tmux new-session -d -s %i
    ExecStop=/usr/bin/env tmux kill-session -t %i
    
    [Install]
    WantedBy=default.target
    

    Enable / start: systemctl --user enable --now tmux@main.service

    This creates a detached tmux session named “main” that persists while the system/user systemd is running. You can then attach from any SSH session with tmux attach -t main. This is a different model (tmux session lives independent of your shells).
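    One caveat: a systemd --user instance normally runs only while you have an active login session. If you want the unit (and its tmux session) to start at boot and survive logouts, enable lingering for your user first:

    loginctl enable-linger "$USER"
    systemctl --user enable --now tmux@main.service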

    5. Extra useful tips and tools
    • Use tmux-resurrect / tmux-continuum to save/restore layout, pane commands and scrollback across reboots.
    • If you want multiple named sessions (per project), change session name logic in your login script (e.g., derive name from working directory).
    • Avoid nesting tmux: always check $TMUX.
    • If you use zsh, put the snippet in ~/.zprofile or ~/.zshrc with equivalent conditions.
    • To reattach manually: tmux attach -t main or tmux new -As main
    • To list sessions: tmux ls
    • Make sure tmux is installed: apt install tmux (Debian/Ubuntu), dnf install tmux, pacman -S tmux, etc.
    6. Caveats and troubleshooting
    • exec tmux replaces the login shell. That’s desirable for an always-on workflow, but if you rely on running commands automatically from .bash_profile or .bashrc for non-interactive tasks, verify behavior first.
    • If you need scp/git-over-ssh to continue to work, ensure [ -t 1 ] or similar check so automatic tmux startup doesn’t run for non-interactive connections.
    • If two people attach to the same session simultaneously, they will share the same terminal. This is normal and useful for pair-editing but can be surprising.
    • If your tmux session dies unexpectedly on disconnect, inspect server logs and tmux logs. Usually processes in tmux continue unless they get SIGHUP from some external cause.
    7. Example: final recommended ~/.bash_profile snippet
    Here’s a single snippet to copy-paste into ~/.bash_profile; it is safe for most environments:
    # Auto-attach to tmux on SSH interactive login
    if [[ $- == *i* ]] && [ -n "$SSH_CONNECTION" ] && [ -z "$TMUX" ] && [ -t 1 ]; then
      session="main-$(hostname -s)"
      # Use exec to replace login shell; remove exec if you want to be able to Ctrl-C out
      exec tmux new -As "$session"
    fi
    

    Snippets for bash, zsh and fish with tmux to start only sometimes

    Below are ready-to-copy snippets for bash, zsh and fish that will offer to attach/create a tmux session on SSH interactive logins — but won’t force it every time. Each snippet:

    • runs only on interactive SSH logins
    • avoids starting tmux if already inside one
    • checks for a TTY (so scp/rsync/git-over-ssh won’t be affected)
    • prompts (with a short timeout) so you can skip easily
    • can be disabled entirely by setting NO_AUTO_TMUX in your environment

    Bash (put in ~/.bash_profile or ~/.profile)

    # Offer to attach/create tmux on interactive SSH login (bash)
    if [[ $- == *i* ]] && [ -n "$SSH_CONNECTION" ] && [ -z "$TMUX" ] && [ -t 1 ] && [ -z "$NO_AUTO_TMUX" ]; then
      session="main-$(hostname -s)"
      printf "Attach to tmux session '%s'? [Y/n] (auto Y in 5s) " "$session"
      # read one char with 5s timeout
      read -t 5 -n 1 answer
      printf "\n"
      answer=${answer:-Y}
      case "$answer" in
        [Yy]) tmux new -As "$session" ;;
        *) ;;  # skip
      esac
    fi
    

    Zsh (put in ~/.zprofile or ~/.zshrc; ~/.zprofile for login shells)

    # Offer to attach/create tmux on interactive SSH login (zsh)
    if [[ -o interactive ]] && [ -n "$SSH_CONNECTION" ] && [ -z "$TMUX" ] && [ -t 1 ] && [ -z "$NO_AUTO_TMUX" ]; then
      session="main-$(hostname -s)"
      printf "Attach to tmux session '%s'? [Y/n] (auto Y in 5s) " "$session"
      # zsh: read one key with timeout
      read -k 1 -t 5 answer
      printf "\n"
      answer=${answer:-Y}
      case "$answer" in
        [Yy]) tmux new -As "$session" ;;
        *) ;;  # skip
      esac
    fi
    

    Fish (put in ~/.config/fish/config.fish)

    # Offer to attach/create tmux on interactive SSH login (fish)
    if status --is-interactive; and set -q SSH_CONNECTION; and not set -q TMUX; and test -t 1; and not set -q NO_AUTO_TMUX
      set session main-(hostname -s)
      printf "Attach to tmux session '%s'? [Y/n] (auto Y in 5s) " $session
      # read one char with timeout (-t 5) and max chars (-n 1)
      read -t 5 -n 1 answer
      printf "\n"
      if test -z "$answer" -o "$answer" = "Y" -o "$answer" = "y"
        tmux new -As $session
      end
    end
    

    Notes and tips

    • Session naming: these snippets use session="main-$(hostname -s)" so each host gets its own session. Change “main” if you prefer something else.
    • tmux command: tmux new -As name will attach to the existing session or create it if missing.
    • To skip on a particular login: press Ctrl-C (before the timeout) or press ‘n’ when prompted.
    • To globally disable the prompt (e.g., from an automated login or a wrapper), set NO_AUTO_TMUX. Note that ssh does not forward arbitrary environment variables by default, so NO_AUTO_TMUX=1 ssh host only works if the variable is whitelisted via SendEnv on the client and AcceptEnv on the server; otherwise export NO_AUTO_TMUX on the server side (for example in a wrapper script).
    • If you prefer the login shell to be replaced by tmux (so Ctrl-D exits the session), change tmux new -As "$session" to exec tmux new -As "$session" in the snippet.
  • Real client ip for Magento 2

    Real client ip for Magento 2

    How I made Magento 2 see the real client IP behind HAProxy + Varnish + Nginx (with Cloudflare)

    If you run Magento 2 behind multiple layers, it’s common to lose the original client IP and end up with your proxy or load balancer address everywhere (logs, admin actions, rate-limiters, captchas). Here’s the setup that finally gave me the real IP all the way to PHP and Magento:

    • HAProxy: add X-Forwarded-For
    • Varnish: normalize X-Forwarded-For (favor Cloudflare CF-Connecting-IP if present; otherwise keep chain or fall back to client.ip)
    • Nginx: trust the Varnish IP and use X-Forwarded-For to set the real client IP, then pass that header to PHP-FPM

    Architecture

    Client (optionally Cloudflare) -> HAProxy -> Varnish -> Nginx -> PHP-FPM (Magento 2)

    Step 1: HAProxy — add the client IP once

    In your HAProxy frontend that handles HTTP/HTTPS:

    frontend fe_http
        bind :80
        option forwardfor if-none
        default_backend be_varnish
    
    frontend fe_https
        bind :443 ssl crt /etc/haproxy/certs
        option forwardfor if-none
        http-request set-header X-Forwarded-Proto https if { ssl_fc }
        default_backend be_varnish
    
    backend be_varnish
        server varnish 127.0.0.1:6081
    

    Notes:

    • option forwardfor adds X-Forwarded-For with the client source IP and, with if-none, won’t overwrite an existing header (e.g., from Cloudflare).
    • Set X-Forwarded-Proto so downstream knows the original scheme.

    Step 2: Varnish — normalize X-Forwarded-For

    I used this in vcl_recv to prefer Cloudflare’s CF-Connecting-IP when present, otherwise keep what we already have, and finally fall back to client.ip:

    if (req.http.CF-Connecting-IP) {
        set req.http.X-Forwarded-For = req.http.CF-Connecting-IP;
    } else if (req.http.X-Forwarded-For) {
        # keep existing header (CF/HAProxy might have set it)
        set req.http.X-Forwarded-For = req.http.X-Forwarded-For;
    } else {
        set req.http.X-Forwarded-For = client.ip;
    }
    

    Why this order:

    • When using Cloudflare, CF-Connecting-IP is the canonical client address and is safer than trusting a possibly client-spoofed X-Forwarded-For arriving at the edge.
    • If you’re not on Cloudflare, HAProxy will have added X-Forwarded-For, so we preserve it.
    • If neither is set, we fall back to the direct client.ip seen by Varnish.

    Optional in Varnish: ensure X-Forwarded-Proto is forwarded correctly as well (if HAProxy set it, just pass it through).

    Step 3: Nginx — trust Varnish and apply the real IP

    Tell Nginx which upstream IPs are trusted to provide X-Forwarded-For and then use that header as the source of truth. Also, pass the header through to PHP-FPM.

    In http { }:

    # Trust only your proxy chain. At minimum, trust Varnish:
    set_real_ip_from 127.0.0.1;    # or your Varnish host/IP
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;
    
    # Optional: useful for debugging
    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent" xff="$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    

    Inside your PHP location block:

    location ~ \.php$ {
        include fastcgi_params;
    
        # Make sure PHP sees the normalized header
        fastcgi_param HTTP_X_FORWARDED_FOR $http_x_forwarded_for;
    
        # Usual PHP-FPM pass
        fastcgi_pass unix:/run/php/php-fpm.sock;
    }
    

    Critical pieces:

    • set_real_ip_from must include your Varnish IP(s). Only trust known proxies. If Nginx also sits behind HAProxy directly in some paths, add those IPs too.
    • real_ip_header X-Forwarded-For makes Nginx rewrite $remote_addr to the real client IP, using the trusted chain.
    • real_ip_recursive on walks the header from right to left until it finds the first untrusted address, which becomes the real IP.
    • Passing HTTP_X_FORWARDED_FOR to PHP ensures Magento/PHP code that inspects that header can still see the chain (while REMOTE_ADDR becomes the real client IP thanks to real_ip_header).

    Magento 2 impact

    • PHP’s $_SERVER['REMOTE_ADDR'] now shows the actual client address (thanks to Nginx real_ip_module).
    • $_SERVER['HTTP_X_FORWARDED_FOR'] is available as the normalized chain (thanks to the fastcgi_param line).
    • Admin actions, logs, rate-limiters, and security modules that rely on REMOTE_ADDR behave correctly.
    • If you use Cloudflare, you’ll get the exact visitor IP via CF-Connecting-IP promoted into X-Forwarded-For at the Varnish layer.

    Verification checklist

    • From the shell on your web host: tail -f /var/log/nginx/access.log and confirm $remote_addr shows the real IP and xff shows what you expect.
    • In Varnish: varnishlog -g request -I ReqHeader:X-Forwarded-For and confirm the header holds the correct client IP.
    • In PHP: create a small phpinfo() or var_dump($_SERVER) page temporarily and confirm REMOTE_ADDR equals the client IP and HTTP_X_FORWARDED_FOR is set (a quick sketch follows this list).
    • In Magento admin: check recent logs, orders, or security modules for real client IPs.
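    For example, a throwaway probe script (a sketch: the docroot path and hostname are assumptions, adjust to your installation and remove the file afterwards):

    cat > /var/www/html/pub/ip-check.php <<'EOF'
    <?php
    // Prints what PHP-FPM actually sees after the proxy chain
    header('Content-Type: text/plain');
    echo "REMOTE_ADDR: ", $_SERVER['REMOTE_ADDR'], "\n";
    echo "X-Forwarded-For: ", $_SERVER['HTTP_X_FORWARDED_FOR'] ?? '(not set)', "\n";
    EOF

    # From your workstation: REMOTE_ADDR should be your public IP, not a proxy address
    curl -s https://shop.example.com/ip-check.php

    # Clean up
    rm /var/www/html/pub/ip-check.php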

    Common pitfalls

    • Trust only your proxies. Never set set_real_ip_from 0.0.0.0/0. That allows clients to spoof X-Forwarded-For.
    • If you’re on Cloudflare and want to trust CF directly in Nginx, you can set real_ip_header CF-Connecting-IP and add all Cloudflare IP ranges to set_real_ip_from. The Varnish normalization shown above is often simpler to maintain.
    • Ensure HAProxy uses option forwardfor if-none so it doesn’t stomp a legitimate X-Forwarded-For coming from an upstream like Cloudflare.
    • Don’t forget to pass HTTP_X_FORWARDED_FOR to PHP-FPM:
      fastcgi_param HTTP_X_FORWARDED_FOR $http_x_forwarded_for;

    That’s it. With HAProxy setting X-Forwarded-For, Varnish normalizing it, and Nginx trusting only the Varnish IP and passing the header to PHP, Magento 2 finally sees the real visitor IP end-to-end.

  • MySQL Performance for Beginners: Right-size InnoDB Cache and Manage Binary Logging

    MySQL Performance for Beginners: Right-size InnoDB Cache and Manage Binary Logging

    If your MySQL server feels slow, chances are your hot data or indexes don’t fit in memory, or your logging is forcing too many disk syncs. This beginner-friendly guide shows how to size InnoDB’s cache based on what you actually read, which other caches matter, and how binary logging affects performance.

    1) Right-size the InnoDB buffer pool using index size and hot data
    Goal: fit your “working set” in memory. That working set is:

    • Index pages of the most-read tables
    • Data pages for the most-read rows
    • Plus some headroom for change buffering, temp pages, and concurrency

    Step A: Measure index sizes

    • Per table:
      SELECT table_schema, table_name,
      ROUND(index_length/1024/1024, 1) AS index_mb,
      ROUND(data_length/1024/1024, 1) AS data_mb
      FROM information_schema.tables
      WHERE engine = 'InnoDB'
      AND table_schema NOT IN ('mysql','sys','performance_schema','information_schema')
      ORDER BY index_length DESC;
    • Per schema:
      SELECT table_schema,
      ROUND(SUM(index_length)/1024/1024, 1) AS total_index_mb,
      ROUND(SUM(data_length)/1024/1024, 1) AS total_data_mb
      FROM information_schema.tables
      WHERE engine = 'InnoDB'
      GROUP BY table_schema
      ORDER BY total_index_mb DESC;

    Step B: Find your “most-read” tables
    If you have the sys schema (MySQL 5.7+/8.0):

    • Top tables by reads:
      SELECT table_schema, table_name, rows_fetched
      FROM sys.schema_table_statistics
      ORDER BY rows_fetched DESC
      LIMIT 20;

    Alternative with performance_schema:

    • Top tables by read time:
      SELECT OBJECT_SCHEMA AS table_schema, OBJECT_NAME AS table_name, SUM_TIMER_READ
      FROM performance_schema.table_io_waits_summary_by_table
      WHERE OBJECT_SCHEMA NOT IN ('mysql','sys','performance_schema','information_schema')
      ORDER BY SUM_TIMER_READ DESC
      LIMIT 20;

    Step C: Estimate the “hot set”
    Join the hot tables from Step B to information_schema.tables to sum index and data sizes for just those hot tables:

    WITH hot AS (
    SELECT s.table_schema, s.table_name
    FROM sys.schema_table_statistics AS s
    ORDER BY s.rows_fetched DESC
    LIMIT 20
    )
    SELECT
    ROUND(SUM(t.index_length)/1024/1024/1024, 1) AS hot_index_gb,
    ROUND(SUM(t.data_length)/1024/1024/1024, 1) AS hot_data_gb
    FROM information_schema.tables t
    JOIN hot h
    ON t.table_schema = h.table_schema AND t.table_name = h.table_name
    WHERE t.engine = 'InnoDB';

    This gives a practical target for what should be memory-resident most of the time.

    Step D: Choose a buffer pool size

    • Dedicated DB server: make innodb_buffer_pool_size roughly 60–75% of RAM.
    • If RAM is limited, aim to fit at least the hot indexes plus a healthy portion of hot data. A simple rule: target 1.2x to 1.5x the hot_index + hot_data estimate, capped by the 60–75% guideline.
    • Example: hot_index = 20 GB and hot_data = 15 GB → target 42–52 GB; with a 64 GB server, set innodb_buffer_pool_size to ~48 GB.

    Also configure:

    • innodb_buffer_pool_instances: 4–8 instances for pools > 8–16 GB (MySQL 8.0 auto-scales, but setting 4–8 avoids contention on large pools).
    • Enable warmup:
    • innodb_buffer_pool_dump_at_shutdown = ON
    • innodb_buffer_pool_load_at_startup = ON
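    Collected into my.cnf, the Step D example (a 64 GB dedicated server targeting a ~48 GB pool) would look roughly like this; the numbers are illustrative, not a recommendation for your hardware:

    [mysqld]
    innodb_buffer_pool_size = 48G
    innodb_buffer_pool_instances = 8
    innodb_buffer_pool_dump_at_shutdown = ON
    innodb_buffer_pool_load_at_startup = ON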

    Step E: Verify it worked

    • Buffer pool hit ratio (aim for > 99% on read-heavy workloads):
      SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
      Hit ratio ≈ 1 - (Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests) (a ready-made query follows this list)
    • Check disk reads dropping after increasing the pool.
    • Use sys.schema_table_statistics_with_buffer to see buffer hit rates by table.
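    If you prefer the ratio computed for you, here is the same formula as a single query (assumes MySQL 5.7+ where status counters live in performance_schema; add credentials as needed):

    mysql -e "
      SELECT 1 - (
               (SELECT VARIABLE_VALUE FROM performance_schema.global_status
                 WHERE VARIABLE_NAME = 'Innodb_buffer_pool_reads')
               /
               (SELECT VARIABLE_VALUE FROM performance_schema.global_status
                 WHERE VARIABLE_NAME = 'Innodb_buffer_pool_read_requests')
             ) AS buffer_pool_hit_ratio;"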

    Notes and caveats:

    • information_schema sizes are estimates; compressed and dynamic row formats vary.
    • Growth and workload change over time—revisit quarterly.

    2) Other caching settings that help
    Beyond the buffer pool, these settings reduce CPU and I/O churn:

    • InnoDB redo log buffer
    • innodb_log_buffer_size: if you run large or bursty transactions, increase to 64–256 MB to reduce log writes mid-transaction.
    • Adaptive Hash Index (AHI)
    • innodb_adaptive_hash_index = ON by default. It can speed up repeated equality lookups on the same index keys. In highly concurrent or non-uniform workloads it can add contention; test ON vs OFF with real workload.
    • Change buffering (secondary index change cache)
    • innodb_change_buffering = all (default) helps insert/update workloads on secondary indexes by deferring disk IO. For read-mostly workloads, leave it ON; for write-heavy with large memory you can test off, but most benefit from leaving it on.
    • Table and definition cache
    • table_open_cache and table_definition_cache: prevent re-opening tables. If Opened_tables or Opened_table_definitions grows quickly under steady load, raise these. Typical ranges: thousands to tens of thousands on busy servers.
    • table_open_cache_instances: multiple instances reduce lock contention on the cache.
    • Thread cache
    • thread_cache_size: if Threads_created increases rapidly, raise this so connections reuse threads.
    • OS caching and double buffering
    • innodb_flush_method = O_DIRECT (Linux) to avoid double-buffering in the OS page cache, letting the buffer pool be the primary cache.
    • Put data, redo logs, and binlogs on fast, separate storage if possible.
    • Query cache (only MySQL 5.7 and earlier)
    • Disable it (query_cache_type=0, query_cache_size=0). It harms concurrency; MySQL 8.0 removed it.
    • Application-level caching
    • Use connection pooling and prepared statements in your driver.
    • Cache hot result sets in your app or a cache layer (Redis/Memcached) to avoid unnecessary reads.

    3) Binary logging and performance: what to know
    What binary logging does:

    • Records every committed transaction in the binary log for replication and point-in-time recovery (PITR).
    • Adds CPU work to format events and extra I/O to persist them. The fsync frequency is controlled by sync_binlog.

    Performance impact:

    • Additional write and fsync on each commit (especially with sync_binlog = 1).
    • Extra memory for per-session binlog caches; potential temp files if caches overflow.
    • Overhead varies widely by workload; OLTP with many small commits feels it most.

    If you keep binlog enabled (recommended for most production):

    • Keep durability but reduce overhead
    • sync_binlog = 1 for maximum safety; consider values > 1 (e.g., 100) to batch fsyncs if you can risk losing up to N transactions on OS crash.
    • Pair with innodb_flush_log_at_trx_commit:
      • 1: safest (fsync redo on every commit).
      • 2: fsync redo once per second; may lose up to 1 second of data on crash.
    • binlog_row_image = MINIMAL to log only changed columns, shrinking binlog size.
    • binlog_transaction_compression = ON (MySQL 8.0) to compress row events.
    • binlog_cache_size and binlog_stmt_cache_size large enough to keep most transactions in memory (e.g., 1–16 MB) to avoid temp files.
    • Place binlogs on fast storage separate from data/redo to isolate I/O.
    • Consider group-commit tuning:
      • binlog_group_commit_sync_delay (microseconds) and binlog_group_commit_sync_no_delay_count can batch commits, trading tiny latency for throughput.
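    Pulled together, a durable-but-tuned binlog block in my.cnf could look like the sketch below (illustrative values; keep sync_binlog = 1 unless you can tolerate losing a few transactions on an OS crash):

    [mysqld]
    log_bin = binlog
    sync_binlog = 1
    innodb_flush_log_at_trx_commit = 1
    binlog_row_image = MINIMAL
    binlog_transaction_compression = ON   # MySQL 8.0.20+
    binlog_cache_size = 4M
    binlog_expire_logs_seconds = 604800   # keep 7 days of binlogs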

    Advantages of disabling binary logging:

    • Lower transaction latency and higher throughput on write-heavy OLTP by removing binlog writes and fsyncs.
    • Less disk usage and IO on the server.
    • Less CPU spent formatting and compressing binlog events.
    • Smaller memory footprint per connection (binlog caches).

    Disadvantages of disabling binary logging:

    • You lose replication. You cannot create replicas or use Group Replication/InnoDB Cluster/async replicas later without reconfiguring and taking a fresh snapshot.
    • You lose point-in-time recovery. Backups can only restore to the snapshot time; no replay of changes since the last backup.
    • You lose certain auditing/troubleshooting options that rely on reading binlogs.
    • GTIDs cannot be used; some tooling and operational patterns depend on them.

    When is disabling reasonable?

    • Single-instance dev/test or ephemeral environments.
    • Simple read-mostly systems where you accept snapshot-only backups and can rebuild from source of truth.
    • Even then, consider keeping binlog ON and tuning sync_binlog for a safer middle ground.

    Practical checklist

    • Measure hot tables and estimate hot indexes + hot data.
    • Size innodb_buffer_pool_size to fit the hot set, within 60–75% of RAM on a dedicated server.
    • Enable buffer pool dump/load to warm caches on restart.
    • Tune table_open_cache, thread_cache_size, and innodb_log_buffer_size based on observed metrics.
    • Use O_DIRECT to avoid double buffering.
    • Keep binlog ON for most production setups; tune sync_binlog, row image, and compression; put binlogs on fast storage.
    • Consider disabling binlog only when you explicitly do not need replication or PITR.

    Always make one change at a time, monitor with real workload, and compare throughput, latency, and disk IO before and after.

  • Self-generated certificates

    Self-generated certificates

    What they are, how mTLS works, how to build them with easy-rsa, and how to store them safely with git-crypt.

    Certificates, CA certificates, and private keys

    • Digital certificate (X.509): A signed data structure that binds an identity (subject) to a public key. It includes fields like subject, issuer, serial number, validity period, and extensions (for example, key usage, extended key usage, Subject Alternative Name). Certificates are public and can be shared.
    • CA certificate: A certificate belonging to a Certificate Authority. A CA uses its private key to sign end-entity certificates (server or client). A root CA is self-signed. Often, you use an offline root CA to sign an intermediate CA, and that intermediate signs end-entity certificates. Clients and servers trust a CA by installing its certificate (trust anchor) and validating chains: end-entity → intermediate(s) → root.
    • Private key: The secret counterpart to a public key. It is used to prove possession (signing) and, in some schemes, to decrypt data that was encrypted with the public key. Private keys must be kept confidential, access-controlled, and ideally encrypted at rest with a passphrase or stored in hardware (TPM/HSM). If a private key is compromised, all certificates tied to it must be considered compromised and should be revoked.

    Notes:

    • “Self-signed certificate” means a certificate signed by its own key (typical for root CAs, and sometimes used ad hoc for a server). “Self-generated” is commonly used to mean you run your own CA and issue your own certs, rather than buying from a public CA.
    • Revocation is handled using CRLs (Certificate Revocation Lists) or OCSP. easy-rsa focuses on CRLs.

    Server vs client certificates and how mTLS works

    • Server certificate:
      • Purpose: Server proves its identity to clients (for example, a web server to a browser).
      • Extensions: Extended Key Usage (EKU) must include serverAuth.
      • Names: Must contain Subject Alternative Name (SAN) entries covering the hostnames or IPs the client connects to. Clients verify that the requested hostname matches a SAN and that the certificate chains to a trusted CA.
    • Client certificate:
      • Purpose: Client proves its identity to the server (for example, a service or user accessing an API).
      • Extensions: EKU should include clientAuth.
      • Names: Often the Common Name (CN) or a SAN identifies the user, device, or service. The server maps this identity to an account or role for authorization.
    • mTLS (mutual TLS):
      1. Client initiates the TLS handshake.
      2. Server sends its certificate chain. Client validates the chain to a trusted CA and checks the hostname/IP against SANs.
      3. Server requests a client certificate. Client sends its certificate chain and proves possession of the private key.
      4. Server validates the client’s certificate against its trusted CA(s) and applies authorization rules.
      5. Both sides derive session keys; the connection is encrypted and mutually authenticated.

    Operational considerations:

    • Distribute only CA certificates (public) to trust stores on clients/servers.
    • Protect private keys; rotate and revoke as needed.
    • Keep CRLs up to date on servers that verify client certs.

    Generating and maintaining certificates with easy-rsa

    easy-rsa is a thin wrapper around OpenSSL that maintains a PKI directory and simplifies key/cert lifecycle. Commands below are for easy-rsa v3.

    Install:

    • Debian/Ubuntu: sudo apt-get install easy-rsa
    • RHEL/CentOS/Fedora: sudo dnf install easy-rsa
    • macOS (Homebrew): brew install easy-rsa

    Initialize a new PKI and configure defaults:
    mkdir corp-pki && cd corp-pki
    easyrsa init-pki

    Create a file named vars in this directory to set defaults. Example vars:

    set_var EASYRSA_ALGO ec
    set_var EASYRSA_CURVE secp384r1
    set_var EASYRSA_DIGEST "sha256"
    set_var EASYRSA_REQ_COUNTRY "US"
    set_var EASYRSA_REQ_PROVINCE "CA"
    set_var EASYRSA_REQ_CITY "San Francisco"
    set_var EASYRSA_REQ_ORG "Example Corp"
    set_var EASYRSA_REQ_OU "IT"
    set_var EASYRSA_REQ_CN "Example-Root-CA"
    set_var EASYRSA_CA_EXPIRE 3650
    set_var EASYRSA_CERT_EXPIRE 825
    set_var EASYRSA_CRL_DAYS 30

    Build a root CA (ideally on an offline machine):
    $ easyrsa build-ca
    (Use build-ca nopass only for labs; in production, protect the CA key with a passphrase and keep the CA host offline.)

    Optional: two-tier CA (recommended for production):

    • On an offline host, create an offline root CA; keep it offline and backed up.
    • On an online or semi-online host, create an intermediate CA by generating a CSR there and signing it with the offline root. In easy-rsa that means setting up two PKIs:
      1. Root PKI: build-ca (self-signed root).
      2. Intermediate PKI:
        easyrsa init-pki, then generate a CSR for the intermediate (gen-req) and sign it on the root PKI with sign-req ca.
        Then use the intermediate to sign servers/clients.
        If you’re new to this, start with a single CA and evolve to a root + intermediate later.

    Generate a server key and CSR:
    $ easyrsa gen-req web01 nopass
    This creates:

    • pki/private/web01.key (private key)
    • pki/reqs/web01.req (CSR)

    Sign the server certificate:
    Basic:
    $ easyrsa sign-req server web01

    Adding SANs:

    • easy-rsa 3.1 and newer supports a CLI flag:
      $ easyrsa --subject-alt-name="DNS:web01.example.com,IP:203.0.113.10" sign-req server web01
    • For older versions, edit pki/x509-types/server to include a subjectAltName line, or upgrade. A common pattern is to create a custom x509 type that adds:
      subjectAltName = @alt_names
      [ alt_names ]
      DNS.1 = web01.example.com
      IP.1 = 203.0.113.10

    Results are placed in pki/issued/web01.crt. Verify:
    openssl verify -CAfile pki/ca.crt pki/issued/web01.crt
    openssl x509 -in pki/issued/web01.crt -noout -text

    Generate a client certificate:
    easyrsa gen-req alice nopass
    easyrsa sign-req client alice

    Distribute artifacts:

    • Servers: web01.key (private), web01.crt (server cert), CA chain (ca.crt and any intermediates).
    • Clients (for mTLS): alice.key (private), alice.crt (client cert), CA chain used by the server if the client also needs to verify the server.

    Revocation and CRL:

    • Revoke a certificate:
      $ easyrsa revoke alice
    • Regenerate the CRL:
      $ easyrsa gen-crl
    • Install pki/crl.pem wherever revocation is enforced (for example, on servers that validate client certs). Refresh it periodically; controlled by EASYRSA_CRL_DAYS.
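    To see what the CRL contains and when it expires (the nextUpdate field), inspect it with OpenSSL:

      $ openssl crl -in pki/crl.pem -noout -text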

    Renewal and rotation:

    • Easiest and safest: issue a new key and cert before expiry, deploy it, then revoke the old cert.
    • Keep pki/index.txt, pki/serial, and the entire pki directory backed up; they are the authoritative database of your PKI.

    Diffie-Hellman parameters:

    • Only needed by some servers or VPNs still using finite-field DHE:
      $ easyrsa gen-dh
    • Modern TLS with ECDHE does not require dhparam files.

    Good practices:

    • Use strong algorithms: EC (secp384r1) or RSA 3072/4096.
    • Use SANs for server certificates; clients validate hostnames against SANs, not CNs.
    • Limit cert lifetimes and automate rotation.
    • Protect private keys with passphrases when possible and with strict filesystem permissions (chmod 600).
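    For example, with the easy-rsa layout used above:

      $ chmod 700 pki/private
      $ chmod 600 pki/private/*.key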

    Keeping private keys safe with Git and git-crypt

    Goal: version and collaborate on your PKI (CA database, issued certs, CRLs), while ensuring private keys are encrypted at rest in the Git repository and on remotes.

    How git-crypt works:

    • You mark specific paths as “encrypted” via .gitattributes.
    • git-crypt encrypts those files in the repository objects and on remotes. When authorized users unlock locally, files are transparently decrypted in the working tree.
    • Access can be granted with GPG public keys (recommended) or with a shared symmetric key.

    Set up a repository and protect sensitive paths:

    $ cd corp-pki
    $ git init
    $ git-crypt init

    Create .gitattributes with rules such as:

    pki/private/** filter=git-crypt diff=git-crypt
    pki/reqs/** filter=git-crypt diff=git-crypt
    *.key filter=git-crypt diff=git-crypt

    Then:

    git add .gitattributes
    git commit -m "Protect private material with git-crypt"

    Authorize collaborators (GPG-based):
    $ git-crypt add-gpg-user YOUR_GPG_KEY_ID
    Repeat for each user who should be able to decrypt. They must have your repository and their corresponding private key to unlock.

    Working with the repo:

    • After initializing and adding users, add your PKI directory content. Private keys and CSRs under the protected paths will be encrypted in Git history and on the remote.
    • Push to a remote as usual; the remote stores ciphertext for protected files.

    Cloning and unlocking:

    $ git clone <repo>
    $ cd <repo>
    $ git-crypt unlock

    For GPG-based access, your local GPG agent will prompt; for symmetric, provide the shared key.

    Pre-commit guard (optional but smart):

    • Add a pre-commit hook that aborts if any file containing a private key would be committed outside protected paths (a sketch follows this list). Example logic:
      • If a staged file contains “-----BEGIN PRIVATE KEY-----” (or RSA/EC PRIVATE KEY), check with “git check-attr filter <file>” that git-crypt will encrypt it; otherwise fail the commit with guidance.
    • Also .gitignore unencrypted exports or temporary files.
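    A minimal sketch of such a hook, assuming a POSIX shell and the .gitattributes rules shown earlier (save as .git/hooks/pre-commit and make it executable; filenames containing whitespace are not handled):

    #!/bin/sh
    # Refuse to commit anything that looks like a PEM private key unless
    # git-crypt is configured to encrypt that path.
    fail=0
    for f in $(git diff --cached --name-only --diff-filter=ACM); do
      if git show ":$f" | grep -q "BEGIN .*PRIVATE KEY"; then
        filter=$(git check-attr filter -- "$f" | awk '{print $NF}')
        if [ "$filter" != "git-crypt" ]; then
          echo "ERROR: $f looks like a private key but is not covered by git-crypt rules" >&2
          fail=1
        fi
      fi
    done
    exit $fail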

    CI/CD:

    • On CI, install git-crypt, import a CI-specific GPG private key (or provide the symmetric key via the CI secret store), and run git-crypt unlock before build/deploy steps.
    • Never print secrets to logs; restrict artifact access.

    Caveats and best practices:

    • If you accidentally committed a secret before adding git-crypt rules, it is already in history. You must rewrite history (for example, with git filter-repo) and rotate the secret.
    • Keep the root CA private key offline and out of Git entirely when possible. If you must keep it in Git, ensure it is strongly protected: encrypted by git-crypt, passphrase-protected, and access tightly controlled.
    • Public artifacts (CA certificate, issued certificates, CRLs) can remain unencrypted, but assess privacy needs; certs can contain identifying info.
    • Enforce least privilege in Git hosting: only grant git-crypt decryption rights to people or systems that truly need the private materials.
    • Combine with full-disk encryption and strict filesystem permissions (chmod 600 on keys). Consider hardware-backed GPG keys for git-crypt.

    Quick end-to-end example

    • Create a CA and a server/client cert:
      mkdir corp-pki && cd corp-pki
      easyrsa init-pki
      easyrsa build-ca
      easyrsa gen-req web01 nopass
      easyrsa --subject-alt-name="DNS:web01.example.com" sign-req server web01

      easyrsa gen-req alice nopass
      easyrsa sign-req client alice

      easyrsa gen-crl
    • Put under Git with encryption of sensitive files:
      git init
      git-crypt init
      printf "pki/private/∗∗ filter=git−crypt diff=git−crypt" "\npki/reqs/∗∗ filter=git−crypt diff=git−crypt" "\n∗.key filter=git−crypt diff=git−crypt" > .gitattributes

      git add .
      git commit −m "PKI bootstrap with protected private material"

      git remote add origin <your-remote>
      git push −u origin main

      git-crypt add-gpg-user <YOUR_GPG_KEY_ID>
      git commit -m "Grant decryption to maintainer"

      git push
    • Test mTLS with curl:
      On server: install web01.key and web01.crt; configure to require client certs and trust ca.crt.
      On client:
      curl --cacert pki/ca.crt --cert pki/issued/alice.crt --key pki/private/alice.key https://web01.example.com/

    With these patterns you can own the full lifecycle: generate, distribute, rotate, and revoke certificates; enforce mTLS; and keep the sensitive pieces encrypted even when stored in Git and on remote servers.

  • Stunnel: the simplest way to stitch private services across clouds

    Stunnel: the simplest way to stitch private services across clouds

    If you need to connect private TCP services across machines and clouds without deploying a full VPN, stunnel is often the fastest, safest, and least fussy way to do it. It wraps any TCP connection in TLS, giving you encryption and authentication with a single lightweight daemon and a short config file. Because it rides over standard outbound Internet connectivity and can multiplex multiple services on one port, stunnel makes multi-cloud private networking practical without changing routing, installing kernel modules, or re-architecting your apps.

    What stunnel is and why it’s different

    • A TLS wrapper for TCP: stunnel terminates and initiates TLS, then forwards bytes to a local or remote TCP port. Your apps keep speaking their native protocols (PostgreSQL, Redis, MySQL, MQTT, custom services) and don’t need to know about TLS.
    • Simple by design: single binary, tiny configuration, no kernel changes, no overlay networks. It’s closer to “secure netcat” than to a VPN.
    • Runs anywhere: Linux, BSD, macOS, Windows, containers. Package-managed on most distros.
    • Production-hardened: in use for decades, based on OpenSSL/LibreSSL, with features like mutual TLS, chroot, dropping privileges, OCSP/CRL, and strict cipher control.

    Why stunnel is ideal for multi-cloud private service connectivity

    • Works over the public Internet, safely: mutual TLS authenticates both sides; traffic is end-to-end encrypted. You can keep your upstream services bound to localhost or private IPs and expose only stunnel.
    • No network plumbing: no VPC peering, no IPSec/WireGuard setup, no route tables. Just open a TCP port on the server side (often 443) and allow outbound on the client side.
    • One port, many services: stunnel can use TLS SNI to multiplex several backends on a single public IP/port (usually 443), so you can traverse strict egress firewalls and simplify security groups.
    • Multi-provider friendly: run a small stunnel on each cloud VM. Your app connects to localhost; stunnel handles the secure hop across clouds.
    • Incremental: add one service at a time. No need to rewire everything into a mesh or L3 VPN.

    Common patterns
    1) Hub-and-spoke

    • A central “hub” server exposes port 443 with stunnel.
    • Each “spoke” (in any cloud) runs a client-mode stunnel that dials the hub and provides a local port for the application to connect to.
    • Good for small teams and many readers of a few central services.

    2) Service-to-service bridges

    • One stunnel instance front-ends a private service on the server.
    • Another stunnel instance on the consumer side exposes a local port that connects to the remote stunnel over TLS.
    • Great for connecting databases, queues, or internal APIs across clouds, regions, or on-prem to cloud.

    3) Single IP, many services via SNI

    • Use one public IP:443 and multiple service blocks on the server, each with an SNI hostname (e.g., pg.example.com, redis.example.com).
    • Clients set the matching SNI name per service and reuse the same remote IP and port.

    Minimal, practical example

    Goal: An app in Cloud A consumes PostgreSQL and Redis running in Cloud B without exposing either service directly.

    Certificates

    • Create a small private CA and issue server and client certificates, or use ACME/Let’s Encrypt for the server and a private CA for clients.
    • Put CA certificate on both sides. Put server cert/key on server, client cert/key on client. Enable mutual TLS.

    Server (Cloud B) example configuration
    Global options:

    • setuid = stunnel
    • setgid = stunnel
    • chroot = /var/lib/stunnel
    • output = /var/log/stunnel.log
    • debug = info
    • sslVersionMin = TLSv1.2
    • options = NO_RENEGOTIATION
    • cert = /etc/stunnel/server.crt
    • key = /etc/stunnel/server.key
    • CAfile = /etc/stunnel/ca.crt
    • verify = 2 (require and verify client certs)

    PostgreSQL service:

    • [pg]
    • accept = 0.0.0.0:443
    • sni = pg.example.com
    • connect = 127.0.0.1:5432

    Redis service:

    • [redis]
    • accept = 0.0.0.0:443
    • sni = redis.example.com
    • connect = 127.0.0.1:6379

    Notes:

    • Both services share port 443 and are selected by SNI. Keep PostgreSQL and Redis bound to localhost; only stunnel is public.
    • If you prefer separate ports, use accept = 443 for pg and accept = 444 for redis, and omit SNI.

    Client (Cloud A) example configuration
    Global options:

    • client = yes
    • output = /var/log/stunnel.log
    • debug = info
    • sslVersionMin = TLSv1.2
    • cert = /etc/stunnel/client.crt
    • key = /etc/stunnel/client.key
    • CAfile = /etc/stunnel/ca.crt
    • verifyChain = yes
    • OCSPaia = yes (optional, enables OCSP via AIA if using public CAs)

    PostgreSQL local endpoint:

    • [pg]
    • accept = 127.0.0.1:5432
    • connect = hub.public.ip.or.name:443
    • sni = pg.example.com
    • checkHost = pg.example.com
    • delay = yes (resolve DNS at connect time)

    Redis local endpoint:

    • [redis]
    • accept = 127.0.0.1:6379
    • connect = hub.public.ip.or.name:443
    • sni = redis.example.com
    • checkHost = redis.example.com
    • delay = yes

    Now your applications point at localhost:5432 and localhost:6379. Stunnel carries traffic securely to Cloud B and into the private services.
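    For reference, here are the client-side pieces above assembled into a single /etc/stunnel/stunnel.conf (hostnames and paths are the assumed examples from this section):

    client = yes
    output = /var/log/stunnel.log
    debug = info
    sslVersionMin = TLSv1.2
    cert = /etc/stunnel/client.crt
    key = /etc/stunnel/client.key
    CAfile = /etc/stunnel/ca.crt
    verifyChain = yes

    [pg]
    accept = 127.0.0.1:5432
    connect = hub.public.ip.or.name:443
    sni = pg.example.com
    checkHost = pg.example.com
    delay = yes

    [redis]
    accept = 127.0.0.1:6379
    connect = hub.public.ip.or.name:443
    sni = redis.example.com
    checkHost = redis.example.com
    delay = yes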

    Multi-cloud high availability tips

    • Multiple upstreams: specify several connect lines in the client service section (e.g., connect = ip1:443 on one line and connect = ip2:443 on the next). Use failover = rr for round-robin or the default priority order for failover.
    • DNS and rotation: use delay = yes so DNS is re-resolved at connect time; pair with multiple A records.
    • Health checks: stunnel logs to syslog; integrate log monitoring. You can also run a simple TCP health probe against the client’s local accept ports.
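    For example, a trivial probe from the client host (nc comes from the netcat package; 5432 is the local accept port from the [pg] section above):

    nc -z -w 3 127.0.0.1 5432 && echo "pg tunnel OK" || echo "pg tunnel DOWN"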

    Security hardening checklist

    • TLS policy: set sslVersionMin = TLSv1.2 or TLSv1.3, define strong ciphers/ciphersuites if you have compliance needs.
    • Mutual TLS everywhere: verify = 2 on server, verifyChain and checkHost/checkIP on client.
    • Least privilege: setuid/setgid to a dedicated user; use chroot; restrict filesystem permissions on keys.
    • Certificate lifecycle: automate renewal (ACME for server certs), HUP stunnel to reload. For client cert rotation, use short lifetimes or CRLs (CRLfile) if revocation is needed.
    • Don’t enable TLS compression. Keep NO_RENEGOTIATION enabled.
    • Firewalls: only expose your stunnel port(s); keep backends on loopback or private subnets.

    Operational conveniences

    • Single-port multiplexing with SNI reduces security group sprawl and helps traverse locked-down networks that only allow 443 egress.
    • Defer hostname resolution with delay = yes to survive IP changes without restarts.
    • Transparent proxying is available on Linux if you must preserve source IPs for the backend, but it requires advanced routing/iptables and capabilities; most deployments don’t need it.
    • Systemd integration is straightforward; most packages install a service unit. Send SIGHUP to reload configs and new certs.

    How stunnel compares to alternatives

    • WireGuard/OpenVPN: full L3 VPNs that stitch networks and routes. Great for broad connectivity, but more moving parts, privileged setup, and potential blast radius. Stunnel is easier for a few explicit services.
    • SSH tunnels: quick and familiar but harder to manage at scale, weaker policy and TLS compatibility, and less robust for multi-tenant multiplexing.
    • NGINX/HAProxy/Caddy (TCP/stream): more features and L7 routing, but heavier and often oriented to server-side termination. Stunnel is tiny, neutral, and equally happy on client or server.
    • Service meshes: powerful but complex. Stunnel is the opposite: minimal and manual, ideal when you just need secure pipes.

    When stunnel is not a fit

    • UDP traffic (e.g., DNS, some message brokers) is out of scope.
    • Dynamic multi-hop routing, discovery, or policy-based connectivity requires a mesh/VPN or SD-WAN solution.
    • If you must expose original client IPs to backends without extra networking, you’ll need transparent proxying or different tooling.

    10-minute quickstart checklist
    1) Install stunnel on both ends from your distro packages.
    2) Create or obtain certificates, place CA on both sides, server/client keys on their respective nodes.
    3) Write one service block per backend. On the server, map accept (public) to connect (private). On the client, map a local accept to remote connect, set sni and checkHost.
    4) Open firewall for the server’s public port (often 443). Ensure client can reach it outbound.
    5) Start stunnel, watch logs, test with your app against the client’s local port.
    6) Add services incrementally; consider SNI to reuse the same public port.

    Bottom line
    Stunnel is the pragmatic sweet spot for securely connecting multiple private services across publicly reachable servers and multiple cloud providers. It gives you strong TLS, mutual authentication, and multi-service multiplexing with minimal operational overhead. For teams that want secure, explicit connections rather than full-blown network overlays, stunnel is often the simplest and most reliable tool for the job.

  • SQLSTATE[HY000]: General error: 1419

    SQLSTATE[HY000]: General error: 1419

    Me:

    Magento gives me this error: SQLSTATE[HY000]: General error: 1419 You do not have the SUPER privilege and binary logging is enabled (you might want to use the less safe log_bin_trust_function_creators variable), query was: CREATE TRIGGER trg_catalog_category_product_after_insert AFTER INSERT ON catalog_category_product FOR EACH ROW. How can I make my user capable to create triggers?

    AI:

    That error means MySQL is refusing the trigger because binary logging is enabled and the server requires either the SUPER privilege or the server global log_bin_trust_function_creators=1 to allow creation of triggers/functions by non‑SUPER users.

    You have three safe options (in order of preference):

    1) Give the user the TRIGGER privilege (recommended first)

    • This is required to create triggers at all:
      mysql> GRANT TRIGGER ON your_db.* TO 'magento_user'@'host';
      mysql> FLUSH PRIVILEGES;
    • Then retry. If the server’s binary logging policy still blocks the operation you’ll see the same 1419 error — proceed to option 2.

    2) Allow function/trigger creators by setting log_bin_trust_function_creators=1

    • Preferred if you cannot/should not grant SUPER. Set it in my.cnf/my.ini under [mysqld]:
      [mysqld]
      log_bin_trust_function_creators = 1
      then restart MySQL.
    • You can also set it at runtime (needs SUPER):
      mysql> SET GLOBAL log_bin_trust_function_creators = 1;
    • On managed services (RDS/Aurora) change the parameter group to set log_bin_trust_function_creators = 1 and reboot the DB instance.

    3) (Not recommended) Grant SUPER to the user

    • Gives broad privileges, avoid unless absolutely necessary and you trust the account:
      mysql> GRANT SUPER ON *.* TO 'magento_user'@'host';
      mysql> FLUSH PRIVILEGES;

    Quick checks

    • See current value:
      mysql> SHOW GLOBAL VARIABLES LIKE 'log_bin_trust_function_creators';
    • See current grants:
      mysql> SHOW GRANTS FOR 'magento_user'@'host';
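    Both checks in one non-interactive command (user and host are placeholders; run it with an account allowed to read grants):

      mysql -u root -p -e "SHOW GLOBAL VARIABLES LIKE 'log_bin_trust_function_creators'; SHOW GRANTS FOR 'magento_user'@'host';"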

    Notes

    • On hosted DBs (RDS) you cannot GRANT SUPER; use option 2 via parameter group.
    • After making the change, retry the Magento operation that tried to create the trigger.
  • Magento locked base URL

    Magento locked base URL

    The following content has been generated with AI’s help.

    Human:

    I want to configure Magento’s base url, both http and https, also admin, to pick the configuration from either env.php or config.php and lock changing this in the admin interface. Guide me how to do it.

    AI:

    Below is a practical, safe way to make Magento read your base URLs from code (env.php or config.php) and prevent admins from changing them in the backend. I cover (A) the recommended, supported approach using Magento’s config dump (app/etc/config.php) which locks the fields, and (B) how to put values in env.php if you prefer environment-specific settings. I also show how to verify and how to revert.

    Summary (recommended)

    • Use the CLI to set the base URLs, then run bin/magento app:config:dump to persist those settings into app/etc/config.php. When config values live in app/etc/config.php they are read-only in the Admin UI (locked).

    Important notes before you begin

    • Always keep backups of app/etc/env.php and app/etc/config.php before changing them.
    • Base URLs must include a trailing slash, e.g. https://example.com/
    • For HTTPS to work, ensure your webserver and certificates are configured correctly.
    • Run these steps during maintenance window on production.

    A. Recommended: Export the base URL settings into app/etc/config.php (locks admin UI)
    1) Set the base URLs (optional if already set in Admin):

    • CLI (recommended):
      bin/magento config:set web/unsecure/base_url http://example.com/
      bin/magento config:set web/secure/base_url https://example.com/
      (also set secure flags if needed)
      bin/magento config:set web/secure/use_in_frontend 1
      bin/magento config:set web/secure/use_in_adminhtml 1

    2) Dump the configuration to files so it becomes file-based (and locked in Admin):

    • Dump the specific paths you want locked:
      bin/magento app:config:dump web/unsecure/base_url web/secure/base_url web/secure/use_in_frontend web/secure/use_in_adminhtml
      (Depending on your Magento version, app:config:dump may accept only whole config types such as system rather than individual paths; in that case run bin/magento app:config:dump on its own, or lock individual paths with bin/magento config:set --lock-config <path> <value>.)
    • This writes the configuration into app/etc/config.php under the “system” section.

    3) Deploy / clear caches:
    bin/magento cache:flush
    (If in production mode, run static content & di compile as needed.)

    4) Verify:

    • Admin: Go to Stores > Configuration > General > Web. The fields that are now file-backed will be disabled and show a message like “This configuration is stored in the configuration file and cannot be edited in the Admin.”
    • CLI: bin/magento app:config:status will show which settings are file-based.

    5) To change the values in the future:

    • Edit app/etc/config.php manually (or use app:config:import if you manage configuration differently), then deploy and flush cache.
    • After removing a key from config.php, the Admin will be able to edit that setting again (unless overridden elsewhere).

    B. Alternative: Put per-environment values in app/etc/env.php

    • Use this if you want environment-specific values (for example different dev/stage/prod hosts).
    • Magento reads env.php early; config keys in env.php can act as defaults or environment overrides (useful for deploy-time settings).

    Example snippet to add to app/etc/env.php (merge into the array that file returns):
    'system' => [
        'default' => [
            'web' => [
                'unsecure' => [
                    'base_url' => 'http://example.com/'
                ],
                'secure' => [
                    'base_url' => 'https://example.com/',
                    'use_in_frontend' => '1',
                    'use_in_adminhtml' => '1'
                ]
            ]
        ]
    ],

    • Save file and flush cache: bin/magento cache:flush
    • Note: values in DB (core_config_data) may still override these; to make them effectively authoritative you should remove DB entries or use the config dump approach (A). The env.php method is best for environment-specific defaults but the explicit locking mechanism is the app/etc/config.php dump method.
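    For example, to check what the database currently holds and remove the overriding rows (a sketch: back up first, and the database name and credentials are assumptions to adjust for your installation):

      bin/magento config:show web/unsecure/base_url
      bin/magento config:show web/secure/base_url

      # assumed DB user/name; the rows live in Magento's core_config_data table
      mysql -u magento -p magento_db -e "
        DELETE FROM core_config_data
        WHERE path IN ('web/unsecure/base_url','web/secure/base_url');"

      bin/magento cache:flush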

    C. Admin area URL (backend frontName) and admin base specifics

    • The admin frontName is set in env.php under 'backend' => ['frontName' => 'admin_xyz'] — this is file-based and not editable in Admin.
    • Admin uses the normal base URLs (web/secure/base_url and web/unsecure/base_url). If you need the admin on a separate host, you can set store- or website-scoped values or use config paths like web/unsecure/base_link_url or specify scope-specific config (dump all required paths).

    D. Extras / verification / troubleshooting

    • See current file-backed config entries:
      bin/magento app:config:status
    • To remove a file-backed setting:
    • Remove the key(s) from app/etc/config.php and flush cache. Then Admin can change them again.
    • If you manually edit app/etc/config.php or env.php, make sure file permissions are correct and re-deploy caches.

    E. Example workflow for typical production deployment

    1. On local or staging set and test config values via Admin or bin/magento config:set.
    2. When ready to lock, run:
      bin/magento app:config:dump web/unsecure/base_url web/secure/base_url web/secure/use_in_frontend web/secure/use_in_adminhtml
    3. Commit app/etc/config.php to version control and deploy to production.
    4. Flush cache on deploy.

    That’s it — recommended: use the app:config:dump flow to persist + lock base URLs in app/etc/config.php. Use env.php only if you require environment-specific configuration as a fallback. If you want, tell me your Magento version and whether you host admin on the same domain or a different hostname and I’ll provide the exact command list and a config.php/env.php example tailored to your case.

  • No more free to use open source

    No more free to use open source

    Question:

    Considering the changes of licensing of different open source projects, forcing in going in paid subscription mode to be able to continue to use the software, what would you recommend as a strategy for a small business?

    Answer:

    Short answer: Treat “license drift” as a normal business risk. Prefer tech that’s genuinely open and community‑governed, keep an exit plan for every critical component, and be ready to either pay for value or switch to a community fork when it’s cheaper/safer.

    A practical strategy for a small business

    Principles

    • Favor community governance over single‑vendor control. Prefer projects under neutral foundations (Apache, CNCF, Eclipse, Linux Foundation) or with multiple independent maintainers.
    • Prefer permissive, patent‑granting licenses. Bias toward Apache‑2.0, MIT, BSD, MPL‑2.0. Be cautious with “source‑available” (BSL, SSPL, Elastic License, Confluent Community License, RSAL) and strong copyleft (AGPL) if you’re a SaaS.
    • Pay where it reduces risk. A modest subscription for a critical, undifferentiated component can be cheaper than migration or self‑maintaining security patches.
    • Always have an exit strategy. Assume any vendor can relicense, reprice, or rate‑limit.

    Immediate steps (30–60 days)
    1) Inventory your stack

    • Generate an SBOM for apps and infra (use Syft, CycloneDX). Include cloud services and major self‑hosted apps.
    • Record: component, version, license, business criticality, who maintains it.
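    For example, with Syft (a sketch; verify the exact syntax against your installed Syft version):

      # SBOM for the current project directory, in CycloneDX JSON
      syft dir:. -o cyclonedx-json > sbom.json

      # SBOM for a container image you deploy (image reference is an example)
      syft your-registry.example.com/app:latest -o cyclonedx-json > sbom-image.json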

    2) Classify license risk

    • Green: Apache‑2.0, MIT, BSD, MPL‑2.0.
    • Yellow: GPL/LGPL (usually fine for internal use; watch obligations when linking/redistributing).
    • Red: AGPL for anything you link into proprietary code or run as part of a SaaS; source‑available (BSL/SSPL/etc.) for foundational components.
    • Flag single‑vendor projects with Contributor License Agreements that allow unilateral relicensing.

    3) Identify at‑risk dependencies

    • Mark critical components that are single‑vendor or source‑available, or where the project recently relicensed.

    4) Define an “exit hatch” per critical component
    For each red/yellow:

    • Option A: Budget to pay the vendor (support/subscription) for stability.
    • Option B: Pin to last truly open version short‑term while you plan.
    • Option C: Migrate to a community fork or open alternative.
    • Option D: Buy third‑party LTS/security backports if available.

    5) Add license guardrails to your dev process

    • Set an “allowed licenses” policy in CI (Snyk, FOSSA, Black Duck, OWASP Dependency‑Track, or OSV-Scanner plus custom rules), or a simple homegrown check like the sketch after this list.
    • Turn on license alerts in Dependabot/Renovate and require approval for red licenses.
    • Keep NOTICE/attribution files and comply with obligations.
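
    As a sketch of such a homegrown guardrail, reusing the sbom.json from the inventory step (the allowlist is only an example, and this complements the dedicated tools above rather than replacing them):

      #!/usr/bin/env bash
      # Fail the build if the SBOM contains a license outside the allowlist.
      set -euo pipefail
      allowed='Apache-2.0|MIT|BSD-2-Clause|BSD-3-Clause|MPL-2.0|ISC'

      violations=$(jq -r '.components[]? |
          [ .name, (.licenses[0].license.id // .licenses[0].license.name // "UNKNOWN") ] | @tsv' sbom.json \
        | awk -F'\t' -v ok="^(${allowed})$" '$2 !~ ok { print }')

      if [ -n "$violations" ]; then
        echo "Components with licenses outside the allowlist:"
        echo "$violations"
        exit 1
      fi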

    Medium term (1–3 quarters)

    • Architecture for replaceability
      • Use open standards and portable data formats. Avoid deep use of vendor‑exclusive features unless it’s strategic.
      • Add abstraction layers (ORM for DBs, S3‑compatible storage APIs, standard protocols).
      • Containerize and codify infra (IaC) so you can redeploy elsewhere quickly.
    • Procurement and contracts
      • For paid software/services: require advance notice on material license or pricing changes, capped renewal increases, data export guarantees, and clear SLAs.
      • For critical SaaS: verify data portability and run periodic restore/migration drills.
    • Monitoring and governance
      • Assign an owner (even part‑time) to watch license changes, project health, and release notes.
      • Review the SBOM quarterly; re‑score risk and update plans.
    • Support the ecosystem
      • Consider small sponsorships or Tidelift for components you rely on, to improve sustainability and response to CVEs.

    When choosing or replacing components

    • Databases: Prefer PostgreSQL, MariaDB/MySQL community for greenfield. If you rely on MongoDB features, weigh the cost of Atlas/support vs. migrating; alternatives include PostgreSQL JSONB or FerretDB (Mongo wire‑compat on Postgres).
    • Search: If you’re concerned about Elastic’s licensing, consider OpenSearch (community fork).
    • Caching: If Redis’s licensing is an issue for you, consider Valkey (Linux Foundation fork) or ensure your usage is compliant and budget support.
    • IaC and secrets: Terraform (BSL) → OpenTofu (MPL‑2.0). Vault (BSL) → consider OpenBao (MPL‑2.0) if it meets your needs.
    • Observability: Prometheus (Apache‑2.0); Grafana and related projects are AGPL—fine to use as separate services, but avoid linking AGPL code into proprietary products.
    • Streaming: Apache Kafka core is Apache‑2.0; Confluent adds components under a community license—know the boundary.

    Notes on AGPL and SaaS

    • Using an AGPL application unmodified as a separate service is generally fine; if you modify it and make it available to users over a network, you must offer them the source of your modifications.
    • Do not link AGPL libraries into proprietary code unless you’re willing to open source your code under AGPL.

    Cost/risk decision framework

    • If the component is mission‑critical and migration cost is high: prefer paying for support/subscription with protective contract terms.
    • If there’s a healthy, foundation‑backed alternative or fork: pilot migration and plan a phased cutover.
    • If you pin to an older open version: ensure you have a security plan (backports, WAF, isolation) and a time‑boxed path off the fork.

    Operational tips

    • Keep regular, tested backups in open formats and documented runbooks for restoring to alternative stacks.
    • Track CVEs for pinned components; subscribe to security advisories.
    • Maintain version and license pins in manifests to avoid surprise upgrades.

    This is not legal advice. For any gray areas (especially AGPL, SaaS distribution, or source‑available terms), consult counsel.

  • Introducing HTTP 5xx Help — Fixing the errors that cost you customers

    Introducing HTTP 5xx Help — Fixing the errors that cost you customers

    I’m Madalin, the developer behind HTTP 5xx Help. If you’ve ever stared at a “500 Internal Server Error,” “502 Bad Gateway,” or “504 Gateway Timeout” while customers refresh and revenue ticks away, this site is for you.

    HTTP 5xx Help is a new home for two things:

    • Practical, no-nonsense guides on diagnosing and fixing website and web application failures.
    • Hands-on services to get your site stable, fast, and secure—and keep it that way.

    What “HTTP 5xx” means—and why it matters
    HTTP 5xx errors indicate something went wrong on the server side. Unlike 4xx errors (often user or client issues), 5xx errors are on us—the infrastructure, the app, or the integration points. They’re noisy, frustrating, and often avoidable with the right visibility and practices. My goal is to help small businesses, startups, and solo founders resolve incidents quickly, understand what caused them, and reduce the chance they happen again.
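
    For example, one quick way to gauge whether 5xx responses are reaching users is to count them in the web server’s access log. A sketch, assuming the default Nginx/Apache “combined” log format (where the status code is the 9th whitespace-separated field); adjust the path to your setup:

      # Count 5xx responses by status code in the current access log
      awk '$9 ~ /^5[0-9][0-9]$/ { count[$9]++ } END { for (s in count) print s, count[s] }' /var/log/nginx/access.log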

    Who I work with

    • Small businesses and local services running a website or online booking system
    • SaaS and e-commerce owners who can’t afford downtime
    • Agencies needing a reliable backend and ops partner
    • Developers who want an extra set of eyes on infra, performance, or security

    Services offered
    Whether you need a quick fix or a deep dive, I tailor the engagement to your stack and constraints.

    Incident response and recovery

    • Rapid triage of 5xx errors (500, 502, 503, 504), application crashes, and outages
    • Log and metrics analysis (journalctl, systemd, Nginx/Apache, PHP-FPM, Node, Python, Go)
    • Rollback/blue-green strategies, hotfixes, and stabilization plans

    Root cause analysis and prevention

    • RCA write-ups in plain English with technical detail for your team
    • Resilience improvements (timeouts, retries, circuit breakers, health checks)
    • Runbooks and checklists for future incidents

    Performance and scalability

    • Profiling and tuning for PHP, JavaScript/Node.js, Python, and Go services
    • Database optimization (indexes, slow queries, connection pooling)
    • Caching, CDN integration, and queueing for stability under load

    Infrastructure and operations

    • Linux server configuration and hardening
    • Web server and proxy tuning (Nginx, Apache), TLS, HTTP/2/3
    • Containers and orchestration (Docker, Kubernetes, Incus), CI/CD pipelines
    • Monitoring and observability setup (Prometheus/Grafana, ELK/EFK, OpenTelemetry)
    • Backups, disaster recovery, and cost-conscious cloud strategies

    Security and network hygiene

    • Security baselines for small teams (least privilege, secrets management, SSH hygiene)
    • WAF/CDN configuration (rate limiting, bot rules, DDoS protections)
    • Dependency and vulnerability scanning, patching workflows
    • Practical guidance to reduce risk without slowing your team down

    Web development support

    • Bug fixes, refactoring, and maintainability improvements
    • API reliability (timeouts, rate limits, validation, idempotency)
    • Migrations (shared hosting to VPS/cloud, monolith to containers) without vendor lock-in

    Simple ways to work together

    • Quick Fix: Targeted help to resolve a specific error or outage and stabilize.
    • Deep Dive: Comprehensive audit of your app and infra with a prioritized action plan.
    • Ongoing Care: Proactive monitoring, updates, and monthly improvements.

    You’ll get clear estimates, transparent communication, and documentation you can keep—no black boxes.

    What the blog will cover
    Expect actionable, copy-paste friendly content grounded in real-world troubleshooting:

    • Diagnosing common 5xx errors
      • 500 errors in PHP apps (WordPress, Laravel) and how to surface real stack traces
      • 502/504 in Nginx with PHP-FPM or upstream services
      • Node/Express timeouts and memory leaks
      • Python (Django/Flask) gunicorn/uvicorn misconfigurations
      • Go net/http and reverse proxy gotchas
    • Ops playbooks and checklists
      • First 15 minutes of an incident: what to check before changing anything
      • Safe rollbacks and canary deploys
      • Logging and metrics that actually help at 2 a.m.
    • Performance patterns
      • Caching strategies that won’t corrupt data
      • Database tuning for small teams
      • CDN setup that balances cost and speed
    • Security hygiene
      • TLS done right (OCSP, HSTS, TLSv1.3) without breaking older clients
      • Practical WAF rules and rate limits
      • Secrets management 101
    • Infrastructure how-tos
      • Nginx and Apache configs that survive traffic spikes
      • Docker/Kubernetes/Incus deployment recipes
      • Observability with Prometheus/Grafana and OpenTelemetry
    • Postmortems and lessons learned
      • Anonymized incident reviews with step-by-step fixes and prevention tips

    Why HTTP 5xx Help

    • Full-stack perspective: app code, runtime, OS, network, and cloud
    • Tooling-agnostic: choose what fits your budget and team
    • Plain language: clear explanations, not jargon
    • Knowledge transfer: I leave you with runbooks, dashboards, and next steps

    Join me from day one

    • Subscribe to the blog for guides and templates you can use immediately.
    • Reach out if you’re fighting recurring 5xx errors, slow pages, or security worries.
    • If you’re an agency or developer, I’m happy to collaborate as your backend/ops partner.

    Your website should be fast, reliable, and safe—and your team should feel confident running it. HTTP 5xx Help exists to make that your default state. If you’re ready to spend less time firefighting and more time shipping, let’s talk.
