Rsync and encrypted SSH keys

Unfortunately rsync does not cooperate with $SSH_ASKPASS the way scp and ssh do.

Meanwhile, using scp is absolutely terrible on slow connections where files are only partially changed... Copying updates of a program to 4 servers (including one in China behind the GFW) with scp took ~20 minutes. With rsync, which sends only the deltas of the binary, it takes under 15 seconds.

So rsync is totally worth using.

Let's say that this is the rsync job that I want to run:

rsync --chown app:app -e 'ssh -p 2222' \
  --progress ./my-app \
  app@MYSERVER:/home/app

All you need to do is add a -v to the part that calls ssh, and then grep for the "Sending command" line, like this:

rsync --chown app:app -e 'ssh -v -p 2222' \
  --progress /tmp/foo \
  app@MYSERVER:/home/app 2>&1 \
  | grep 'Sending command'

Enter passphrase for key '/home/me/.ssh/id': 
debug1: Sending command: rsync --server -oge.LsfxCIvu --log-format=X --usermap=\\*:app --groupmap=\\*:app . /home/app

The rsync --server command line from that output is EXACTLY the only thing we'll allow this key to run, enforced via ~/.ssh/authorized_keys on our server.

So generate a new ssh key that we'll use just for rsyncing these files:

ssh-keygen -f rsync_key -t ed25519 -q -N ""

Then, on your server, edit the authorized keys file to add the content of rsync_key.pub. But we'll prepend the command= option and some restrictions to limit the key to this exact rsync command:

command="rsync --server -oge.LsfxCIvu  \
--log-format=X --usermap=\\*:app \
--groupmap=\\*:app . /home/app", \
no-pty,no-agent-forwarding,no-port-forwarding \
ssh-ed25519 AAAACFAKE_KEY_GENERATED_ONLY_FOR_TESTBOk7MpJi9jXfs+           
      ↳ 4lEOvpQFAKE_RSYNC \

Note that authorized keys lines must be on ONE LINE, so when you actually paste it in, it will look like:

command="rsync --server -oge.LsfxCIvu  --log-format=X --usermap=\\*:app --groupmap=\\*:app . /home/app", no-pty,no-agent-forwarding,no-port-forwarding ssh-ed25519 AAAACFAKE_KEY_GENERATED_ONLY_FOR_TESTBOk7MpJi9jXfs+           
      ↳ 4lEOvpQFAKE_RSYNC me@myhostname
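Putting it together, here's a sketch that generates the dedicated key and prints the restricted authorized_keys line. The command="..." string is the one captured above with the -v trick; it's specific to your exact rsync invocation, so capture your own before using this:

```shell
# Generate a dedicated, passphrase-less key and print the restricted
# authorized_keys line for it. The command="..." value below is the
# example captured earlier; substitute the one from your own ssh -v run.
tmp=$(mktemp -d)
ssh-keygen -f "$tmp/rsync_key" -t ed25519 -q -N ""
restrictions='command="rsync --server -oge.LsfxCIvu --log-format=X --usermap=\\*:app --groupmap=\\*:app . /home/app",no-pty,no-agent-forwarding,no-port-forwarding'
line="$restrictions $(cat "$tmp/rsync_key.pub")"
echo "$line"   # paste this (ONE line!) into ~/.ssh/authorized_keys on the server
rm -rf "$tmp"
```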

How slow is Visual Studio Code?

Why does vscode feel so slow? Well, just look at the startup time:

❯ time code-oss && pkill -9 -f code-oss
Warning: 'app' is not in the list of known options, but still passed to Electron/Chromium.
'code-oss' time: 0.089s, cpu: 99%

❯ time nvim -c ":q"
'nvim -c ":q"' time: 0.025s, cpu: 31%

I used vscode for a while... Good programming language support is far more important than UI speed when using an editor.

VSCode did something amazing that had never been done before, though: it created the Language Server Protocol (LSP), an excellent standard that makes it possible to connect language servers to other editors.

Thanks to vscode, it's now possible to get excellent programming language support into neovim!


Embedding dependencies directly in Go (GoLang) projects with //go:embed filename

One of the features that I like the most about Go compared to other languages is how easy distribution of binaries can be.

Go statically links most dependencies by default, meaning that typically just one binary is required for your program to run.

However, most programs have text or images embedded in them, and pull these resources in at runtime. If the exact same resources are always going to be pulled into your program at runtime, you're not saving any memory, disk space, or network usage by opening them on program startup. Instead, you can speed up your whole program and simplify app deployment by directly embedding these resources.

Previously I used Packr to handle this, but now Go has built-in support for embedding that is amazing!

To embed the content of ./hello.txt, just import _ "embed" and use the special //go:embed <filename> comment on the line before your variable is declared:

package content
import _ "embed"

//go:embed hello.txt
var Hello string

Then in other parts of your program, you can just fmt.Println(content.Hello), and you've saved yourself from slowly opening a file on disk and handling errors when the file doesn't load, and your binary no longer depends on these files being present!

You can even use wildcards and embed a whole virtual filesystem:

import "embed"

//go:embed image/*
//go:embed html/index.html
var content embed.FS

and then later serve up that filesystem with:

http.Handle("/static/", http.StripPrefix("/static/", http.FileServer(http.FS(content))))

You may want to refer to the embed package documentation for the details.


How do variables really work in Dockerfiles?

Whether you're referencing a Dockerfile ARG or ENV variable or a regular shell script variable, inside your Dockerfile they're all written as simply $var. For Docker Compose files there's a special $$var syntax for variables that you don't want Docker Compose to interpolate.
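For example, a hypothetical docker-compose.yml fragment (the service and variable names are made up): Compose interpolates $HOST_VAR from the host environment at parse time, while $$CONTAINER_VAR is passed through to the container's shell as $CONTAINER_VAR:

```yaml
services:
  web:
    # $HOST_VAR is substituted by Docker Compose before the container runs;
    # $$CONTAINER_VAR is escaped, so the container's shell sees $CONTAINER_VAR.
    command: sh -c 'echo "host says: $HOST_VAR, container says: $$CONTAINER_VAR"'
```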

Every docker RUN command is a completely separate process/environment, so if you're using a regular shell script variable, setting and getting the variable must all be done within one RUN command.
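You can mimic this outside Docker with two separate /bin/sh -c invocations, which is exactly what two RUN lines become:

```shell
# Each RUN is a fresh /bin/sh -c process, so a variable set in one
# does not survive into the next:
/bin/sh -c 'var=hello'                   # first "RUN": sets var, shell exits
out=$(/bin/sh -c 'echo "var is: $var"')  # second "RUN": var is gone
echo "$out"
```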

If you need to share a shell script variable $var across several RUN commands, it's definitely preferable to either 1) refactor everything into a single RUN command, or 2) set the value in a build-arg and write a wrapper script. If neither of those works, then you can read and write your variable's value to/from files. For example:

RUN echo 1 > /tmp/__var_1
RUN echo `cat /tmp/__var_1`
RUN rm -f /tmp/__var_1

If your RUN command works in your shell but not via your Dockerfile RUN, it's likely a quoting issue.

  1. Docker RUN commands are hard-coded to use /bin/sh -c. On many systems sh will be dash or ash or another very minimal shell with slightly different rules than the shell you typically use.
  2. Use a RUN echo ... command to make sure that any variables your command depends on are in fact set to what you think they're set to.
  3. Try to make the command you're having trouble with the first command in your file so that you can keep running your docker build with --no-cache and still minimize your wait time.
  4. When I was troubleshooting what also ended up being a quoting problem, I should have written my simplified test version like so:
RUN SHELL_PATH=$(head -n 1 /etc/shells) &&\
    useradd --shell $SHELL_PATH --uid 1000 foo

Even if you use zsh as your normal shell, test your RUN commands with /bin/sh -c, since that's how Docker will run them:

/bin/sh -c 'SHELL_PATH=$(head -n 1 /etc/shells) &&\
    useradd --shell $SHELL_PATH --uid 1000 foo'
  5. Dockerfile ARG values will overwrite any shell script variables that you set… For example, say we have this Dockerfile:
FROM alpine:latest

ARG ArgFoo
ENV EnvFoo="Must be Set"

RUN echo "Value of ArgFoo is $ArgFoo"
RUN echo "Value of EnvFoo is $EnvFoo"
RUN ShFoo="awesome" && echo "Value of ShFoo is $ShFoo"
RUN echo "Our \$ShFoo is Gone Again: $ShFoo"

When we run:

$ docker build . --no-cache

Step 4/7 : RUN echo "Value of ArgFoo is $ArgFoo"
Value of ArgFoo is

Step 5/7 : RUN echo "Value of EnvFoo is $EnvFoo"
Value of EnvFoo is Must be Set

Step 6/7 : RUN ShFoo="awesome" && echo "Value of ShFoo is $ShFoo"
Value of ShFoo is awesome

Step 7/7 : RUN echo "Our \$ShFoo is Gone Again: $ShFoo"
Our $ShFoo is Gone Again:

Now let's run it again, but this time with a --build-arg setting $ShFoo. Surprisingly, it's still difficult to get ourselves into trouble. First, update the Dockerfile to try to cause some trouble with $ShFoo:

FROM alpine:latest

RUN echo "$ShFoo" && ShFoo="awesome" && echo "Value of ShFoo is $ShFoo"

$ docker build . --build-arg ShFoo="Difficult to cause trouble" --no-cache

Step 3/3 : RUN echo "$ShFoo" && ShFoo="awesome" && echo "Value of ShFoo is $ShFoo"
Difficult to cause trouble
Value of ShFoo is awesome

So Docker does a surprisingly good job of letting you reference any variable as just $var and having everything just work.

The main difference when writing RUN commands really has more to do with /bin/sh -c than it does with Docker.

For example, I was working on a Dockerfile that would automatically set the permissions of the running Docker container to match the current user. Ultimately the command that worked was like this:

RUN shell=$(grep -E -m 1 \.\*\\b$USER_SHELL\\b /etc/shells) && \
    echo "DUMP: $shell $USER_ID:$GROUP_ID $USER_NAME:$GROUP_NAME" && \
    groupadd --gid $GROUP_ID $GROUP_NAME && \
    useradd --shell $shell --uid $USER_ID --gid $GROUP_ID $USER_NAME

The subshell portion is:

grep -E -m 1 \.\*\\b$USER_SHELL\\b /etc/shells

Yet when executing in a normal shell I only need to use:

grep -E -m 1 '.*\bzsh\b' /etc/shells
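The difference is purely shell quoting, and you can reproduce it locally; a sketch using a temp file in place of /etc/shells (assumes GNU grep for the \b word-boundary extension):

```shell
# Both patterns are identical after shell processing; only the quoting
# style differs. A temp file stands in for /etc/shells here.
tmp=$(mktemp)
printf '/bin/sh\n/bin/bash\n/usr/bin/zsh\n' > "$tmp"
USER_SHELL=zsh
a=$(grep -E -m 1 '.*\bzsh\b' "$tmp")           # interactive-shell quoting
b=$(grep -E -m 1 \.\*\\b$USER_SHELL\\b "$tmp") # Dockerfile RUN style escaping
echo "$a"
echo "$b"
rm -f "$tmp"
```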

When you have a fairly complex shell command inside a Docker RUN, the format above (assign a shell variable, then echo everything so that you can be sure you really have what you think you have) is probably a good way to go if you start running into anything unexpected.


Docker, Gemfile, Gemfile.lock, rbenv, rubygems, bundler or why good planning can often beat evolution

Rbenv → specifies a Ruby version

Ruby → loads some gems that are part of the base Ruby distribution (bundler, rubygems, irb, etc)

Rubygems → loads other gems (for the system or user) → GEM_HOME and GEM_PATH

Bundler → loads other gems (for the current project) → BUNDLE_PATH

So basically Ruby ships with a bunch of gems, rubygems manages most of the gems you install, and bundler manages all of the gems for a specific project, and both rubygems and bundler ship with Ruby.

The initial release of Docker was at PyCon in 2013, but I wonder how much Ruby and the various shades of gem catastrophe helped inspire Docker adoption?

If you're using Docker and thus only running a single set of gems at a time, I think you should simplify the models as much as possible. For a ruby focused docker container, just skip rbenv and similar tools, try to bypass the system gems as much as possible, and try to get bundler / rubygems to force everything into /gems.

You'll want to check what bundler and rubygems detect as the environment via:

bundle env
gem env

Basically, inside of your docker-compose.yml and Dockerfile and perhaps even your dip.yml you want to set

HOME:        /app
GEM_HOME:    /gems
GEM_PATH:    /gems
PATH:        /gems/ruby/<ruby-version>/bin:/gems/bin:/app/bin:$PATH
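As a sketch in Dockerfile form (the /gems/ruby/<ruby-version>/bin path segment is a placeholder for your actual Ruby version):

```dockerfile
# Force bundler and rubygems to keep everything under /gems
ENV HOME=/app \
    GEM_HOME=/gems \
    GEM_PATH=/gems \
    BUNDLE_PATH=/gems
ENV PATH=/gems/ruby/<ruby-version>/bin:/gems/bin:/app/bin:$PATH
```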

Also, I found my local Docker build environment far less annoying once I figured out how to run everything as my local user... Ultimately I need to move to an actual text template system to process all of the required variables (it's pretty lame that the Docker people make permissions that match the developer impossible without your own wrappers), but if it bothers you enough you could always fork Docker.

ARG UID=<your id -u>
ARG GID=<your id -g>
ARG APPUSER=app  # the user name referenced below; pick your own

RUN set -x \
    && addgroup --system --gid $GID $APPUSER || true \
    && adduser  --system --disabled-login --ingroup $APPUSER --no-create-home --home /app --gecos "$APPUSER" --shell /bin/false --uid $UID $APPUSER || true


# Why on earth does Docker's COPY always copy as root instead of the USER you set? Only the Docker people know...
COPY --chown=$APPUSER: Gemfile* /app/

Then in your dip.yml make sure to add run_options for your user...

    description: Open the Bash Shell in Container
    service: web
    command: bash
    run_options: ["no-deps", "user=<your id -u>"]

Setting ActionText Rich Text values for Polymorphic models in Rails 7 in the Console/IRB

Let's say you have a polymorphic model called Notes that you will be assigning to Projects and Authors, and Notes have a title and content.
Once you've edited the app/models/author.rb and app/models/project.rb to both have:

has_many :notes, as: :notable

And inside of app/models/note.rb you have:

belongs_to :notable, polymorphic: true
has_rich_text :content

(also, make sure you've done bin/rails action_text:install) then in rails console you should be able to run:

n = Project.first.notes.create
n.content.body = "<p>Some body content</p>"
n.title = "Note Title"
n.save

So the not totally obvious part is just using PolymorphicModel.your_actiontext_field.body. My first time around I named the ActionText field body, so then I had to write body.body, which was just too ugly, so I changed it to content.

Linux Windows

Sharing Windows OpenSSH keys for Linux Dual Boot

TL/DR: If you run into problems with opensshd permissions on windows, open a PowerShell Administrator prompt and run:

cd C:\ProgramData\ssh

takeown /R /F ssh_host*

icacls ssh_host* /T  /Q /C /RESET

icacls ssh_host* /grant SYSTEM:`(F`)

icacls ssh_host* /grant Administrators:`(F`)

icacls ssh_host* /inheritance:r

icacls ssh_host* /setowner system

Previously I wrote about Installing OpenSSH on Windows. For my workflow, I actually prefer to dual-boot Linux and Windows even though WSL2 has come a long way.

I use Barrier (the open source successor to Synergy) to share my mouse (well, trackball) and keyboard across my workstation and laptop, regardless of whether Linux or Windows is running. I securely share the same underlying keys, and have the DHCP server assign a fixed IP to each MAC address.

It's actually quite tricky to get your OpenSSH keys from Linux's /etc/ssh/ssh_host_*key to C:\ProgramData\ssh\ssh_host_*key because of ACL details. Even though I only edited the files with nvim, which I thought should preserve the icacls status, it doesn't.

Windows ACLs are a bit like SELinux or AppArmor: not a trivial subject, so be prepared if you're going to wade in.

ACLs have inheritance, removed with /inheritance:r

For me, the most confusing thing about icacls is that if you break the permissions in certain ways (for example removing inheritance before you've granted some individual permissions to that file), you can no longer use icacls to fix them! You have to use takeown to re-assert ownership, and then you can start using icacls again.

PS C:\ProgramData\ssh> net stop sshd
The OpenSSH SSH Server service was stopped successfully.

PS C:\ProgramData\ssh> net start sshd
The OpenSSH SSH Server service is starting.
The OpenSSH SSH Server service could not be started.

A system error has occurred.
System error 1067 has occurred.
The process terminated unexpectedly.

Because I am not a master of icacls, I completely hosed my entire C:\ProgramData permissions while trying to fix ssh...

When trying to run sshd directly from the command line rather than via the Windows service infrastructure, I actually got a bit more detail.

PS C:\WINDOWS\system32> sshd -dd
debug2: load_server_config: filename __PROGRAMDATA__\\ssh/sshd_config
debug2: load_server_config: done config len = 158
debug2: parse_server_config: config __PROGRAMDATA__\\ssh/sshd_config len 158
debug1: sshd version OpenSSH_for_Windows_8.1, LibreSSL 3.0.2
debug1: get_passwd: LookupAccountName() failed: 1332.
debug1: Unable to load host key: __PROGRAMDATA__\\ssh/ssh_host_rsa_key
debug1: Unable to load host key: __PROGRAMDATA__\\ssh/ssh_host_ecdsa_key
debug1: Unable to load host key: __PROGRAMDATA__\\ssh/ssh_host_ed25519_key
sshd: no hostkeys available -- exiting.

The PowerShell team provides a guide for exactly what ACL permissions are required for your ssh_host_* files.

After several rounds of shooting myself in the foot with the not very memorable friendliness of icacls, I finally ran:

icacls "C:\ProgramData\ssh" /setowner system
icacls "C:\ProgramData\ssh" /q /c /t /reset
icacls "C:\ProgramData\ssh\ssh_host_*" /remove erwin

After that, I ran sshd -dd and finally was able to get OpenSSH to start up again on the command line without permission errors; however, net start sshd was still failing...

Turns out that running sshd -dd just runs sshd in interactive mode under the currently logged-on user (typically an admin). To simulate SYSTEM actually running sshd as a service, you want to run:

psexec -s sshd.exe -ddd

(Note: psexec is part of Sysinternals, probably easiest to install with Shovel...)

PS C:\WINDOWS\system32> psexec -s sshd.exe -dd

PsExec v2.34 - Execute processes remotely
Copyright (C) 2001-2021 Mark Russinovich
Sysinternals -

debug2: load_server_config: filename PROGRAMDATA\ssh/sshd_config
debug2: load_server_config: done config len = 158
debug2: parse_server_config: config PROGRAMDATA\ssh/sshd_config len 158
debug1: sshd version OpenSSH_for_Windows_8.1, LibreSSL 3.0.2
debug1: get_passwd: LookupAccountName() failed: 1332.
Permissions for 'PROGRAMDATA\ssh/ssh_host_rsa_key' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
debug1: Unable to load host key "PROGRAMDATA\ssh/ssh_host_rsa_key": bad permissions
debug1: Unable to load host key: PROGRAMDATA\ssh/ssh_host_rsa_key
Permissions for 'PROGRAMDATA\ssh/ssh_host_ecdsa_key' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
debug1: Unable to load host key "PROGRAMDATA\ssh/ssh_host_ecdsa_key": bad permissions
debug1: Unable to load host key: PROGRAMDATA\ssh/ssh_host_ecdsa_key
Permissions for 'PROGRAMDATA\ssh/ssh_host_ed25519_key' are too open.

It is required that your private key files are NOT accessible by others.
This private key will be ignored.
debug1: Unable to load host key "PROGRAMDATA\ssh/ssh_host_ed25519_key": bad permissions
debug1: Unable to load host key: PROGRAMDATA\ssh/ssh_host_ed25519_key
sshd: no hostkeys available -- exiting.
sshd.exe exited on XPS with error code 1.

So even though I had fixed the permissions enough for my user to run sshd, it was not enough for SYSTEM to run sshd, which is how net start sshd runs it.

I wasn't even able to cd into C:\ProgramData\ssh, so I started with:

get-acl C: | set-acl C:\ProgramData

Then when I cd into C:\ProgramData\ssh, it turns out that the permissions are in fact way more open than what Windows' sshd (or Linux's sshd, for that matter) permits.

PS C:\ProgramData\ssh> icacls ssh_host_*key
ssh_host_dsa_key NT AUTHORITY\Authenticated Users:(I)(M)
                 NT AUTHORITY\SYSTEM:(I)(F)

ssh_host_ecdsa_key NT AUTHORITY\Authenticated Users:(I)(M)
                   NT AUTHORITY\SYSTEM:(I)(F)

ssh_host_ed25519_key NT AUTHORITY\Authenticated Users:(I)(M)
                     NT AUTHORITY\SYSTEM:(I)(F)

ssh_host_rsa_key NT AUTHORITY\Authenticated Users:(I)(M)
                 NT AUTHORITY\SYSTEM:(I)(F)

Successfully processed 4 files; Failed processing 0 files

So the easy way to do this on Windows is just to focus on one file at a time... We know that sshd complained about ssh_host_rsa_key first, so we'll start there.

icacls .\ssh_host_rsa_key /inheritance:r

So this removed the inheritance ACL from that file, and it's basically impossible to re-add...

Windows' improbable answer for recovering a file whose inheritance you've broken is to use takeown:

takeown /R /F C:\ProgramData\ssh

# Then reset the ACLs to their default values

icacls C:\ProgramData\ssh /T /Q /C /RESET

After takeown runs, you'll be able to fix all the permissions again, though all the permissions will now be messed up and in need of fixing 😉

Now we'll try again:

icacls.exe .\ssh_host_rsa_key

.\ssh_host_rsa_key NT AUTHORITY\Authenticated Users:(I)(M)
                   NT AUTHORITY\SYSTEM:(I)(F)

Now I'm first going to explicitly grant Full Control to SYSTEM and Administrators.

icacls.exe .\ssh_host_rsa_key /grant SYSTEM:`(F`)
icacls.exe .\ssh_host_rsa_key /grant Administrators:`(F`)

Then I'm going to remove inheritance:

icacls.exe .\ssh_host_rsa_key /inheritance:r

However, unfortunately this still doesn't work... Even though this gives us the exact values that the PowerShell team documents as required:

ssh_host_rsa_key BUILTIN\Administrators:(F)

When starting up sshd with psexec we'll still get the error:

Permissions for '__PROGRAMDATA__\\ssh/ssh_host_rsa_key' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
debug1: Unable to load host key "__PROGRAMDATA__\\ssh/ssh_host_rsa_key": bad permissions
debug1: Unable to load host key: __PROGRAMDATA__\\ssh/ssh_host_rsa_key

After far too long of a detour today, I finally solved it with:

cd C:\ProgramData\ssh

takeown /R /F ssh_host*

icacls ssh_host* /T  /Q /C /RESET

icacls ssh_host* /grant SYSTEM:`(F`)

icacls ssh_host* /grant Administrators:`(F`)

icacls ssh_host* /inheritance:r

icacls ssh_host* /setowner system

Unlike Linux, the parent directory permissions don't seem to matter.

Now, net start sshd works perfectly 🙂

Docker Rails

Docker for Rails Developers in 2022

The pragprog team has a great book called "Docker for Rails Developers"; however, it's about three years old now... So here are just some minor notes on how to make everything work right in the Ruby 3 / Rails 7 world.

Docker is primarily designed for deploying finished images of your software to production. Docker can work great in development, but keep in mind that it's not really designed for development, so you'll always have to jump through a few extra hoops. Things like sharing files while you're building an image, and caching to speed up development builds, are extra complicated. If you've ever set up a chroot environment, you know it's not as simple as just including the stuff you directly want to use: there's overhead.

As I'm writing, Rails v7.0.3 with Ruby 3.1.2 is current.
Yet this book is based on Rails v5.2.2 and Ruby 2.6.

Docker makes these kinds of version changes super easy. I recommend just sticking with exactly what the book uses and explicitly setting:

gem install rails -v 5.2.2

If you keep rails on v5.2.2 then all of the rest of the instructions should work, and when you start the rails server with:

docker run -p 3000:3000 ${my-image-id} bin/rails s -b 0.0.0.0

You should get the Rails 5 welcome page...

Your other, also very easy, option is to switch to a Ruby 3.1.2-based image. In your Dockerfile, just use:

FROM ruby:3.1.2
# instead of FROM ruby:2.6

The examples are so simple from a Ruby perspective that you should be fine either way. It's always comforting when your screenshots exactly match the author's, though, so I would stick with Rails 5.2.2 and Ruby 2.6 and work through this short book.

Highly recommend this one though. Changed the way I think about Docker, especially all the docker-compose stuff.

Once you get to Chapter 7, Playing Nice with JavaScript, the Ruby 2.6 instructions really start to break down. Node 8 and Node 10 are deprecated, so you'll need to move to Node 12, and the default packages for that aren't compatible with the Debian base image that Ruby 2.6 is built on.

With Rails 7, we've kind of moved on from Webpacker anyway, but if you really want to do that part, I got it to build fine by using this as my Dockerfile. Basically, set Ruby to 3.1.2, use setup_12.x for Node.js, AND make sure to generate your Rails app with Rails 7.

❯ cat Dockerfile        
FROM ruby:3.1.2

RUN apt-get update -yqq &&\
    apt-get install -yqq --no-install-recommends \

RUN curl -sL | bash -

RUN curl -sS | apt-key add -
RUN echo "deb stable main" | \
    tee /etc/apt/sources.list.d/yarn.list

RUN apt-get update -yqq &&\
    apt-get install -yqq --no-install-recommends \
    apt-utils \
    nodejs \
    yarn

COPY . /usr/src/app

WORKDIR /usr/src/app
RUN bundle install

Webpacker comes up again in Advanced Gem Management and finally in Chapter 13, A Production-Like Playground.

The RSpec/testing chapter doesn't use Node or Webpacker, so you can tear the yarn and nodejs update stuff out of your Dockerfile and get rid of Webpacker in your Gemfile.

FROM ruby:2.6

LABEL maintainer=""
LABEL foo="bar"

RUN apt-get update -yqq &&\
    apt-get install -yqq --no-install-recommends \
    apt-utils \
    nodejs

COPY Gemfile* /usr/src/app/
WORKDIR /usr/src/app
RUN bundle install

COPY . /usr/src/app/

CMD ["bin/rails", "server", "-b", ""]

The automated testing with Capybara/Selenium didn't work for me, but I've done some browser automation with Golang and Chrome, so I didn't want to spend a lot of time debugging it. If you want to do the RSpec/Capybara test examples in 2022, it's probably best to reference a more recent writeup instead.

The author's tutorial approach of working through each step a piece at a time was extremely helpful for me, even as someone fairly familiar with Docker, though only passively as a developer/user, never having been excited enough to deep-dive into Docker itself.


How to completely remove Snaps? Install voidlinux

Ubuntu Linux is on a dark path these days. I used Debian on a lot of servers in the early 2000s, and like RedHat Enterprise, the Debian Stable stuff is just too old... So having a team "stabilize" Debian's Unstable distribution every six months was a fantastic idea and it worked well for a long time.

Unfortunately there is a big disconnect between who the Ubuntu team wants to serve and its regular users. Sadly, Wayland seems to have this same disconnect, but that's another rant.

Snaps are just so awful. They break your themes, break your plugins and thus your workflow, block your access to files, and clutter up your process list and mount list. Containerization is a great idea AS LONG AS you don't reduce functionality... So far, containerization on the desktop is not working. I would much rather use a heavier QubesOS-like model than one that kind of works but breaks tons of advanced use cases.

voidlinux is an awesome alternative to Ubuntu:

  • Gets rid of Systemd, using very simple runit scripts for services
  • Void is not a FORK of anything. It's a new Linux distribution developed from scratch.
  • It's a "Stable rolling release" distro, while Arch just focuses on Bleeding Edge rolling releases (that occasionally blow up in my face)
  • Easily install any Debian package with xdeb
    • I used it for Zoom, Opera and Jumpapp
  • Enjoy the super simple xbps package system
    • sudo xbps-install -Svu to update the cache the first time
    • sudo xbps-install etckeeper to install etckeeper
    • sudo xbps-install xtools to install xlocate
  • xlocate is a very fast replacement for apt-file search, just type xlocate '/xlocate$' it accepts any regex and will help you find any file inside of any xbps package, even the ones you haven't installed.
  • void is a minimal distro without any bloat

To get the install started, you just download the voidlinux ISO, which will startup a voidlinux live image. Open the terminal and run sudo void-installer - the password is "voidlinux".

That's basically it. Once you're booted into your own voidlinux install, install xtools and run xlocate to find whatever you're missing. Here's my list of daily use tools if you need something to help get started.

# Refresh the XBPS-Cache
sudo xbps-install -Svu

# Always start with etckeeper
sudo xbps-install etckeeper

# Install generic utilities
sudo xbps-install htop neofetch python3 sqlite kitty kitty-terminfo 
sudo xbps-install ddcutil mlocate ntp
sudo xbps-install tree ranger tmux screen direnv mosh fzf fzy
sudo xbps-install bind-utils git curl wget whois netcat socat
sudo xbps-install flameshot zsh xtools syncthing cheat sxhkd
sudo xbps-install rofi aws-cli zathura tdrop mpg123 mp3info 
sudo xbps-install xsel xmodmap xev xrandr wmctrl jump
sudo xbps-install binutils tar curl xbps xz bash-completion
sudo xbps-install base-devel

# KDE Plasma Desktop
sudo xbps-install plasma-desktop
sudo xbps-install kde5-baseapps kde-cli-tools qt5-tools
sudo xbps-install kdegraphics-thumbnailers ffmpegthumbs kwallet-pam
sudo xbps-install ksshaskpass kwalletmanager 
sudo xbps-install pavucontrol-qt pavucontrol

Linux X11

Adding a `hyper` modifier key to your keyboard, because shift/ctrl/alt/meta/super aren’t enough

TL/DR: Try this in your ~/.Xmodmap, then log out of X11 and log back in. If you moved to Wayland, hopefully one day we'll get stable APIs for 3rd-party window utilities. Until that happens, I recommend switching back to X11 if you can.

clear mod1
clear mod2
clear mod3
clear mod4
clear mod5

keycode 248 = Hyper_L

add mod1 = Alt_L Alt_R
add mod2 = Num_Lock
add mod3 = Hyper_L
add mod4 = Super_L Super_R
add mod5 = ISO_Level3_Shift Mode_switch

Note that you can't blindly copy code 248... You need to find a key that will work for your keyboard.

Soft fn key on keyboard

Use xev -event keyboard to find the keycode for the key that you want to use (the -event keyboard filter keeps all the mouse events out of the way). While xev is running, focus its window and press the key you want; you'll see the KeyRelease event in your console.

In my example, this is AFTER I've used xmodmap to map this key to Hyper_L... You won't see this until your remapping is successful.

You need to make sure xmodmap will actually start with your window manager, and for me, it works best with a delay. So I have a file ~/.config/autostart/xmodmap.desktop:

[Desktop Entry]
Type=Application
Name=Xmodmap
Comment=Xmodmap Keyboard Modification
Exec=delay_xmodmap

And then delay_xmodmap is a trivial script that just sleeps for a bit before launching.


#!/bin/sh
sleep 10
xmodmap ~/.Xmodmap

Using a hotkey manager like sxhkd, you should be able to map shortcuts to the Hyper key now. Put something like this in your ~/.config/sxhkd/sxhkdrc to test that Hyper is working:

hyper + {a-z}
        notify-send "hyper {a-z}"

Every keyboard I've ever seen has SHIFT and CONTROL keys. Historically, Unix vs Mac vs Windows differ a bit on the other keys:
Alt == Meta (Option in the Mac World)
Windows Key == Super Key == Command Key

So you basically get 3 modifiers on each key, but SHIFT is so commonly used for text that it's dangerous for most of us to use it as a shortcut modifier.

If you're on Linux and your keyboard generates an actual keycode for the key, you can remap it as shown above. (Some fn-labeled keys actually run software in the keyboard itself to send a different keycode to the OS, without the OS knowing that a modifier was involved; those can't be remapped this way.)

X11 modifier keys:

mod1 → Alt (Option on Mac) → Alt_L Alt_R / Meta
mod2 → Num Lock → Num_Lock
mod4 → Windows key / Command key → Super_L Super_R