What are _GTK_FRAME_EXTENTS and how does Gnome Window Sizing work?

Gnome is a trailblazer. Desktop icons? Using the Win/Cmd/Super key as a shortcut key? Getting rid of backwards compatibility? The Gnome developers are always eager to try new things…

There was once a time when the KDE (kwin) folks and the GNOME (mutter) folks appeared to get along quite well, and jointly created several versions of the wm-spec within the FreeDesktop Specifications. Collectively, the FreeDesktop specifications are why application developers can create apps that work across various types of Linux desktops.

Many parts of the spec are frequently updated, but sadly the wm-spec portion hasn’t gotten an update since November 29th, 2011.

The wm-spec defines “Extended Window Manager Hints” (EWMH): messages exchanged between applications, window managers, pagers (navigation aids like virtual desktop selectors), and automation tools such as xdotool and wmctrl.

The wlroots project, which provides the foundation for SwayWM (essentially the Wayland port of the i3 tiling window manager), may eventually become a sort of successor to wm-spec, giving other desktops a common platform to build on top of for Wayland. However, I don’t think GNOME or KDE are building on top of wlroots. The future best case scenario may be code that works for wlroots, KDE, and GNOME… So for anyone who thought using xcb, the C interface directly to the X11 protocol, was difficult: once everything moves to Wayland, there isn’t even a consistent layer to write to. Hopefully Arcan or something else comes along and surprises us, as I think Wayland could be a significant step backwards for the most sophisticated users. That said, the wlroots author seems optimistic about Wayland…

X11 has been updated periodically since its initial release in 1987. Lots of things change over the years… The _MOTIF_-related properties aren’t used very often anymore.

You can read all of the properties for any window you’re interested in with xprop. For example:

# use -len 80 to drop X11's archaic icon info
xprop -len 80

Prior to _GTK_FRAME_EXTENTS, the next most recent window sizing method was to check _NET_FRAME_EXTENTS.


> _NET_FRAME_EXTENTS, left, right, top, bottom, CARDINAL[4]/32
>
> The Window Manager MUST set _NET_FRAME_EXTENTS to the extents of the window’s frame. left, right, top and bottom are widths of the respective borders added by the Window Manager.

— EWMH (Extended Window Manager Hints) spec

GNOME released a new property called _GTK_FRAME_EXTENTS that usually has values close to 25, 25, 25, 25… In 2020, KDE’s KWin window manager was updated to automatically handle _GTK_FRAME_EXTENTS, so if you’re a user or developer who’s trying to understand how this property works on KWin, it’s even more confusing, because KWin takes these values into account and then gets them out of your way. This is the ideal behavior. Congratulations to the KWin team. However, don’t expect to understand this property if you’re using a version of KWin that already effectively hides it.

I had been searching for documentation on how exactly `_GTK_FRAME_EXTENTS` worked for a few weeks, but on KDE I just couldn’t make any sense of it. I even found this video and thought “isn’t that just doing what it was supposed to do? what did it look like before?”

Today, after using Gnome for a while, it finally all came together.

# &! (zsh) to launch and disown from the terminal
gnome-calculator &!
kcalc &!

# give your windows a moment to open
sleep 1

# -x to use a window class, not window title
# 100,100 should now be top left of window (X,Y)
# 300,300 is less than window minimum, ignored
wmctrl -x -r "gnome-Calculator" -e 0,100,100,300,300
wmctrl -x -r "kcalc"            -e 0,100,100,300,300

Ideally, these two windows would now be in the same X, Y position… Just by looking at this image (taken on Gnome on X11), you can probably figure out how _GTK_FRAME_EXTENTS works.

Here’s a screenshot of gnome-calculator with a 10×10 pixel grid in the background…

And when we run xprop on this gnome-calculator window, we get:


Just to make sure that we don’t have any ambiguity, let’s try the same thing on kcalc and see what we get.


🎁 Putting a bow on it…

So GTK creates a large buffer around the edges of every window. I presume this could be used for drop shadows and other eye candy. As a result, the window is actually much larger than you expect it to be. The _GTK_FRAME_EXTENTS values (Left, Right, Top, Bottom) communicate how much of the window should be cropped off when considering things like window “snap to edges”.

Meanwhile, the KDE Kcalc window is using _NET_FRAME_EXTENTS (Left, Right, Top, Bottom), and the actual window that you would use for placement and alignment extends that much BEYOND the size of the window.

_NET_FRAME_EXTENTS tells you how much EXTRA SPACE to ADD to your window size calculations.

_GTK_FRAME_EXTENTS tells you how much EXTRA SPACE to REMOVE from your window size calculations.

Here I wrote an Xlib program that just displays a 3-pixel red frame window in the top left, starting at point 0,0. It’s 100 × 100 pixels.

You can see, the top left of the Kcalc window (x=100, y=100) is just inside of the titlebar.

Meanwhile, the top left of the Gnome-Calculator window using _GTK_FRAME_EXTENTS is actually fully outside of the GTK window.

_GTK_FRAME_EXTENTS should work the same on Wayland, but xprop and wmctrl surely will not since they’re directly based on X11.

Hopefully Wayland will soon have command line tools like wmctrl and xdotool, and hotkey tools like sxhkd, that can work on any Linux or BSD desktop environment on top of Wayland…


Adding a bit of security to certain SFTP connections

So be sure when you step, Step with care and great tact. And remember that life’s A Great Balancing Act.

Theodor Seuss Geisel

Security, like the rest of life, is a great balancing act. A rock on the bottom of the ocean is about the most secure thing in the world, but it’s not terribly useful… There are always tradeoffs.

Generally, FTP should be avoided, because with normal FTP your passwords are sent over the wire in plaintext and are vulnerable to replay attacks. All of your data is also sent over the wire in plaintext, so adiós to any sense of confidentiality for the data you send that way. There are still times when FTP makes sense to use, though, most especially over very lossy, high-latency networks, when transferring content that is already public, like static website resources. If you’re forced to use FTP, and you will do so repeatedly, make sure to choose a client and server that at least encrypt the authentication.

On any network connection that isn’t the equivalent of institutional Grade D beef, you should be using SFTP (SSH File Transfer Protocol) or FTPS (FTP over TLS/SSL).

Sometimes you have a single user account that you want to share with multiple devices. For example, maybe you want your tablet to connect via SFTP to your Linux box, but you don’t want that key to have full shell access.

Sometimes you should use the “full solution” of creating a separate user account and giving that user a locked down, perhaps chrooted shell, but then you need to figure out the user/group permissions of the files in question, how to maintain those permissions over time, and whether those permission changes will be compatible with the rest of the required workflow.

There’s a little-used, not widely understood feature of SSH’s ~/.ssh/authorized_keys file: you can prefix each key with a forced command:

# authorized_keys "command" example...
command="/usr/local/bin/file-transfer-only" ssh-rsa AAAA...
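If you want to lock the key down further, authorized_keys options can be combined on the same line; a sketch (the key material is truncated placeholder text):

```
# Force the command AND disable PTY allocation and forwarding for this key
command="/usr/local/bin/file-transfer-only",no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-rsa AAAA...
```

Newer OpenSSH also supports the single `restrict` option, which disables everything by default and lets you re-enable only what you need.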

Then for /usr/local/bin/file-transfer-only you could put something like:


    #!/bin/sh
    # exec replaces this shell with the SFTP server;
    # the echo below only runs if the exec itself fails
    exec /usr/lib/openssh/sftp-server
    echo "Access Denied"

Most Android “SFTP File Transfer” interfaces like Solid Explorer or the Fx File Manager use /usr/lib/openssh/sftp-server to get the SFTP transfer started, while most command line users will use scp or rsync to move files around.

This can be a nice in-between solution. It’s far better than sharing an SSH key between multiple devices (because you can easily revoke the key if any device or user is gone, you keep accountability, etc.) but also far simpler than a whole new account.

It ultimately depends on what you’re doing.

You could even experiment with doing a chroot in this script to further tie things down, though keep in mind the chroot environment usually needs several things mounted for kernel access.
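If you go the separate-account route instead, sshd itself can handle the chroot without a wrapper script; a minimal sshd_config sketch, assuming a dedicated group named sftponly (the group name is illustrative):

```
# /etc/ssh/sshd_config
Match Group sftponly
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```

Note that a ChrootDirectory target must be owned by root and not writable by anyone else, which is why the chroot often points at a root-owned directory containing writable subdirectories.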

If you have any problems getting this set up, just turn on debugging on your SSHD server and watch its log; you should be able to figure it out if you read carefully.

# /etc/ssh/sshd_config
# Change LogLevel
LogLevel DEBUG

# Save and Quit
systemctl reload ssh

On Debian, the SSHD logs will be under:


On RHEL, they’re under /var/log/secure.

If it’s neither of those, you might be stuck using journalctl to access the sshd logs.

If you turned on DEBUG level logging, don’t forget to turn it back off.

Enjoy your Linux adventures!


Vim one line file headers and footers

For more than 20 years I’ve been coming across files that have headers or footers like:

$OpenBSD: sshd_config.5,v 1.45 2005/09/21 23:36:54 djm Exp$

/* vim: set ts=8 sw=4 tw=0 noet : */

Yet I’ve never found exactly where or how either of these tags is generated… Perhaps it just never got my attention enough. Well, all of that changes now.

The rcs package (GNU RCS revision control system), first released in 1991, includes a command called ident. The ident command manual page starts:

       ident - identify RCS keyword strings in files

       ident [ -q ] [ -V ] [ file ... ]

       ident  searches for all instances of the pattern $keyword: text $ in the named files or, if no files are named, the standard input.

Not many people use rcs these days, but it’s also possible to do this with other version managers. The key is to know that a “$…$” string is called an “Ident String”.

#include <stdio.h>
static char const rcsid[] =
  "$Id: f.c,v 5.4 1993/11/09 17:40:15 eggert Exp $";
int main() { return printf("%s\n", rcsid) == EOF; }

If you want to setup ident strings with git, you can use:

$ echo '*.txt ident' >> .gitattributes
$ echo '$Id$' > test.txt
$ git add .gitattributes test.txt
$ git commit -m "test"

$ rm test.txt
$ git checkout -- test.txt
$ cat test.txt

Note that before 2020, the default capability of “$Id$” strings in git was quite limited. Fortunately, “filters” have been extended to provide a lot more information, so you should be able to replicate ident in git now.

See this helpful note on Stack Overflow summarizing the 2020 git ident related changes.

What about the Vim footers?

Oftentimes you come across a file with something like the following at the top or bottom:

// vim: noai:ts=4:sw=4
/* vim: noai:ts=4:sw=4
/* vim: set noai ts=4 sw=4: */
/* vim: set fdm=expr fde=getline(v\:lnum)=~'{'?'>1'\:'1': */

These are called “vim modelines”. The modeline can be within the first or last 5 lines of your file.

Best to only trust modelines on files where you trust the authors (like yourself).

Unfortunately, modelines have been abused before. There have been at least 5 CVEs related to modeline exploits over the years, so you’re probably better off using “EditorConfig” for shared projects.
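For reference, a rough .editorconfig equivalent of the `vim: noai:ts=4:sw=4` modeline above might look like this (a sketch; EditorConfig has no direct counterpart to noai):

```
# .editorconfig
root = true

[*]
indent_style = tab
indent_size = 4
tab_width = 4
```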


How fast is Assembly? How slow are C and Go?

TL;DR: Jump to the Test Results

I started my tech life working with FoxPro and the BASIC language… I wrote in SuperBase, Visual Basic, Perl, Ruby and Shell Scripts. All very high level languages. I always wanted to learn what was truly happening inside of the machine, but I was either too intimidated or too busy to learn low level languages. I moved into management, sales and business, and would code a bit over the years, but I had become a generalist.

A few years back I started getting back into writing software again. I looked at Perl 6. That’s interesting, but why? I looked at some of the newer JavaScript and TypeScript, but none of it satisfied my curiosity. I wanted to know what’s really going on. Something lower level and faster!

I started writing a fair amount of Go. It’s a pretty nice language. Very easy. If I were writing a large app with several developers, I think Go may be a good choice. Everything is simple, compact, and standard. The deeper I got into Go, the more I saw that I still didn’t understand what was happening in the machine. I was reading something like “Advanced Go” that was talking about how to connect Go to C, and I thought, why am I doing this instead of just writing in C?

I had written a little bit of C in school, but never got very good at it. At that time, the money I could make solving other people’s problems in high level languages was more interesting to me than understanding what was actually happening inside the machine.
Head First C – Chinese Edition… I actually read the English version, though this one might be more fun

I found a great C book… Head First C. I wish my college had used this as the C textbook instead of the dry, sleep inducing tome the professors chose. I started writing C and enjoying it, but I had a lot of problems solving some of the memory errors, especially at first. You could say that I hit these problems because I didn’t have very much experience in C. I would say I hit them because I had finally gotten closer to the processor, but I still didn’t know what’s going on.

So I thought to myself: a simple C introduction wasn’t difficult at all. It was fun and enjoyable. What if there’s something similar for Assembly?
Assembly Language Step-By-Step

So I found Assembly Language: Step-By-Step. The author’s idea is to teach Assembly Language as a first programming language. It’s an interesting idea.


Back to the blog post title… How fast is Assembly, and how slow are Go and C? The thing to remember with ASM is that each line of your assembly code corresponds directly to the object code you feed the processor.

SECTION .data			; init data

	HelloMsg: db "Hello world!",10 ;
	HelloLen: equ $-HelloMsg ;

SECTION .text			; code section

global _start  ; ld needs this to find the entry point

_start:
	mov eax, 4        ; sys_write syscall
	mov ebx, 1        ; stdout (file 1)
	mov ecx,HelloMsg  ; pass offset of message
	mov edx,HelloLen  ; pass length of message
	int 80H           ; syscall

	mov eax, 1        ; exit syscall
	mov ebx, 0        ; return 0
	int 80H           ; syscall 

Assuming the code is in a file named asm-hello.asm, run the following to build and test (assuming you’re on a 64-bit machine):

nasm -f elf64 asm-hello.asm && \
ld -o asm-hello asm-hello.o && \
./asm-hello

If you’ve never written assembler before, and you’re not quite sure what any of that means… eax ebx ecx and edx are registers. You trigger the syscall with interrupt 80H (code 128). If you want a better explanation of how and why it works, read the book linked above.

Performance Testing with perf

The interesting thing about this is that I don’t think you could make a hello world program significantly shorter.

sudo perf stat -r 1000 -d ./asm-hello

For this tiny asm example, you can see it took 0.00029 seconds to execute.

Total execution time: 0.000296 seconds

For the C version, I set -Ofast for maximum optimization. On such a simple program I don’t think the optimization made any difference.

C version with -Ofast, Total execution time: 0.000498 seconds
Go version, Total execution time: 0.000984 seconds
Perl version, Total execution time: 0.00135 seconds

Of course, it’s very possible to write awful ASM and either never get it to work, or get it working so badly that it’s even slower than any of the other options. It’s just interesting to see how different the speed can be. If you do have a part of a Go program that’s too slow, it makes a lot of sense to move that part to C. If you have a part of a C program that’s too slow, it makes a lot of sense to move that part to ASM.

This clarifies for me why old programs on old hardware were so fast! Because RAM and CPU were both extremely expensive, lots of things were written in ASM.

Now that RAM and CPU prices are so low, most RAM segments and CPU cycles are thrown away on things like Electron…