rycwo: Notes on graphics programming (Zola feed, https://rycwo.dev/atom.xml)

Forge Dev Log 3: IMGUI 2D Canvas
2020-09-09, https://rycwo.dev/blog/forge-dev-log-003-imgui-canvas/

<p>Day three! What a day! I finally managed to move past my writer's block — if you
can call it that — and make good progress on the canvas layout for Forge's IMGUI
as I had been wanting to. It's not a whole ton of work, but to paraphrase a good
friend of mine: when it comes to personal projects, the "initial inertia" is
the hardest bit. So I'll pat myself on the back and take this small win.</p>
<p>Rather than starting with a wall of text, I figured it would be much more
interesting to see the layout in action. Apologies in advance for the jittery
mouse movement, some evenings I work best off-desk so I was stuck with the
trackpad for navigation!</p>
<p><video src="https://files.rycwo.dev/borann_d9173fd.mp4" muted controls></video></p>
<h1 id="canvas-components">Canvas components</h1>
<p>Although it may not be obvious, the node graph pretty much demonstrates all of
the IMGUI framework's basic systems working together. The dummy nodes, the rects
with the coral outlines, are positioned using <code>set_next_gui_position()</code> as
demonstrated in <a href="https://rycwo.dev/blog/forge-dev-log-002-imgui-intro/">Day 2</a>. The canvas layout itself is special in that it
does not dictate the precise position of the elements. Instead, it manages a
transformation matrix that transforms any elements drawn within its scope. The
matrix components are manipulated by mouse input.</p>
<p>I introduced the <code>begin_gui_transform()</code> and <code>begin_gui_clip()</code> functions in
order to set the active transformation matrix and clip rectangle respectively.
Both of these functions simply push data into a buffer which is then taken into
account in the shaders used to draw the GUI elements. Forge's IMGUI supports any
number of shaders to allow for complex GUI rendering if ever necessary, and it
is up to the developer to ensure each shader respects the transform/clip buffers.
More on the GUI shaders another day.</p>
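<p>In the meantime, here is a rough, self-contained sketch of what I mean by
"pushing data into a buffer": each transform is appended to a per-frame buffer,
and elements record the index of the entry active when they are drawn so the
shader can look the matrix up later. The struct and names below are purely my
illustration, not the actual Forge source.</p>
<pre data-lang="c" class="language-c "><code class="language-c" data-lang="c">#include &lt;assert.h&gt;
#include &lt;stdio.h&gt;

enum { MAX_TRANSFORMS = 64 };

/* Hypothetical per-frame buffer of 3x3 transforms (row-major). */
struct transform_buffer {
	float matrices[MAX_TRANSFORMS][9];
	int count;
	int active; /* index the next drawn element will reference */
};

static int begin_transform(struct transform_buffer *buf, float const m[9]) {
	for (int i = 0; i &lt; 9; ++i)
		buf-&gt;matrices[buf-&gt;count][i] = m[i];
	buf-&gt;active = buf-&gt;count;
	return buf-&gt;count++;
}

int main(void) {
	struct transform_buffer buf = {0};
	float const identity[9] = {1, 0, 0, 0, 1, 0, 0, 0, 1};
	float const pan[9] = {1, 0, 50, 0, 1, 20, 0, 0, 1};
	begin_transform(&amp;buf, identity);
	int const idx = begin_transform(&amp;buf, pan);
	/* An element drawn now would store idx, and the shader would fetch
	 * matrices[idx] when transforming the element's vertices. */
	assert(idx == 1 &amp;&amp; buf.active == 1 &amp;&amp; buf.count == 2);
	printf("active transform index: %d\n", buf.active);
	return 0;
}
</code></pre>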
<p>The implementation of the dots on the grid was something I mulled over for
quite a while. In case you're not already aware, I tend to overthink solutions
to simple problems. It gets even worse when it's for a personal project I care
about! Ultimately the decision came down to whether I should render all the dots
on a single rect via a custom fragment shader, or whether I should push a
handful of GUI elements using the existing circle primitive. Bearing in mind that
a custom shader would mean an additional draw call just for the grid, and that
almost all of its fragments would be transparent, I opted to push each dot as a
separate GUI element. Thankfully, buffer memory both on the CPU and the GPU is
pre-allocated in a <a href="https://en.wikipedia.org/wiki/Slab_allocation">slab-like</a> manner on IMGUI initialization so we
can feel confident in the rapid creation of many GUI elements. Once again, I
will defer any lengthier discussion of IMGUI's memory allocation patterns to
another time.</p>
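<p>To show the arithmetic involved (this is a hedged sketch, not the real Forge
API), generating one dot per grid intersection inside the visible rect boils
down to two nested loops. In the real IMGUI each position would become a circle
primitive pushed into the pre-allocated element buffer.</p>
<pre data-lang="c" class="language-c "><code class="language-c" data-lang="c">#include &lt;assert.h&gt;
#include &lt;stdio.h&gt;

/* Record the center of one dot per grid intersection inside the rect
 * (x0, y0)-(x1, y1); returns the total dot count. */
static int grid_dots(float x0, float y0, float x1, float y1, float spacing,
                     float *out_xy, int max_dots) {
	int n = 0;
	for (float y = y0; y &lt;= y1; y += spacing)
		for (float x = x0; x &lt;= x1; x += spacing) {
			if (n &lt; max_dots) {
				out_xy[2 * n] = x;
				out_xy[2 * n + 1] = y;
			}
			++n;
		}
	return n;
}

int main(void) {
	float xy[2 * 64];
	/* 100x100 rect with 25px spacing: 5 grid lines each way. */
	int const n = grid_dots(0.0f, 0.0f, 100.0f, 100.0f, 25.0f, xy, 64);
	assert(n == 25);
	printf("%d dots\n", n);
	return 0;
}
</code></pre>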
<h1 id="pan-zoom-transformation">Pan/zoom transformation</h1>
<p>Surprisingly, the implementation I struggled with most was the zoom behavior of
the layout. The pan was trivial, it was the zoom specifically that caught me off
guard. My mistake was in trying to approach the solution solely by thinking of
the elements being transformed within the canvas. While mathematically it does
boil down to doing just that, it helped immensely to frame the problem as a 2D
camera problem. With this in mind, a couple of points became clear:</p>
<ul>
<li>Scaling should be done about a pivot centered on the canvas container.</li>
<li>Scaling towards the mouse position is a common behavior. We need to translate
the view as we are scaling so that at some maximum scale the mouse position
is at the center of the view.</li>
</ul>
<p>It then became trivial to build a suitable transformation matrix \(C\).</p>
<p>\[C = S_pSS_p^{-1}T\]</p>
<p>Where \(S_p\) is the scale pivot, and \(S\) and \(T\) are the scale and
translation respectively. The key insight here is that unlike a regular object
transformation, we want to first translate, <strong>then</strong> scale, so that the view
behaves like a camera zooming in/out of objects that have already been moved in
space.</p>
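<p>As a self-contained sketch (using throwaway 3x3 types rather than Forge's
actual math library), the composition of \(C\) might look like this. The test
in <code>main</code> checks the two properties above: the pivot stays fixed under
zoom, and the pan is applied before the scale.</p>
<pre data-lang="c" class="language-c "><code class="language-c" data-lang="c">#include &lt;assert.h&gt;
#include &lt;math.h&gt;
#include &lt;stdio.h&gt;

/* Row-major 3x3 matrix for 2D homogeneous transforms. */
typedef struct { float m[9]; } mat3;

static mat3 mat3_translate(float x, float y) {
	return (mat3){{1, 0, x, 0, 1, y, 0, 0, 1}};
}

static mat3 mat3_scale(float s) {
	return (mat3){{s, 0, 0, 0, s, 0, 0, 0, 1}};
}

static mat3 mat3_mul(mat3 a, mat3 b) {
	mat3 r;
	for (int i = 0; i &lt; 3; ++i)
		for (int j = 0; j &lt; 3; ++j) {
			float sum = 0.0f;
			for (int k = 0; k &lt; 3; ++k)
				sum += a.m[i * 3 + k] * b.m[k * 3 + j];
			r.m[i * 3 + j] = sum;
		}
	return r;
}

/* Apply to a 2D point (w = 1). */
static void mat3_apply(mat3 a, float x, float y, float *ox, float *oy) {
	*ox = a.m[0] * x + a.m[1] * y + a.m[2];
	*oy = a.m[3] * x + a.m[4] * y + a.m[5];
}

/* C = S_p * S * S_p^-1 * T: translate (pan) first, then scale about the
 * pivot, so the view behaves like a camera. */
static mat3 canvas_transform(float pivot_x, float pivot_y, float scale,
                             float pan_x, float pan_y) {
	mat3 c = mat3_translate(pivot_x, pivot_y);           /* S_p    */
	c = mat3_mul(c, mat3_scale(scale));                  /* S      */
	c = mat3_mul(c, mat3_translate(-pivot_x, -pivot_y)); /* S_p^-1 */
	c = mat3_mul(c, mat3_translate(pan_x, pan_y));       /* T      */
	return c;
}

int main(void) {
	float x, y;
	/* With no pan, the pivot itself must stay fixed under any zoom. */
	mat3 c = canvas_transform(400.0f, 300.0f, 2.0f, 0.0f, 0.0f);
	mat3_apply(c, 400.0f, 300.0f, &amp;x, &amp;y);
	assert(fabsf(x - 400.0f) &lt; 1e-4f &amp;&amp; fabsf(y - 300.0f) &lt; 1e-4f);

	/* The pan happens before the scale: a 10px pan under 2x zoom
	 * moves the canvas origin by 20px on screen. */
	c = canvas_transform(0.0f, 0.0f, 2.0f, 10.0f, 0.0f);
	mat3_apply(c, 0.0f, 0.0f, &amp;x, &amp;y);
	assert(fabsf(x - 20.0f) &lt; 1e-4f);
	printf("ok\n");
	return 0;
}
</code></pre>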
<h1 id="what-next">What next?</h1>
<p>Maybe the solution was pretty obvious; in any case, it works well and I am
happy. There are some other things, however, that are still bugging me hard.
You may have noticed the outlines on the rects are looking a bit ugly: they're
missing a certain pixel-perfect crispness. The dots on the grid, in the
meantime, are <em>supposed</em> to be beautiful anti-aliased circles. It is pretty
clear I will be knee-deep in shader programming for the next few days. I have my
sights set on nailing the shaders for the primitives so I will not have to visit
them again for a long while.</p>
Forge Dev Log 2: Intro To IMGUI
2020-09-06, https://rycwo.dev/blog/forge-dev-log-002-imgui-intro/

<p>With the first day behind me, I look onwards to the path ahead of me, and I am
reminded of why I'd lost momentum in the first place. I had gotten wrapped up
in a painfully basic problem — trying to create a pan/zoom behavior on the
canvas layout the Immediate-Mode Graphical User Interface (a.k.a. IMGUI)
provides.</p>
<h1 id="imgui-design">IMGUI design</h1>
<p>To understand the layout problem, I feel it worthwhile for today's post to
briefly explain the design principles driving the development of Forge's IMGUI
framework. Hopefully, it will be somewhat interesting to someone out there!</p>
<p>An IMGUI, as I understand it, aims to first and foremost <strong>eliminate state
synchronization</strong>. State synchronization is often required in GUI frameworks
where the state of any represented data is cached as part of the displayed
interface objects. With Qt, for example, a signal is emitted when an underlying
data model is updated, so that the view of the data knows it needs to update
itself visually to reflect the changes. In principle this all sounds pretty
sane, but in practice, it often becomes cumbersome to manage the communication
between the data and its corresponding view (or view<strong>s</strong> as there may
potentially be multiple simultaneously active views of the same data). For the
uninitiated, there is <a href="http://www.johno.se/book/imgui.html">plenty</a> <a href="https://caseymuratori.com/blog_0001">of</a>
<a href="http://sol.gfxile.net/imgui/">existing</a> <a href="https://github.com/ocornut/imgui/wiki#About-the-IMGUI-paradigm">literature</a> that addresses the benefits of
designing a GUI framework in an immediate-mode manner. Let's not waste precious
words trying to convince you that IMGUI is the way to go — although it really
is.</p>
<p>The design of Forge's IMGUI is loosely inspired by existing solutions such as
<a href="https://github.com/ocornut/imgui">Dear ImGui</a> and <a href="https://github.com/Immediate-Mode-UI/Nuklear">Nuklear</a>. With additional pointers from
the <a href="https://ourmachinery.com/post/one-draw-call-ui/">OurMachinery blog</a>, my goal is to minimize the
performance impact of the GUI within interactive applications and reserve
processing power for actual business operations. The GUI should be snappy,
simple, and most importantly: be able to run on low-end "toasters" to
accommodate users with hardware limitations, yet also reward users who have
powerful workstations. This is a sticking point for Forge in general.</p>
<p>Unlike Dear ImGui, the API is intended to be more atomic and robust, providing
UI elements that can be composed together in any desired layout, hopefully
reaching a similar level of expressiveness as HTML and CSS. That being said,
there are some decided "limitations" — such as not supporting overlapping
translucency — that I will perhaps uncover another time. In any case, the
"limitations" generally discourage what I consider to be bad UI design
practices, so they are acceptable.</p>
<h1 id="api-examples">API examples</h1>
<p>As mentioned, the API aims to provide atomic elements that together build up
more complex behaviors. Creating a box element with a button on top of it is
relatively simple.</p>
<pre data-lang="c" class="language-c "><code class="language-c" data-lang="c">gui_rect(
context,
&(struct gui_style){.color = {0.4, 0.4, 0.4, 1.0}},
(vec2){400.0, 400.0});
struct gui_button_style const button_style = {
.style[GUI_BUTTON_STATE_NONE] = {.color = {1.0, 0.0, 0.0, 1.0}},
.style[GUI_BUTTON_STATE_HOVER] = {.color = {0.0, 1.0, 0.0, 1.0}},
.style[GUI_BUTTON_STATE_ACTIVE] = {.color = {0.0, 0.0, 1.0, 1.0}},
.states = GUI_BUTTON_STYLE_HOVER | GUI_BUTTON_STYLE_ACTIVE
};
bool const pressed = gui_button(
context,
hash_string("my_button"), // uint64_t unique id
&button_style,
(vec2){256.0, 64.0},
0);
if (pressed) {
// Do something!
}
</code></pre>
<p><video src="https://files.rycwo.dev/borann_imgui_ex_01.mp4" muted controls></video></p>
<p>Bear in mind most of the verbosity currently lies in styling the GUI elements
and can be vastly minimized with presets or external configuration. This is an
approach to API design I feel to be quite empowering. By keeping the foundation
flexible, developers are free to impose restrictions at higher levels of the
API. I also particularly like that GUI elements do not have to live in some
prescribed "root" window, unlike other IMGUI libraries. What you type is what
you get!</p>
<p>Elements can be laid out with a few basic functions.
<code>push_gui_layout_container()</code>, for example, sets the space in which subsequent
elements will be positioned. This can be ignored by setting the <code>absolute</code>
parameter to <code>true</code> in any of the layout functions.</p>
<pre data-lang="c" class="language-c "><code class="language-c" data-lang="c">push_gui_layout_container(
context,
(vec4){32.0, 32.0, 800.0, 600.0}, // x, y, width, height
false);
// The next GUI element will be positioned relative to the container
// origin (32, 32).
set_next_gui_position(context, (vec2){0.0, 0.0}, false);
gui_rect(...);
set_next_gui_position(
context,
(vec2){20.0 * cosf(time), 20.0 * sinf(time)},
false);
gui_button(...);
pop_gui_layout_container(context);
</code></pre>
<p><video src="https://files.rycwo.dev/borann_imgui_ex_02.mp4" muted controls></video></p>
<p>Any fancier layout functionality just does a bit of arithmetic to figure out
sizes and spacing for upcoming elements. Currently on the roadmap are a
flex/flow layout — not unlike CSS flexbox — and a grid layout.</p>
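<p>As a hedged sketch of that arithmetic (again, not the real API, just an
illustration), a minimal row-flow layout only needs to accumulate child widths
and a uniform gap to produce each child's x offset within the container:</p>
<pre data-lang="c" class="language-c "><code class="language-c" data-lang="c">#include &lt;assert.h&gt;
#include &lt;stdio.h&gt;

/* Lay out fixed-width children left to right with a uniform gap,
 * writing each child's x offset within the container to out_x. */
static void flow_layout_row(float const *widths, int count, float gap,
                            float *out_x) {
	float x = 0.0f;
	for (int i = 0; i &lt; count; ++i) {
		out_x[i] = x;
		x += widths[i] + gap;
	}
}

int main(void) {
	float const widths[3] = {100.0f, 50.0f, 200.0f};
	float x[3];
	flow_layout_row(widths, 3, 8.0f, x);
	assert(x[0] == 0.0f &amp;&amp; x[1] == 108.0f &amp;&amp; x[2] == 166.0f);
	printf("offsets: %.0f %.0f %.0f\n", x[0], x[1], x[2]);
	return 0;
}
</code></pre>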
<p>Aside from the absolute basics, I have been working on a canvas layout. The
intention is to use this as a generic basis for node graphs with pan/zoom
functionality. In the spirit of keeping these posts manageable, however,
the implementation details for the canvas shall be deferred to another day.
Until then, happy hacking!</p>
Forge Dev Log 1: A Gentle Start
2020-09-01, https://rycwo.dev/blog/forge-dev-log-001-gentle-start/

<p>The other night, prompted by my partner's recommendation, we watched
<a href="https://en.wikipedia.org/wiki/Julie_%26_Julia"><em>Julie & Julia</em></a>, dir. Nora Ephron. I thoroughly enjoyed the
film — although I was expecting as much considering the narrative revolved
around food! I particularly found joy in Meryl Streep's exaggerated depiction of
American chef Julia Child (if you haven't already seen a clip of her on the
internet, <a href="https://www.youtube.com/watch?v=M9AITdJBTnQ">enjoy</a>). The biopic is
loosely based on an undertaking of the writer Julie Powell. Her mission was to
combine the two passions of her life: writing and cooking. She set out to cook
all 524 recipes from Julia Child's book <em>Mastering the Art of French Cooking</em>,
in a mere 365 days, churning out daily blog posts to document her progress. What
becomes clear, however, is that her mission simply acted as a vehicle for her to
"self-therapize" through blogging, the cooking itself serving as a
prompt/conversation starter for the day's blog post.</p>
<p>I have come away from this film re-invigorated and inspired to continue a
personal project that had gone stale for the past couple of months. In addition
to chipping away on said project, I had already been considering documenting my
thought process via this blog. So the motivations are clear, and I need say no
more. With the green flag waving, fingers screeching across the keyboard, we
crawl into day 1 of many!</p>
<h1 id="the-project">The Project</h1>
<p>Let's start slow. What is this project I have been working on? One Christmas a
few years back I received a physical copy of what is often referred to as the
"rendering bible", <a href="https://www.pbrt.org"><em>Physically Based Rendering</em></a>. From that day on, I
made it a personal goal to completely consume all the delicious nuggets of
knowledge from the book and <strong>write one of the best open source production
renderers in the computer graphics industry</strong>. Ambitious, I know. Yet here
we are anyway. The very first working prototype is designated
<a href="https://git.sr.ht/~rycwo/redplanet/commit/70a9356f67b4e3bf248c9f2bad15cfd500d209b2"><strong>redplanet</strong></a>. I long for the day when I can announce its
release.</p>
<p>A couple of years later, the project had spun itself into a much larger web
than I had initially anticipated. With my interests darting here and there, I
eventually convinced myself it is an absolute necessity to build a foundation
library with basic data structures, allocators, fixed-size linear algebra types,
a bare-bones immediate-mode GUI framework, and a concurrent in-memory data
model. Before I knew it, I was writing the beginnings of a modular game engine,
and it is this spin-off in particular that I have made the most progress on.</p>
<p>The library is called <a href="https://git.sr.ht/~rycwo/forge"><strong>Forge</strong></a>. It currently possesses at most 20% of
the features I mentioned. Here is the immediate-mode GUI in action.</p>
<p><video src="https://files.rycwo.dev/borann_db82840.mp4" muted controls></video></p>
<p>So there we have it. The short term plan is to fix a few existing bugs with the
GUI framework, implement text rendering, and demo some fancier widgets. Only
then will I be able to re-focus my efforts onto redplanet, hopefully utilizing
relevant parts of Forge as I go.</p>
<p>I don't plan on working on Forge or redplanet every single day. This is meant to
take years after all. A few days a week seems like a respectable and reasonable
goal, and on the days I make progress, you can expect a blog post to go with it.
Today we make a gentle start; this blog post is my first step across the
starting line.</p>
C++ const-correctness in 2020
2020-02-11, https://rycwo.dev/blog/cpp-const-correctness/

<p>It has been a while since I last wrote something for the blog. This should only
take about a minute.</p>
<p>I don't know if writing C++ used to be fun. But it sure isn't any more.</p>
<blockquote>
<p>A <a href="https://en.cppreference.com/w/cpp/language/constexpr"><code>constexpr</code></a> specifier used in an object declaration or
non-static member function (until C++14) implies <code>const</code>.</p>
</blockquote>
<p>In other words, a <code>constexpr</code> non-static member function is not implicitly
<code>const</code> from C++14 onwards.</p>
<pre data-lang="cpp" class="language-cpp "><code class="language-cpp" data-lang="cpp">class Foo {
constexpr Bar const&
do_something(std::shared_ptr<const Bar> const& bar, float const baz) const;
// ...
};
</code></pre>
<p>Welcome to the year 2020 ladies and gentlemen. Can't wait to write a ton of C.</p>
<hr />
<p>PS, this is all <code>const</code> "tongue-in-cheek". Take with a <code>const</code> grain of salt.</p>
<p>PPS, know how to make it <code>const</code> worse? Please <a href="mailto:rycwo@posteo.net">let me
know</a>.</p>
Six Months Into 2019, An Update
2019-06-09, https://rycwo.dev/blog/first-six-months-2019-update/

<p>This first half of the year has been productive and inspirational.</p>
<p>To start with, enough time has been invested into this blog to make it into a
platform I am motivated to progressively add to as I learn and discover new
software development-related tidbits in my life! I <a href="https://rycwo.dev/archive/nixos-series-005-dev-env/">wrapped up the series on my
NixOS setup</a> and tried my best to <a href="https://rycwo.dev/archive/rust-wasm-interpolation/">explain multivariate
interpolation in a little WebAssembly demo I wrote in Rust</a>.</p>
<p>In February, I made my first sizable<sup class="footnote-reference"><a href="#1">1</a></sup> contribution to open source software!
The project I contributed to is <a href="https://sourcehut.org/">Sourcehut</a>. Its <a href="https://drewdevault.com/2019/03/04/sourcehut-design.html">"brutalist"
design</a> and Unix-style, modular components make it appealing
to me. More importantly, it is <a href="https://en.wikipedia.org/wiki/Free_and_open-source_software">FLOSS</a>. I recently learned more about
what software "freedom" really means and why it is important. In particular, I
came to the understanding that contributing to open source software is
worthwhile because you're doing it for yourself, and if you're fortunate, your
changes also end-up benefiting the whole community.<sup class="footnote-reference"><a href="#2">2</a></sup> Sourcehut is being
developed/spearheaded by Drew DeVault who works on a number of projects but is
perhaps best known for his work on <a href="https://github.com/swaywm/sway">sway</a> and <a href="https://git.sr.ht/~sircmpwn/aerc/">aerc</a>. There has been some
<a href="https://lists.sr.ht/~sircmpwn/sr.ht-discuss/%20%3CCACKoU+6LSiZyZfuN2rQJNYmHTsbXGfaQ8GS3OCpzd6+w4kJ31A%40mail.gmail.com%3E">discussion</a> of the next bit of work I hope to
contribute, although I have yet to find time outside of work to make any
significant progress on it.</p>
<p>Without ever having had a formal education in computer science, I had always
felt lacking in terms of algorithmic thinking and analysis. This March I started
taking a self-paced course, <a href="https://lagunita.stanford.edu/courses/course-v1:Engineering+Algorithms1+SelfPaced/about"><em>Algorithms: Design and Analysis</em></a> by
Tim Roughgarden, in order to fill in the gaps. It has been admittedly quite
demanding to juggle alongside work, and despite the exercises being impractical
for production, the principles and concepts that I picked up have already proven
to be beneficial in my day-to-day work. I'm happy to say that I finished the
course at the end of May (which has finally opened up some time for me to catch
up on my blog)!</p>
<hr />
<p>That summarizes the first half of the year, which brings me to this quotation.
A recurring theme of the past six months has been <strong>finding the courage to
challenge existing methodologies, whilst motivating inventiveness using
foundational truths.</strong></p>
<blockquote>
<p>They did not know it was impossible so they did it. — Mark Twain</p>
</blockquote>
<p>Shortly after joining <a href="https://www.moving-picture.com/">MPC</a> to write software, whenever I learned that
something in my work was considered "bad practice", I made it my mission to
"learn" from my mistake and avoid said something like the plague. Over time I
had accumulated a hoard of so many different rules that I was writing
"<a href="https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpriseEdition">effective enterprise programs</a>", but had lost the ability to
creatively and elegantly solve problems; instead, I was chasing best practices
and playbooks. As soon as I realized this, I left my job to seek constructive
criticism from other people, people I had never worked with, who had the
capacity to question the choices made in my code, and force me to validate and
re-evaluate the rules I had built-up.</p>
<p>Risk aversion comes from experience, and that experience is essential to
enable one to see multiple, complex systems as a bigger picture, which is why
seniority is valued in the workplace. Yet, sometimes we need to review these
experiences after they have settled, to see if they are still justified. Finding
the balance between hardened experience and progression is often difficult, and
I strongly believe that taking our time to find that sweet spot is necessary in
order to maintain and produce healthy software.</p>
<p>Further to my point, it has come to my attention that the software community is
"progressing" towards an increasingly self-destructive and unsustainable future.
With software being an important extension to our regular human limits, this is
pretty concerning! I will save myself some words and let a couple of great
lectures from <a href="https://en.wikipedia.org/wiki/Jonathan_Blow">Jonathan Blow</a> and <a href="https://caseymuratori.com/about">Casey Muratori</a> do the
talking.</p>
<ul>
<li><a href="https://www.youtube.com/watch?v=pW-SOdj4Kkk">Preventing the Collapse of Civilization</a></li>
<li><a href="https://www.youtube.com/watch?v=kZRE7HIO3vk">The Thirty Million Line Problem</a></li>
</ul>
<p>It is from these talks that I feel compelled to share my personal goals and
aspirations for the short term (the next six months) and the long-term
foreseeable future. There is also something about making your goals public
that, for one, makes them more official, and also puts a healthy amount of
accountability and expectation on yourself.</p>
<p>As a broad, overarching goal, I would like to revisit some of the core
directions that the computer graphics industry appears to be heading. What are
the implications of studios adopting standardized scene description by way of
<a href="https://graphics.pixar.com/usd/docs/index.html">USD</a>? How much of an artist's workflow is dictated purely (and naively) by
the programs they use? Some of those programs encompass a number of worryingly
dated practices that have never been critically reviewed, most likely only seen
as "how things have to be done."</p>
<p>I would like to begin my research by taking a look at rendering. I am hoping
that my little-to-no experience with rendering will bring novel ideas that
prompt more members of the industry to reconsider the current state and future
of computer graphics software.</p>
<p>Along those lines, I have only a couple of big points on my agenda for the next six
months. The first is to consolidate my understanding of Linear Algebra by taking
the course, <a href="http://ulaff.net/"><em>Linear Algebra: Foundations to Frontiers</em></a>, provided
by edX. The second is to begin to write a <a href="http://www.pbr-book.org/">physically based renderer</a>
in C.<sup class="footnote-reference"><a href="#3">3</a></sup> The project will be entirely open source, you can follow my progress
<a href="https://git.sr.ht/~rycwo/mars">here</a>.</p>
<p>All-in-all, this is clearly a long undertaking that will require effort and
patience, but as I mentioned earlier, taking one's time is an essential
ingredient in making meaningful change. J.R.R. Tolkien, for example, spent time
in the order of decades working on the universe in which he would unravel many
timeless characters and wonderful stories.</p>
<p>I would love to hear any thoughts and opinions by
<a href="mailto:rycwo@posteo.net">email</a>. Thanks for reading and happy hacking!</p>
<div class="footnote-definition" id="1"><sup class="footnote-definition-label">1</sup>
<p>That is, not counting small configuration/documentation patches. Albeit
this patch in particular wasn't large, it still took me a week to put
together!</p>
</div>
<div class="footnote-definition" id="2"><sup class="footnote-definition-label">2</sup>
<p>If you want something done, do it yourself. See <a href="https://drewdevault.com/2019/01/01/Patches-welcome.html">Drew's
post</a> for a
better and more in-depth explanation.</p>
</div>
<div class="footnote-definition" id="3"><sup class="footnote-definition-label">3</sup>
<p>Why in C? This seems directly counter to my previous points about
challenging existing assumptions! In short, C has withstood, and still
withstands, the test of time. Most of the software that I use <strong>reliably</strong>
on a daily basis is written in C. It is a stable language that provided
nearly everything that other languages do, many years before they came to
be. I would use Go, but garbage collection is a dealbreaker for
performance. C++ has become a monster of cruft, and Rust tries to solve
many of those problems whilst hiding too many of the interesting-yet-gory
details. Ultimately, C is arguably simpler and more powerful than any of
the existing options.</p>
</div>
Making Rainbows With Rust And WebAssembly
2019-03-03, https://rycwo.dev/archive/rust-wasm-interpolation/

<style>
figure {
display: flex;
flex-flow: row wrap;
justify-content: center;
}
figure img {
margin: 1px;
}
</style>
<h1 id="a-millennial-spiral">A millennial spiral</h1>
<p>If you saw my previous post (the post has now been removed), you will have seen
that I took a small break from programming to play around with designing a new
icon for the blog. As always, the design process wasn't so straightforward. In
the spirit of a developer raised in an age full of technological distractions, I
somehow spiraled completely off-topic and wound up examining methods for
<a href="https://en.wikipedia.org/wiki/Multivariate_interpolation">multivariate interpolation</a>. My thinking was that I needed a more
advanced gradient/blending tool than what photo editing programs currently
provide, in order to make a pretty gradient for the icon background!</p>
<p>For those of you who just want to mess about with pretty gradients, you can
<a href="https://rycwo.gitlab.io/colormap/">play around with the demo</a>.</p>
<h1 id="building-the-demo">Building the demo</h1>
<p>To reiterate in more detail: I wanted to create a tool which, given a number of
colors positioned arbitrarily in 2D space, would be able to interpolate between
them and produce interesting gradients/maps.</p>
<p>Over the past year or so, most of my free time has gone into learning
<a href="https://en.wikipedia.org/wiki/Rust_(programming_language)">Rust</a>. Coming from a C++ background, many of its language features feel
fun, fresh, and intuitive to use. But I'm not here to advocate for the language
itself, instead, I wanted to take a look at how I used Rust to build a tiny
WebGL application. Although tangential, this experience turned out to be a
worthwhile foray into the state of Rust and WebAssembly (WASM) at the end of
2018.</p>
<p>I spent over a month intermittently messing about with <a href="https://github.com/rustwasm/wasm-bindgen">wasm-bindgen</a> and
trying to make sense of the SIGGRAPH 2010 course, <a href="https://dl.acm.org/citation.cfm?id=1900522"><em>Scattered Data Interpolation
for Computer Graphics</em></a> by Ken Anjyo et al. I managed to build a demo
using <a href="https://microsoft.github.io/monaco-editor/index.html">Monaco</a> to edit JSON input, whilst displaying the result in a
WebGL quad. Here are just some of the abstract masterpieces I generated.</p>
<figure>
<img src="artifact.01.png"/>
<img src="artifact.02.png"/>
<img src="artifact.03.png"/>
<img src="artifact.04.png"/>
<img src="artifact.05.png"/>
<img src="artifact.06.png"/>
</figure>
<p>Some combinations create some pretty wacky results! I've placed the JSON snippet
for the bottom-right example on <a href="https://gitlab.com/snippets/1831765">GitLab</a> for those that are
interested (try toggling the <code>visualize_fields</code> option for even more wacky
goodness). Admittedly, the more regular-looking results are slightly
underwhelming - not unlike a bunch of radial gradients in Photoshop slapped
on-top of one another - but, I learned a lot in the process, and that's what
really matters after all!</p>
<p>Within the month of hacking: about a third of the time was spent trying to
simply get started; another third was better invested in iterating and
experimenting with interpolation algorithms; the last third was unfortunately
spent wrangling with CSS.</p>
<h2 id="burden-of-web-development">Burden of web development</h2>
<p>Web development has always been a particularly impenetrable region of software
for me. This is probably in part my own fault as I haven't devoted enough of my
resources to the craft. Yet through this experience, it became clear to me that
the unstable landscape of trends and practices can be incredibly discouraging to
newcomers. Learning anything within this ecosystem can be hugely unrewarding
because, by the same time next year, your knowledge will most likely be
out-of-date.</p>
<p>In the computer graphics scene, on the other hand, developers spend less time
iterating on the infrastructure side, and more time on core algorithms and
mathematics. Development and change naturally occur at a slower rate, and
cognitive load is pushed into a different area where building robust,
mathematical tooling is valued over chasing trends.</p>
<p>Some reasons why I chose to use <a href="https://github.com/rustwasm/wasm-bindgen">wasm-bindgen</a>:</p>
<ul>
<li>Minimal JavaScript required to build a front-end. More on this in a bit.</li>
<li>Can be embedded on the blog as something that readers can play with.</li>
<li>WebAssembly should give me the performance necessary for computer graphics.<sup class="footnote-reference"><a href="#1">1</a></sup></li>
<li>Traditional native windowing/GUI is a pain; HTML and CSS provide expressive
freedom.</li>
</ul>
<p>With wasm-bindgen I was able to solve most of my problem in Rust, whilst only a
thin layer required any web-specific knowledge. Unfortunately, the amount of
configuration needed to get a nice webpack/node set-up isn't minimal. I sit in
an awkward spot where I would like to understand as much of what I am building
as possible, but also not have to care as much for the parts that I am not
interested in. It would be a huge improvement for developers like me if tools
like webpack/node were more "batteries-included" and required less
configuration to get up and running.</p>
<p><a href="https://github.com/rustwasm/wasm-pack">wasm-pack</a> is one solution to my problem, but it creates a lot of magic
around the "getting started" process. Instead of <strong>simplifying</strong> the set-up, it
<strong>automates</strong> it, which are two different approaches in my opinion, although I
understand this might not be so much an issue with the Rust + WASM ecosystem
itself.</p>
<p>Aside from my complaints, wasm-bindgen itself was simple and easy to use. It
boiled down to tagging whatever I wanted to expose on the JavaScript side with a
single macro, <a href="https://rustwasm.github.io/wasm-bindgen/api/wasm_bindgen/index.html"><code>#[wasm_bindgen]</code></a>.</p>
<pre data-lang="rust" class="language-rust "><code class="language-rust" data-lang="rust">use wasm_bindgen::prelude::*;
#[wasm_bindgen]
pub struct ColorMapDisplay {}
#[wasm_bindgen]
impl ColorMapDisplay {
#[wasm_bindgen(constructor)]
pub fn new() -> ColorMapDisplay {}
/// Initialize WebGL stuff.
pub fn init(&mut self) -> Result<(), JsValue> {}
/// Update the color mapping according to the given JSON configuration.
pub fn update(&mut self, json: &str) -> Result<(), JsValue> {}
/// Draw the result of the interpolated colors using WebGL.
pub fn draw(&self) -> Result<(), JsValue> {}
// Etc.
}
</code></pre>
<ul>
<li>To make things easier for myself, I made a helper struct that would handle
talking to JavaScript and make all the GL calls.</li>
<li>By tagging the <code>impl</code> block, bindings are automagically generated for any
public methods.</li>
<li>Using the <code>wasm-bindgen</code> command-line tool, we spit out a bunch of TypeScript
(?) files that enable us to import the WASM module in JavaScript.</li>
</ul>
<pre data-lang="js" class="language-js "><code class="language-js" data-lang="js">const rust = import("./colormap");
rust.then(module => {
// Create the display helper.
const display = new module.ColorMapDisplay();
// Etc.
}).catch(console.error);
</code></pre>
<p>An interesting question I had yet to test was how <a href="https://doc.rust-lang.org/std/ops/trait.Drop.html"><code>Drop</code></a> is
managed through the bindings. My assumption would be that it behaves as you
would expect, in that when the struct goes out-of-scope in JavaScript land, it
is dropped accordingly.</p>
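<p>On the pure Rust side, at least, <code>Drop</code> runs deterministically when a value
leaves scope. Here is a minimal sanity-check sketch (no wasm-bindgen involved;
<code>Resource</code> is a made-up stand-in for the display struct):</p>

```rust
use std::rc::Rc;

struct Resource {
    // Shared handle that lets us observe when the struct is dropped.
    marker: Rc<()>,
}

impl Drop for Resource {
    fn drop(&mut self) {
        // Clean-up (e.g. deleting GL buffers) would go here.
        println!("Resource dropped");
    }
}

fn main() {
    let marker = Rc::new(());
    {
        let _res = Resource { marker: Rc::clone(&marker) };
        // Both `marker` and `_res` hold a reference here.
        assert_eq!(Rc::strong_count(&marker), 2);
    } // `_res` goes out of scope; Drop runs immediately.
    assert_eq!(Rc::strong_count(&marker), 1);
}
```

<p>Whether the generated JavaScript bindings trigger this automatically is the
open question; as far as I can tell, the generated classes expose a <code>free()</code>
method that must be called explicitly to release the underlying WASM memory.</p>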
<p>On another note, as image processing algorithms are often embarrassingly
parallel - considering cases where the value of each pixel can be computed
independently of the others - I was hoping to be able to jam <a href="https://github.com/rayon-rs/rayon">Rayon</a> into
the demo at the end and get a "free" performance boost. Unfortunately, it
seemed as though the crate was not yet WASM-able, although I imagine it should
be coming soon.<sup class="footnote-reference"><a href="#2">2</a></sup></p>
<h2 id="scattered-data-interpolation">Scattered data interpolation</h2>
<p>Let's take a look at the mathematics behind the interpolation algorithms to
understand the output images better. Quick thumbs-up to the <a href="https://www.nalgebra.org/">nalgebra</a>
developers for their great work on building a Rust equivalent of <a href="http://eigen.tuxfamily.org/index.php?title=Main_Page">Eigen</a>.</p>
<p>Plain and simple <a href="https://en.wikipedia.org/wiki/Linear_interpolation">linear interpolation</a> is an effective method
for evenly blending between two points of data. There are plenty of other forms
of interpolation, from <a href="https://en.wikipedia.org/wiki/Smoothstep">smoothstep</a> to <a href="https://en.wikipedia.org/wiki/Cubic_Hermite_spline">cubic
spline</a>, which uses four data points. In the end, interpolation
is used to compute unknown values within the range of a discrete set of known
points of data. More often than not, this boils down to a weighted average of
said data points.</p>
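<p>The two simplest cases can be made concrete with a quick sketch (illustrative
stand-alone code, not taken from the demo):</p>

```rust
/// Linear interpolation between `a` and `b` for `t` in [0, 1].
fn lerp(a: f64, b: f64, t: f64) -> f64 {
    a + (b - a) * t
}

/// Smoothstep: like lerp, but eases in and out at the endpoints.
fn smoothstep(a: f64, b: f64, t: f64) -> f64 {
    let t = (3.0 - 2.0 * t) * t * t; // Hermite polynomial 3t^2 - 2t^3.
    a + (b - a) * t
}

fn main() {
    // Both methods agree at the endpoints and the midpoint...
    assert_eq!(lerp(0.0, 10.0, 0.5), 5.0);
    assert_eq!(smoothstep(0.0, 10.0, 0.5), 5.0);
    // ...but smoothstep approaches the endpoints more gently.
    assert!(smoothstep(0.0, 10.0, 0.25) < lerp(0.0, 10.0, 0.25));
}
```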
<p>What I hadn't realized was that many methods exist for interpolating
\(N\) data points. These methods have endless applications: from
obvious tasks such as image reconstruction or building topographic maps from
sparse data sets, to more creative uses, such as <a href="https://dl.acm.org/citation.cfm?id=344862">pose space
deformation</a> for smoother shape interpolation in skeleton-driven deformation.</p>
<p><a href="https://en.wikipedia.org/wiki/Inverse_distance_weighting">Shepard's method</a>, or inverse distance weighting (IDW), is a common method
of multivariate interpolation. The value at an unknown point is computed as a
weighted average of the known values, where each <strong>weight increases as the
distance to the corresponding known point decreases</strong>. This has been
implemented in the <a href="https://rycwo.gitlab.io/colormap/">demo</a>; you can give it a try using the following
option.</p>
<pre data-lang="json" class="language-json "><code class="language-json" data-lang="json">"algorithm": {
"Shepard": {
"power": 2.0,
"epsilon": 0.001
}
}
</code></pre>
<p>One of the interesting properties of the algorithm is that as the power
increases, the colors more closely approximate a <a href="https://en.wikipedia.org/wiki/Voronoi_diagram">Voronoi diagram</a>.</p>
<figure>
<img src="rbf_gaussian_example.png"/>
</figure>
<p>As the power increases, a side effect of the algorithm also becomes more
obvious: if the queried point sits on top of one of the known data points, the
distance is zero and the weight computation divides by zero. The naive way to
work around this is to increase the size of the epsilon value used when checking
for zero-length distances.</p>
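<p>Putting the pieces together, Shepard's method with the <code>power</code> and
<code>epsilon</code> options from the configuration above can be sketched as follows (an
illustrative stand-alone snippet; the demo's actual implementation may differ):</p>

```rust
/// Shepard's method in R^2: a weighted average of the known `values`,
/// with weights 1 / d^power. `epsilon` guards the division when the
/// query lands on (or very near) a known point.
fn shepard(
    points: &[(f64, f64)],
    values: &[f64],
    query: (f64, f64),
    power: f64,
    epsilon: f64,
) -> f64 {
    let mut num = 0.0;
    let mut den = 0.0;
    for (&(x, y), &v) in points.iter().zip(values) {
        let d = ((query.0 - x).powi(2) + (query.1 - y).powi(2)).sqrt();
        if d < epsilon {
            // Query coincides with a known point: return its value directly.
            return v;
        }
        let w = 1.0 / d.powf(power);
        num += w * v;
        den += w;
    }
    num / den
}

fn main() {
    let points = [(0.0, 0.0), (1.0, 0.0)];
    let values = [0.0, 1.0];
    // Midway between the two points, both weights are equal.
    let mid = shepard(&points, &values, (0.5, 0.0), 2.0, 0.001);
    assert!((mid - 0.5).abs() < 1e-9);
    // On top of a known point, the epsilon check kicks in.
    assert_eq!(shepard(&points, &values, (0.0, 0.0), 2.0, 0.001), 0.0);
}
```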
<figure>
<img src="rbf_gaussian_example.png"/>
</figure>
<h3 id="radial-basis-functions-rbf">Radial basis functions (RBF)</h3>
<p>Radial basis functions, as far as I understand them (which isn't very far;
thankfully the literature on RBFs lets me implement them without a full
understanding), make up a method of interpolation where the interpolated
"surface" or "result" is a combination of basis functions. Wikipedia's
mind-blowing explanation helped me wrap my head around this.</p>
<blockquote>
<p>Every continuous function in the function space can be represented as a linear combination of
basis functions, just as every vector in a vector space can be represented as a linear
combination of basis vectors.</p>
</blockquote>
<p>One of the key benefits of using RBFs is their ability to generate values
<strong>outside</strong> of the range of known values.<sup class="footnote-reference"><a href="#3">3</a></sup> This produces much smoother
functions than IDW. Another nice property of the method is that it can be
evaluated for nearly anything which can define a distance function. In our
case, we use the Euclidean distance between positions in \(\mathbb{R}^2\).</p>
<p>The SIGGRAPH paper gives a decent description of the implementation, but for my
own understanding I thought I would regurgitate it with a bit more specificity.
We can essentially express the problem as a matrix multiplication in the form
\(AX = B\), where:</p>
<ul>
<li>\(A\) is a square matrix filled with the results of evaluating an RBF kernel
for the distances between every pair of known positions, i.e., its size
grows quadratically with the number of defined data points.</li>
<li>\(X\) is a column vector of (unknown) weights.</li>
<li>\(B\) is a column vector of the known values.</li>
</ul>
<p>Solving the RBF becomes as simple as solving for the unknown vector \(X\). We can
imagine this as, <em>"What are the weights \(X\), so that the RBF will produce
the known values \(B\) when evaluating it at each of their respective known
positions?"</em></p>
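<p>To make the setup concrete, here is a deliberately tiny, hand-rolled sketch
with just two data points and a Gaussian kernel, where the 2&times;2 system can be
solved by Cramer's rule instead of reaching for nalgebra (illustrative only; the
kernel choice and <code>eps</code> naming are my own, not the demo's):</p>

```rust
/// Gaussian RBF kernel; `eps` controls the width of each basis function.
fn gaussian(r: f64, eps: f64) -> f64 {
    (-(eps * r).powi(2)).exp()
}

fn main() {
    let eps = 1.0;
    // Two known data points on a line and their known values B.
    let xs = [0.0_f64, 1.0];
    let b = [0.0_f64, 1.0];

    // A is symmetric: A[i][j] = kernel(|x_i - x_j|).
    let a00 = gaussian(0.0, eps); // = 1.0
    let a01 = gaussian((xs[0] - xs[1]).abs(), eps);

    // Solve the 2x2 system A X = B by Cramer's rule.
    let det = a00 * a00 - a01 * a01;
    let w0 = (b[0] * a00 - b[1] * a01) / det;
    let w1 = (b[1] * a00 - b[0] * a01) / det;

    // Evaluating the RBF at a known position must reproduce its value.
    let eval = |x: f64| {
        w0 * gaussian((x - xs[0]).abs(), eps) + w1 * gaussian((x - xs[1]).abs(), eps)
    };
    assert!((eval(0.0) - b[0]).abs() < 1e-9);
    assert!((eval(1.0) - b[1]).abs() < 1e-9);
}
```

<p>With more data points the same pattern holds; the matrix just grows, which is
where a proper LU or Cholesky solve from a linear algebra crate earns its keep.</p>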
<p>Each value in \(B\) is expected to be a single scalar value. If values have
more components, like our tuples of red, green, and blue, each component is
evaluated independently. Thankfully, nalgebra handles this for us, <a href="https://www.nalgebra.org/decompositions_and_lapack/">solving the
systems simultaneously</a>. When we enable the <code>visualize_fields</code>
option in the <a href="https://rycwo.gitlab.io/colormap/">demo</a>, we can see that each color component is computed
independently of the others.</p>
<p>Here we have a comparison of evaluating two different RBF kernels. Gaussian on
the left, and inverse multiquadric on the right.</p>
<figure>
<img src="rbf_gaussian_example.png"/>
<img src="rbf_invmultiquadric_example.png"/>
</figure>
<p>It was quite interesting to compare the two interpolation methods I implemented.
Below we have IDW (left) and RBF (right). Each data point is very clearly
visible with IDW, whereas RBF produces a much smoother gradient. With RBF, the
influence of each data point falls off more gradually, so the boundaries between
data points become less well-defined, e.g., we can see the green from the bottom
right point "bleeding" more into the red of the bottom left.</p>
<figure>
<img src="idw_example.png"/>
<img src="rbf_example.png"/>
</figure>
<p>The differences between the two methods are made more obvious when we enable
visualization of the field for each color component.</p>
<figure>
<img src="idw_fields_example.png"/>
<img src="rbf_fields_example.png"/>
</figure>
<p>You can play around with the demo <a href="https://rycwo.gitlab.io/colormap/"><strong>here</strong></a>! This is implemented almost
entirely in WASM with the exception of the JSON editor which uses
<a href="https://microsoft.github.io/monaco-editor/index.html">Monaco</a>. The source is available on <a href="https://gitlab.com/rycwo/colormap">GitLab</a>.</p>
<hr />
<p>Building this tool has been an educational experience. Being able to bind Rust
to WASM so easily is incredibly exciting and is sure to be a big part of the
near future - a number of the <a href="https://readrust.net/rust-2019/">2019 wishlist blog posts</a> point in
that direction after all.</p>
<p>I have <a href="http://spatialslur.com/">Dave Reeves</a> to thank for the help in understanding the
mathematics behind the interpolation methods. I personally still have a ways to
go before I'm comfortable reading an equation-filled paper. An interesting
afterthought would be to play with interpolation of colors in different <a href="https://en.wikipedia.org/wiki/Color_space">color
spaces</a>. It turns out RGB doesn't necessarily blend very well and
can produce some rather muddy colors (red-green in particular).</p>
<p>To get started with Rust and WASM yourself, I would suggest reading the <a href="https://rustwasm.github.io/book/introduction.html"><em>Rust
and WebAssembly</em> book</a>. For the "learn-by-example" inclined, take
advantage of the plethora of examples available in the wasm-bindgen
<a href="https://rustwasm.github.io/wasm-bindgen/">guide</a>.</p>
<h1 id="helpful-links">Helpful links</h1>
<ul>
<li><a href="http://www.scholarpedia.org/article/Radial_basis_function">Scholarpedia on RBFs</a></li>
<li><a href="https://pro.arcgis.com/en/pro-app/help/analysis/geostatistical-analyst/how-radial-basis-functions-work.htm">RBFs for Geostatistical Analysis</a></li>
</ul>
<div class="footnote-definition" id="1"><sup class="footnote-definition-label">1</sup>
<p>Traditionally speaking, web apps fail to meet the necessary requirements
without leveraging native hardware (think <a href="https://facebook.github.io/react-native/">React Native</a>).</p>
</div>
<div class="footnote-definition" id="2"><sup class="footnote-definition-label">2</sup>
<p>Some complications with "spawning Web Workers" mean Rayon can't be
compiled to WASM just yet. <a href="https://rustwasm.github.io/2018/10/24/multithreading-rust-and-wasm.html">But the WG is getting
there!</a></p>
</div>
<div class="footnote-definition" id="3"><sup class="footnote-definition-label">3</sup>
<p>In fact, I've had to <a href="https://gitlab.com/rycwo/colormap/blob/c63173bb010af4bf8f5f2c32c52589bfdcd1bca2/src/lib.rs#L464">clamp interpolated values to the standard 8-bit
color range</a> in the demo as there were cases where the
function would produce values over 1.0, which would overflow when casting to
<code>u8</code> color values.</p>
</div>
Diving Into NixOS (Part 4): Dev Workflow With Nix Shell2019-02-16T00:00:00+00:002019-02-16T00:00:00+00:00https://rycwo.dev/archive/nixos-series-005-dev-env/<h1 id="workflow-improvements-with-nix">Workflow improvements with Nix</h1>
<p>Development workflows are always interesting to examine and are oftentimes
beneficial to revise for your own sake once in a while. The more
efficient/correct your workflow, the less of a grind it is for you to get
started and iterate, that much is clear. At times, investments made in this
direction also facilitate adoption by fellow developers (think
<a href="https://www.vagrantup.com/">Vagrant</a>). In all honesty, though, the amount of time I've invested in
adopting a flow that works well for me is probably disproportionate to
the time I've spent actually hacking away at something meaningful.</p>
<p>Just by using Nix/NixOS you already opt-in to some of many <a href="https://rycwo.dev/archive/nixos-series-003-configuration-primer/">awesome</a>
<a href="https://rycwo.dev/archive/nixos-series-004-configuring-xinit/">features</a>. Conveniently, the buck doesn't stop there. In this series,
we have yet to examine <a href="https://nixos.org/nix/manual/#sec-nix-shell"><code>nix-shell</code></a>, another powerful tool in the
Nix toolset, and how this can be coupled with <a href="https://direnv.net/">direnv</a> to make
development truly seamless and buttery smooth.</p>
<h1 id="smooth-sailing">Smooth sailing</h1>
<p>There are a whole number of factors that contribute to the time it takes to
set-up a new project or begin hacking on an existing project. My primary
concerns were:</p>
<ul>
<li>to be able to easily fetch dependencies for multi-language projects; and</li>
<li>to isolate these development environments to that specific project, keeping it
local instead of polluting the global system state.</li>
</ul>
<p>Both of these goals are met using <code>nix-shell</code>.</p>
<h2 id="nix-shell"><code>nix-shell</code></h2>
<p><a href="https://nixos.org/nix/manual/#sec-nix-shell"><code>nix-shell</code></a> is perhaps one of the most valuable tools in the Nix
toolset. In a sentence, it allows users to enter a sub-shell with specific Nix
packages set-up in a sort-of virtual environment. It is similar to <code>nix-build</code>
in that it receives a file defining a package as input, except it does not
execute the build, stopping beforehand and entering the environment to be used
for building the package. Through this, it's possible to define a single
<code>default.nix</code> for a piece of software you are developing, and use this to
package the project for Nix, whilst also using the same file to build an
environment suitable for the development of said project.</p>
<p>Here is an example <code>shell.nix</code> file I use for my Rust projects.</p>
<pre data-lang="nix" class="language-nix "><code class="language-nix" data-lang="nix">{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
buildInputs = with pkgs; [
latest.rustChannels.stable.rust
];
RUST_BACKTRACE = 0;
}
</code></pre>
<p>It's really just that simple. There are a couple of things to keep in mind.</p>
<ul>
<li>In the example, I'm using the Rust overlay in order to have specific versions
of tools like <a href="https://doc.rust-lang.org/cargo/"><code>cargo</code></a> available in my environment.</li>
<li>Key/value pairs in the set passed to <a href="https://nixos.org/nixpkgs/manual/#sec-pkgs-mkShell"><code>pkgs.mkShell</code></a> are
exported by Nix as environment variables (in fact, this piggybacks straight
off of <a href="https://nixos.org/nixpkgs/manual/#sec-using-stdenv"><code>stdenv.mkDerivation</code></a>, see
<a href="https://github.com/NixOS/nixpkgs/pull/30975">NixOS/nixpkgs#30975</a>). This is convenient for us to export
variables to the nix-shell.</li>
</ul>
<p>Here is another example of a quick <a href="https://git.sr.ht/~rycwo/ispc-bench">C++ hack I was working on last
year</a>.</p>
<pre data-lang="nix" class="language-nix "><code class="language-nix" data-lang="nix">{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
buildInputs = with pkgs; [
gbenchmark
gcc49 # Can specify a specific compiler version.
ispc
tbb
];
}
</code></pre>
<p>This is already ten steps ahead of other available solutions just in terms of
memory footprint and simplicity. No virtualization layers, etc., just plain-old
Nix package management. To make things even better, however, we can save
ourselves some sub-shell madness using <a href="https://direnv.net/">direnv</a>.</p>
<h2 id="in-tandem-with-direnv">In tandem with direnv</h2>
<p><a href="https://direnv.net/">direnv</a> is an environment management tool developed by <a href="https://zimbatm.com/">zimbatm</a>. It
essentially sets and removes environment variables in the shell depending on the
current directory and the presence of a <code>.envrc</code> file.</p>
<blockquote>
<p>Before each prompt, direnv checks for the existence of a ".envrc" file in the
current and parent directories. If the file exists (and is authorized), it is
loaded into a bash sub-shell and all exported variables are then captured by
direnv and then made available to the current shell.</p>
</blockquote>
<p>It should be clear at this point that direnv is the perfect partner to
<code>nix-shell</code>. It even has <a href="https://github.com/direnv/direnv/wiki/Nix">built-in support</a> for capturing the
<code>nix-shell</code> environment. direnv can be used with <code>nix-shell</code> through a
one-liner.</p>
<pre data-lang="bash" class="language-bash "><code class="language-bash" data-lang="bash">use_nix
</code></pre>
<p>To see it all in effect, we just need to enter the directory with both files
defined.</p>
<pre data-lang="sh" class="language-sh "><code class="language-sh" data-lang="sh">$ cd project
$ direnv allow # One-time command.
direnv: loading .envrc
/home/rycwo/dev/repo/bigint-base10/.direnv/nix/shell.drv
direnv: export +AR +AS +CC +CONFIG_SHELL +CXX +HOST_PATH +IN_NIX_SHELL +LD +NIX_BINTOOLS +NIX_BINTOOLS_WRAPPER_x86_64_unknown_linux_gnu_TARGET_HOST +NIX_BUILD_CORES +NIX_BUILD_TOP +NIX_CC +NIX_CC_WRAPPER_x86_64_unknown_linux_gnu_TARGET_HOST +NIX_ENFORCE_NO_NATIVE +NIX_HARDENING_ENABLE +NIX_INDENT_MAKE +NIX_LDFLAGS +NIX_STORE +NM +OBJCOPY +OBJDUMP +RANLIB +READELF +RUST_BACKTRACE +SIZE +SOURCE_DATE_EPOCH +STRINGS +STRIP +TEMP +TEMPDIR +TMP +TMPDIR +WINDRES +_PATH +buildInputs +builder +configureFlags +depsBuildBuild +depsBuildBuildPropagated +depsBuildTarget +depsBuildTargetPropagated +depsHostHost +depsHostHostPropagated +depsTargetTarget +depsTargetTargetPropagated +doCheck +doInstallCheck +name +nativeBuildInputs +nobuildPhase +out +outputs +phases +propagatedBuildInputs +propagatedNativeBuildInputs +shell +stdenv +strictDeps +system ~PATH
</code></pre>
<p>Exiting the directory cleans everything up.</p>
<pre data-lang="sh" class="language-sh "><code class="language-sh" data-lang="sh">$ cd ..
direnv: unloading
</code></pre>
<p>Needless to say, I have yet to come across any simpler workflow!</p>
<h2 id="drawbacks-of-this-approach">Drawbacks of this approach</h2>
<p>One of the difficulties I found in transitioning to using Nix was drawing the
line between using Nix to manage project dependencies/packages vs. their
respective solutions (<a href="https://doc.rust-lang.org/cargo/">cargo</a>, <a href="https://pip.pypa.io/en/stable/">pip</a>,
<a href="https://github.com/junegunn/vim-plug">vim-plug</a>). Using Nix works well for development projects that expect
you to manage your own dependencies such as C/C++, whereas I've found that with
projects written in say Rust or Python, you end up having to work against an
existing solution, generating additional files just to be able to work in a
"Nix-like" manner.</p>
<p>In one sense it could be described as an anti-pattern. The amount of effort
needed from the community to collect <em><a href="https://github.com/NixOS/nixpkgs/tree/master/pkgs/development/python-modules">every</a>
<a href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/node-packages/node-packages-v6.nix">single</a> <a href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/lua-modules/generated-packages.nix">package/module</a></em> and their dependencies
etc. seems to be more effort than it's worth even despite the automate-able
nature of the problem. On the developer's side, everything works fine if you
follow the Nix workflow until you have to package dependencies yourself, whether
they are niche packages not available in nixpkgs or your own packages that act
as local dependencies.</p>
<p>In almost all of the provided tooling, language packages/modules would typically
be downloaded to somewhere within the user's home directory, which seems like a
sane idea to me and avoids pollution of the system itself. Ultimately, perhaps,
I have yet to run into a problem with this setup that has made me reconsider
going down the Nix path.</p>
<h1 id="additional-tips">Additional tips</h1>
<p>Aside from the magical <code>nix-shell</code> and direnv combination, there are a few other
little bits that I wanted to share that didn't seem long enough to have their
own post.</p>
<h2 id="patching-st">Patching st</h2>
<p>As a fan of the <a href="https://suckless.org/">suckless</a> <a href="https://suckless.org/philosophy/">philosophy</a>, I try to keep my
set-ups lightweight and minimal. Mileage down either of these lines varies on a
case-by-case basis, but my terminal-emulator-of-choice is an example of where I
am quite happy with my progress. <a href="https://st.suckless.org/">st</a> is one of the simplest
implementations of a fully-featured terminal emulator I've encountered. Aside
from being incredibly small and light, it shines in its approach to
configuration.</p>
<p>All the configuration for st is defined in <code>config.h</code>. The configuration is
baked into the compiled executable, which works great because let's be honest -
how often do you modify your terminal settings anyway? It saves the software
from being bloated with runtime customization options that are really only used
one-percent of the total time spent on the terminal.</p>
<p>There are a few strategies we could use to configure the <a href="https://github.com/NixOS/nixpkgs/blob/9bd45dddf8171e2fd4288d684f4f70a2025ded19/pkgs/applications/misc/st/default.nix">st</a> Nix
package, but by far the most straightforward method is to take advantage of the
<a href="https://nixos.org/nixpkgs/manual/#ssec-patch-phase"><strong>patch</strong> phase</a> of <a href="https://nixos.org/nixpkgs/manual/#sec-using-stdenv"><code>stdenv.mkDerivation</code></a>.
Generating a patch with your configuration options can be done using <code>diff -Naur a b > override-st-config.patch</code>, where <code>a</code> and <code>b</code> are the original and modified
versions of the config respectively. The final step is to override the <code>patches</code>
attribute when listing packages in <code>configuration.nix</code>.</p>
<pre data-lang="nix" class="language-nix "><code class="language-nix" data-lang="nix">environment.systemPackages = with pkgs; [
# ...
(pkgs.st.overrideAttrs (oldAttrs: {
patches = [ /path/to/override-st-config.patch ];
}))
];
</code></pre>
<h2 id="wrapping-neo-vim">Wrapping (Neo)Vim</h2>
<p>As a <a href="https://www.vim.org">Vim</a> user, I have invested quite a bit of time in tuning my Vim
configuration and plugins (fellow Vim users will understand)! Some of you may
have come across plugins such as <a href="https://github.com/vim-syntastic/syntastic">Syntastic</a>, <a href="https://github.com/w0rp/ale">ALE</a>, or
the more recent <a href="https://github.com/autozimu/LanguageClient-neovim">LanguageClient</a> (to be used in conjunction with
<a href="https://microsoft.github.io/language-server-protocol/">language servers</a>). Each of these plugins augments Vim with IDE-like
functionality, but they all depend on having external tools (<a href="https://clang.llvm.org/">Clang</a>,
<a href="https://github.com/PyCQA/pyflakes">Pyflakes</a>, to name a couple) available in the system PATH. The
knee-jerk reaction would be to install these tools directly with your package
manager, but then we would be introducing project-specific tools into the global
environment. Naturally, as this series is about using Nix, I would like to share
a Nix-y solution that has been working well for me.</p>
<p>One of the neat things about the way packages are built in Nix is their
"composability". Digging through nixpkgs, it appears to be quite a <a href="https://github.com/NixOS/nixpkgs/search?q=wrapProgram&unscoped_q=wrapProgram">common
pattern</a> to wrap existing packages to provide additional
functionality. We can apply this mentality to wrapping Vim so that all of our
plugin dependencies are available during runtime, and it works well because the
dependencies are closed within the wrapper environment, keeping our system
clean. This can be done quite simply with <a href="https://github.com/NixOS/nixpkgs/blob/c7811781be5fe62520d651fee05da7bf376dd44b/pkgs/build-support/setup-hooks/make-wrapper.sh#L139"><code>wrapProgram</code></a>.
To see an example, you can refer to my <a href="https://git.sr.ht/~rycwo/workspace/blob/39844721282d5a81710b026b71b907c3df20140c/nixos/user/pkgs/neovim/default.nix">own NeoVim wrapper</a>.</p>
<hr />
<p>Taking over a year to wrap-up a series is a testament to my millennial attention
span, though I'm happy to say this is the last post in the series. I hope it was
helpful to you in some way and may have convinced you to give
Nix/NixOS a try. If it is too daunting to dive straight in, consider playing
around with it in a VM. I did so myself and discovered a few advantages along
the way:</p>
<ul>
<li>you can hack away as you like without having to re-install the OS if you brick
it, just <code>nixos-rebuild switch --rollback</code>;</li>
<li>and when you're happy with the configuration, you can copy your
<code>configuration.nix</code> when you install the OS on your host machine and
<code>nixos-rebuild switch</code> will give you an identical system configuration!</li>
</ul>
<p>Have fun, and happy hacking!</p>
<h1 id="helpful-links">Helpful links</h1>
<ul>
<li><a href="https://nixos.wiki/wiki/Development_environment_with_nix-shell">NixOS wiki on same topic</a></li>
<li><a href="https://github.com/direnv/direnv.vim">direnv wiki on Vim integration</a></li>
<li><a href="https://fluffynukeit.com/installing-virtualbox-for-nixos/">Setting up NixOS on VirtualBox</a></li>
<li><a href="https://drewdevault.com/2019/01/23/Why-I-use-old-hardware.html">Drew DeVault on minimalism/old hardware</a></li>
</ul>
Diving Into NixOS (Part 3): Lightweight Startup With xinit2019-02-07T00:00:00+00:002019-02-07T00:00:00+00:00https://rycwo.dev/archive/nixos-series-004-configuring-xinit/<h1 id="brief-case-study">Brief case study</h1>
<p>In this post, we continue on directly from <a href="https://rycwo.dev/archive/nixos-series-003-configuration-primer/">part 2</a>. I would recommend
reading the previous part to gain an understanding of the Nix ecosystem if you
are not already familiar with it. In the spirit of continuing the train of
thought, instead of plainly listing some of the configuration options and
explaining them without any context, I thought it would be more interesting to
examine the steps I took to configure <a href="https://en.wikipedia.org/wiki/Xinit">xinit</a>.</p>
<h2 id="configurable-services">Configurable services</h2>
<p>As mentioned, NixOS has a <a href="https://nixos.org/nixos/options.html">wealth of configuration options</a>
that can be set in the <code>configuration.nix</code> file. The set of options that are of
interest to us are the services. As far as I'm concerned, each of these is
more-or-less mapped to the setup of one-or-more <a href="https://freedesktop.org/wiki/Software/systemd/"><code>systemd</code></a> services.
Some examples include <a href="https://www.cups.org/">CUPS</a>, for printing; or <a href="https://docs.gitlab.com/runner/">GitLab's CI
runner</a>, for dedicated runners; or most importantly in our case,
a <a href="https://nixos.org/nixos/options.html#services.xserver">service</a> for managing <a href="https://www.x.org/wiki/">X11</a>.</p>
<h3 id="x-server-options">X server options</h3>
<p>When installing NixOS, the default <code>configuration.nix</code> will probably have
already filled out some sane settings for the X server. These defaults let
users log in to the system after a fresh install via a display manager, which
then drops them into the desktop environment.</p>
<pre data-lang="nix" class="language-nix "><code class="language-nix" data-lang="nix">{
# ...
# Enable the X11 windowing system.
services.xserver.enable = true;
services.xserver.layout = "us";
services.xserver.xkbOptions = "eurosign:e";
# Enable touchpad support.
services.xserver.libinput.enable = true;
# Enable the KDE Desktop Environment.
services.xserver.displayManager.sddm.enable = true;
services.xserver.desktopManager.plasma5.enable = true;
# ...
}
</code></pre>
<h3 id="getting-nix-to-play-nicely-with-xinit">Getting Nix to play nicely with xinit</h3>
<p>For many, this may just as well be exactly what they want/need. If you're still
reading, however, chances are you would like to avoid using a bloated desktop
environment altogether and keep things light. This can be achieved using a
combination of standard shell login and xinit to boot into a window manager.</p>
<ol>
<li>Log in to tty using username and password.</li>
<li>Run <code>startx</code>.</li>
</ol>
<p>For the most part, configuring xinit is quite simple - the <a href="https://wiki.archlinux.org/index.php/Xinit">Arch
Wiki</a> is our ever helpful resource in times like this.</p>
<pre data-lang="sh" class="language-sh "><code class="language-sh" data-lang="sh"># .xserverrc
#!/usr/bin/env sh
exec /run/current-system/sw/bin/Xorg -nolisten tcp -nolisten local "$@" "vt""$XDG_VTNR"
</code></pre>
<pre data-lang="sh" class="language-sh "><code class="language-sh" data-lang="sh"># .xinitrc
#!/usr/bin/env sh
exec bspwm
</code></pre>
<p>Note a couple of things:</p>
<ul>
<li>Remember that <del>all</del> most programs, libraries, etc. are symlinked under
their respective directories in <code>/run/current-system/sw/</code>, so if you need to
specify the full path to a binary, that is the first place to look.</li>
<li>In the example, I start <a href="https://github.com/baskerville/bspwm">bspwm</a> as my window manager (wm), but you could use
any wm you choose as long as they are built for X.</li>
<li>Naturally the <code>xorg.xinit</code>, <code>bspwm</code>, and <code>sxhkd</code> (bspwm requirement) packages
will need to be added to the <code>environment.systemPackages</code> list in the Nix
configuration in order to make them available to all users.</li>
</ul>
<p>Now because NixOS typically uses systemd to start X, unlike other Linux
distributions, all the system configuration files for Xorg (modules that define
drivers for graphics, input, etc.) are not available in a central directory.
This means that trying to just run <code>startx</code> after preparing our <code>.xserverrc</code> and
<code>.xinitrc</code> will not work.</p>
<p>My first intuition was that I should symlink additional directories of the
filesystem hierarchy that included the appropriate Xorg modules. This can be
done in a single line via the <code>environment.pathsToLink</code> option.</p>
<pre data-lang="nix" class="language-nix "><code class="language-nix" data-lang="nix">environment.pathsToLink = [ "/etc" ];
</code></pre>
<p>Unfortunately, although it works, it ends up symlinking the contents of other
packages with <code>/etc/</code> subdirectories. We refer to this as <strong>polluting</strong> the
filesystem, which ends up exposing more files than we need and <a href="https://discourse.nixos.org/t/nixos-without-a-display-manager/360/11?u=rycwo">leaving the
system configuration in a state that is no longer
atomic</a> - an unsatisfactory solution.</p>
<p>With thanks to <a href="https://discourse.nixos.org/t/nixos-without-a-display-manager/360/7?u=rycwo">the friendly NixOS community</a>, I
discovered that a similar effect could be achieved by disabling
<code>services.xserver.autorun</code> and enabling <code>services.xserver.exportConfiguration</code>.</p>
<blockquote>
<p>Whether to symlink the X server configuration under /etc/X11/xorg.conf.</p>
</blockquote>
<p>By enabling the option, our environment is still polluted, but only with the
Xorg configuration, allowing xinit to read it from the expected directory
and launch correctly!</p>
<p>Through this experience and community feedback, I learned a few additional
things about X11. For example, <code>startx</code> can receive a number of options on the
command line, such as the program to execute when X is initialized, i.e. <code>startx /run/current-system/sw/bin/bspwm</code>. Another interesting point was that unless it
is necessary, it is good practice to leave the resolution parameter in the
xserver options unset as <a href="https://www.x.org/wiki/Projects/XRandR/"><code>xrandr</code></a> can detect the appropriate monitor
resolution for us.</p>
<hr />
<p>With a single, key configuration option, we were able to bend NixOS in a way
that would allow us to use a hybrid approach to logging in and starting X.
Ideally, this would be implemented as a dummy display manager (it looks like
this is an officially supported option on the unstable NixPkgs channel - not
available in 18.09 Jellyfish at time of writing - since
<a href="https://github.com/NixOS/nixpkgs/pull/47773">#47773</a>) to avoid pollution of the
filesystem with extraneous symlinks.</p>
<p>In the <a href="https://rycwo.dev/archive/nixos-series-005-dev-env/">final part</a> of the series, we will take a look at how my
development workflow has greatly benefited from using <code>nix-shell</code> in conjunction
with a couple of other utilities.</p>
<h1 id="helpful-links">Helpful links</h1>
<ul>
<li><a href="https://discourse.nixos.org/t/nixos-without-a-display-manager/360">Discourse topic</a></li>
<li><a href="https://nixos.wiki/wiki/Using_X_without_a_Display_Manager">NixOS Wiki page with alternative setup</a></li>
</ul>
Diving Into NixOS (Part 2): The Power Of Declarative Configuration2019-01-29T00:00:00+00:002019-01-29T00:00:00+00:00https://rycwo.dev/archive/nixos-series-003-configuration-primer/<h1 id="landing-on-planet-nixos">Landing on planet NixOS</h1>
<p>If you've ever taken a look at the Linux universe, you will have found numerous
flavors - more correctly known as distributions - of operating systems. Each
one slightly different from the other, almost like planets within a solar
system, or perhaps akin to the thousands of delicious flavors of ice cream (this
analogy sounds more appealing). The one thing keeping them all in orbit? The
singular, common star at the center that is the <a href="https://en.wikipedia.org/wiki/Linux_kernel">Linux kernel</a>.</p>
<p>Every distribution will have made a variety of design choices that brought them
into their respective orbits, each possessing different properties that make
them particularly desirable to different space travelers. Planet
<a href="https://www.ubuntu.com/">Ubuntu</a> for the plain Jane, <a href="https://www.kali.org/">Kali</a> for the security conscious, or
<a href="https://elementary.io/">Elementary</a> for the design savvy. Some of the distributions lie
light years away from our familiar planet Ubuntu (read <a href="https://www.archlinux.org/">Arch</a>), and
require the more foolhardy, renegade <a href="https://en.wikipedia.org/wiki/Space_Cowboys">space cowboys</a> to tame the lay of
the land, and build a self-sustaining civilization before they can start beaming
more spacefarers over.</p>
<p>In my case, I feel as though I was beamed right into <a href="https://nixos.org/">NixOS</a>' warm and
welcoming atmosphere, stumbling onto a medium-sized planet whose surface is
entirely composed of a beautiful-yet-functional botanical garden of software and
configuration. Beautiful, because it doesn't take shape to the user as a single,
clunky <a href="https://en.wikipedia.org/wiki/Desktop_environment">desktop environment</a>, but instead hosts a whole <a href="https://nixos.org/nixos/options.html#desktopmanager">floral arrangement
of options</a>. Functional, because each part of the garden is
organized into <a href="https://github.com/NixOS/nixpkgs">modular, declarative components</a>, allowing newcomers to
<a href="https://git.sr.ht/~rycwo/workspace/blob/39844721282d5a81710b026b71b907c3df20140c/nixos/system/configuration.nix">grow the "garden of their dreams"</a>.</p>
<h1 id="the-nixos-ecosystem">The NixOS ecosystem</h1>
<p>Enough with the analogies; what I really want to home in on are the reasons why
I jumped from Ubuntu to NixOS. In <a href="https://rycwo.dev/archive/nixos-series-001-dual-boot/">Part 1</a> of the series, we looked at
how I set up my new laptop and made an initial installation of NixOS. Now, we
take a deep dive into configuring the OS, looking briefly at why it is
advantageous to have declarative control, before highlighting some of the
available options and the decisions I took to build my current setup - which
I've been using happily for about half a year at the time of writing.</p>
<p>As there will be snippets of configuration, it will probably be helpful to glance
over the <a href="https://nixos.org/nixos/manual/index.html#sec-configuration-syntax">Nix language syntax</a>.</p>
<h2 id="why-is-nixos-declarative-approach-so-special">Why is NixOS' declarative approach so special?</h2>
<p>To understand the significance of declarative configuration, it is helpful to
gain some insight into the limitations of dealing with non-declarative, or
"ad-hoc", systems. With other distributions, such as Ubuntu and Debian, or for
the tinkerers - Arch Linux, you would typically use a combination of package
managers (<a href="https://wiki.debian.org/Apt"><code>apt</code></a>, <a href="https://wiki.archlinux.org/index.php/Pacman"><code>pacman</code></a>) to manage installed software and
their dependencies, alongside innumerable configuration scripts - some of which
are managed by <code>root</code>, and others which are confusingly overridden by the
regular user.</p>
<h3 id="traditional-systems">Traditional systems</h3>
<p>Consider the following scenario. You spend a painstaking amount of blood, sweat,
and tears to create the "perfect" Arch setup. As time passes, you make
modifications to your configurations, tuning the OS to your liking. You will
occasionally find yourself unhappy with the changes you've made and want to
revert them. Unfortunately, this means you will have to recall the five
different files to which you made your changes, not to mention what the changes
themselves were. Depending on your mood, you ultimately end up either spending
another hour or so digging up your StackOverflow answers again and undoing your
changes bit-by-bit, or giving up and living with your undesired changes, leaving
your system just that little bit less "perfect".</p>
<p>Now consider another scenario. One in which you have a C/C++ project that, for
whatever reason, must be tested against multiple versions of <a href="https://gcc.gnu.org/">GCC</a> (I've
encountered similar requirements at work). This is a hassle, if not near
impossible, with standard package managers alone. A common solution would be to add
a layer of virtualization, typically in the form of <a href="https://www.docker.com/">Docker</a> or
<a href="https://www.vagrantup.com/">Vagrant</a>, both of which are excellent tools for specific problems but
add a layer of memory and computational overhead.</p>
<p>Let's not even begin to consider the possibility of building and installing
third-party software directly to the root directory <code>/</code> - this works fine, at
least until you want to uninstall it! Might I dare to detail the process of
migrating the entire shebang to a fresh install of the OS on your new rig?</p>
<p>You probably get the picture.</p>
<h3 id="nix">Nix</h3>
<p>NixOS avoids all of the aforementioned situations and introduces some added
benefits. The Nix package manager is at the heart of the OS' success. Similar to
other package managers, Nix also tracks dependencies between different pieces of
software. Its most significant innovation, however, is in how it manages the
installed software under the hood and hands you a usable software stack.</p>
<p>Every software package in Nix is <a href="https://github.com/NixOS/nixpkgs">declared publicly on GitHub</a>. They
are each defined in Nix's own expression language, which in summary tells Nix
how to build the package from source ("should it run CMake, Autoconf, plain
Make?"), and what other software it depends on.</p>
<p>From a definition, Nix will build the software into its own directory (it
expects the child directory structure to match the <a href="https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard">Filesystem Hierarchy
Standard</a>, FHS) named after a computed <strong>hash based on the inputs</strong> to the
build definition together with the package name and version. Here's an example
of how the <a href="http://mama.indstate.edu/users/ice/tree/"><code>tree</code> command</a> has been packaged on my machine (<a href="https://raw.githubusercontent.com/NixOS/nixpkgs/b3eb903b48d71679cb69aadcd30384553a88c66a/pkgs/tools/system/tree/default.nix">Nix
definition</a>).</p>
<pre><code>/nix/store/x50x926i805qz046qbhssj5r6w2w05a6-tree-1.7.0/
</code></pre>
<p>And the directory contents (<code>tree</code>-ing tree, how meta).</p>
<pre><code>$ tree /nix/store/x50x926i805qz046qbhssj5r6w2w05a6-tree-1.7.0/
/nix/store/x50x926i805qz046qbhssj5r6w2w05a6-tree-1.7.0/
├── bin
│   └── tree
└── share
    └── man
        └── man1
            └── tree.1.gz

4 directories, 2 files
</code></pre>
<p>Notably, once a package has been built, the contents of its store directory are
<strong>immutable</strong>. Coupled with the hashes, each version of a package is guaranteed to be
atomic. It should be obvious from this that managing multiple versions of a
software package alongside one another becomes trivial.</p>
<p>Our question then becomes: how is the software then made available to the user
if each version of each software lives independently of each other? The answer
is surprisingly simple.</p>
<pre><code>$ ls -al /run/current-system/sw/bin | tail -n5
lrwxrwxrwx 1 root root 70 Jan 1 1970 zipdetails -> /nix/store/7yf3fh95ljf90nnw6cv70dry5jvqin0l-perl-5.28.1/bin/zipdetails
lrwxrwxrwx 1 root root 62 Jan 1 1970 zless -> /nix/store/zrzqgdm6jxihsban195vrlcskmx9m4zc-gzip-1.9/bin/zless
lrwxrwxrwx 1 root root 62 Jan 1 1970 zmore -> /nix/store/zrzqgdm6jxihsban195vrlcskmx9m4zc-gzip-1.9/bin/zmore
lrwxrwxrwx 1 root root 61 Jan 1 1970 znew -> /nix/store/zrzqgdm6jxihsban195vrlcskmx9m4zc-gzip-1.9/bin/znew
lrwxrwxrwx 1 root root 77 Jan 1 1970 zramctl -> /nix/store/hlk44cpp9nn7isb1jycxcj5f9lz0qa1v-util-linux-2.32.1-bin/bin/zramctl
</code></pre>
<p>Everything is <a href="https://en.wikipedia.org/wiki/Symbolic_link"><strong>symlinked</strong></a>! Nix knows how and what to symlink
because either the built package follows the FHS or the package definition
prescribes the information accordingly. So for each of the packages the user
requires in their environment, the contents of the package directory under
<code>/nix/store/</code> are linked to common directories like <code>bin/</code> or <code>lib/</code>. Better
yet, using symlinks grants additional flexibility - changing the version of a
package means pointing the link to another target! Note that this also applies
to rolling back version changes - just point it back to the previous target.</p>
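<p>The mechanism is easy to play with using nothing but coreutils. The following
toy rendition uses made-up "hashes" and plain shell scripts in place of real
packages (nothing here is actual Nix):</p>
<pre><code>mkdir -p store/aaaa-hello-1.0/bin store/bbbb-hello-2.0/bin
printf '#!/bin/sh\necho hello 1.0\n' > store/aaaa-hello-1.0/bin/hello
printf '#!/bin/sh\necho hello 2.0\n' > store/bbbb-hello-2.0/bin/hello
chmod +x store/aaaa-hello-1.0/bin/hello store/bbbb-hello-2.0/bin/hello

# The "profile" only ever holds symlinks into the store.
mkdir -p profile/bin
ln -sfn ../../store/aaaa-hello-1.0/bin/hello profile/bin/hello
profile/bin/hello    # prints: hello 1.0

# Upgrading (or rolling back) is just re-pointing the link.
ln -sfn ../../store/bbbb-hello-2.0/bin/hello profile/bin/hello
profile/bin/hello    # prints: hello 2.0
</code></pre>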
<p>This paradigm can be taken a step further and applied to the management of the
software and configuration for a whole OS, let alone per-user packages. In the
<a href="https://rycwo.dev/archive/nixos-series-001-dual-boot/">previous post</a>, we took a quick look at the <code>configuration.nix</code> file.
The OS bases its current state on this file, written in Nix's functional
expression syntax. Every new state of your machine that occurs as a result of
changes made to the file is versioned. Given the guarantees made by Nix, you can
be sure your whole setup is deterministic and hence reproducible. Rolling back
undesired changes to the OS works the same as rolling back software versions -
swapping a bunch of symlinks. Migrating your setup to a new machine is as
simple as copying the <code>configuration.nix</code> file over and running <code>nixos-rebuild</code>.</p>
<p>NixOS has quite an <a href="https://nixos.org/nixos/options.html">extensive catalog</a> of configuration options;
you may find my current setup on <a href="https://git.sr.ht/~rycwo/workspace/blob/39844721282d5a81710b026b71b907c3df20140c/nixos/system/configuration.nix">SourceHut</a>.</p>
<hr />
<p>When I began writing this post I didn't expect it to end-up quite so long.
Consequently, I've split this topic out into two posts. This one acts as a
primer on Nix's design, whilst the <a href="https://rycwo.dev/archive/nixos-series-004-configuring-xinit/">next</a> is a more practical case study of
my xinit configuration.</p>
<p>Turns out there is quite a lot to say about the OS, despite its simplicity. I
implore you to read more about Nix/NixOS <a href="https://nixos.org/nix/about.html">here</a> and
<a href="https://nixos.org/nixos/about.html">here</a>. <a href="https://nixos.org/~eelco/">Whoever</a> took this concept and applied it at
the OS level raised its potential enormously. NixOS has even been taken a
step further and is being leveraged to provision infrastructure via
<a href="https://nixos.org/nixops/">NixOps</a>. I'm a huge fan of HashiCorp, but I'm skeptical
<a href="https://www.terraform.io/">Terraform</a> can match NixOS's simple-yet-functional power, although
I'll finish by stealing one of their marketing taglines which I feel also
applies to NixOS/Ops.</p>
<blockquote>
<p>Infrastructure As Code - <a href="https://www.terraform.io/">Terraform</a></p>
</blockquote>
<p><em>NB I've intentionally avoided mentioning <code>nix-shell</code> at this stage, as we'll see more of it in
<a href="https://rycwo.dev/archive/nixos-series-005-dev-env/">Part 4</a>.</em></p>
<h1 id="helpful-links">Helpful links</h1>
<ul>
<li><a href="https://nixos.org/nixos/nix-pills/index.html">Nix Pills (tutorial series)</a></li>
<li><a href="https://nixos.org/~eelco/pubs/nixos-icfp2008-final.pdf">Paper by Eelco Dolstra on NixOS in 2008</a></li>
<li><a href="https://nixos.org/~eelco/pubs/nixos-jfp-final.pdf">Paper by Eelco Dolstra on NixOS in 2010</a></li>
</ul>
Diving Into NixOS (Part 1.5): Swap Files, And Other Tidbits2018-08-22T00:00:00+00:002018-08-22T00:00:00+00:00https://rycwo.dev/archive/nixos-series-002-swapfiles/<h1 id="a-brief-digression">A brief digression</h1>
<p>As mentioned in <a href="https://rycwo.dev/archive/nixos-series-001-dual-boot/">Part 1</a>, my swap partition may have been a little
excessive. In this post, I intend to demonstrate how to go about reducing the
swap size in favor of a swap file that is created only when it is needed. The
<a href="https://wiki.archlinux.org/index.php/Swap">Arch Wiki</a> has a healthy amount of information on swap for
those that are interested. Broadly speaking, the motivation behind having swap
space is the following (snippet from the Arch Wiki itself):</p>
<blockquote>
<p>[Enabling swap] avoids out of memory conditions, where the Linux kernel OOM
killer mechanism will automatically attempt to free up memory by killing
processes.</p>
</blockquote>
<h1 id="swap-files-on-nixos">Swap files on NixOS</h1>
<p>Configuring a swap file on NixOS is trivial. A quick search through the <a href="https://nixos.org/nixos/options.html">NixOS
options</a> shows the <code>swapDevices.*.size</code> option.</p>
<p>If you previously created a swap partition and ran <code>nixos-generate-config</code> in
your initial install of NixOS - as most would have - you will find that the
<code>swapDevices</code> option has already been configured in the file at
<code>/etc/nixos/hardware-configuration.nix</code>. Here's a snapshot of mine at the time of
writing.</p>
<pre data-lang="nix" class="language-nix "><code class="language-nix" data-lang="nix"># Do not modify this file! It was generated by ‘nixos-generate-config’
# and may be overwritten by future invocations. Please make changes
# to /etc/nixos/configuration.nix instead.
{ config, lib, pkgs, ... }:

{
  imports =
    [ <nixpkgs/nixos/modules/installer/scan/not-detected.nix>
    ];

  boot.initrd.availableKernelModules = [ "xhci_pci" "nvme" "usb_storage" "sd_mod" "rtsx_pci_sdmmc" ];
  boot.kernelModules = [ "kvm-intel" ];
  boot.extraModulePackages = [ ];

  fileSystems."/" =
    { device = "/dev/disk/by-uuid/b90bca12-d251-46e0-a407-f0b1ec87970b";
      fsType = "ext4";
    };

  fileSystems."/boot" =
    { device = "/dev/disk/by-uuid/50C8-A123";
      fsType = "vfat";
    };

  swapDevices =
    [ { device = "/dev/disk/by-uuid/916812ug-56b8-417b-bdf1-1bdee27d4499"; }
    ];

  nix.maxJobs = lib.mkDefault 8;
  powerManagement.cpuFreqGovernor = lib.mkDefault "powersave";
}
</code></pre>
<p>To add a swap file, it is as simple as adding an additional <a href="https://nixos.org/nix/manual/#idm140737318002592">set</a> to
the list and re-building.</p>
<pre data-lang="nix" class="language-nix "><code class="language-nix" data-lang="nix">swapDevices = [
  {
    device = "/dev/disk/by-uuid/916812ug-56b8-417b-bdf1-1bdee27d4499";
    priority = 100;
    size = null;
  }
  {
    device = "/swapfile";
    priority = 0;
    size = 2048;
  }
];
</code></pre>
<p>Note that the swap partition is given higher priority. <a href="https://serverfault.com/questions/25653/swap-partition-vs-file-for-performance">The general
consensus</a> seemed to be that the partition will have better
performance than the swap file.</p>
<h1 id="re-sizing-the-swap-partition">Re-sizing the swap partition</h1>
<p>With the easy part over, we can move on to the second objective of re-sizing our
partitions. This isn't actually too difficult either. It can be broken down
into a few steps: update the partition table; resize the adjacent file system; then
update (once again) <code>/etc/nixos/hardware-configuration.nix</code>.</p>
<p>Start by making sure the swap device is de-activated and the root partition is
unmounted. To do this, I simply booted into the NixOS live image I had prepared
from <a href="https://rycwo.dev/archive/nixos-series-001-dual-boot/">Part 1</a> and manipulated the partitions from there. The following
may come in handy.</p>
<pre><code>swapoff /dev/disk/by-label/swap
umount /dev/disk/by-label/nixos
</code></pre>
<p>To re-size the partitions I opt for <code>gdisk</code>.</p>
<pre><code># gdisk /dev/nvme0n1
GPT fdisk (gdisk) version 1.0.3
Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help):
</code></pre>
<p>Before re-sizing, the table shows my swap with a size of <code>~2GB</code>.</p>
<pre><code>Command (? for help): p
Disk /dev/nvme0n1: 500118192 sectors, 238.5 GiB
Model: PC401 NVMe SK hynix 256GB
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): 399YGH4A-4E93-4AE1-A42A-A63649JREDEE
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 500118158
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)    End (sector)  Size        Code  Name
   1            2048         1333247    650.0 MiB   EF00  EFI system partition
   2         1333248         1595391    128.0 MiB   0C01  Microsoft reserved ...
   3         1595392       206395391    97.7 GiB    0700  Basic data partition
   4       206395392       495802367    138.0 GiB   8304  Linux x86-64 root (/)
   5       495802368       500118158    2.1 GiB     8200  Linux swap
</code></pre>
<p>Re-size the swap partition by first deleting both the root and the swap
partitions. This is followed by re-creating the deleted entries with the new
desired sizes. Take care to ensure the <strong>start sector for your root partition is
the same as it previously was</strong> otherwise you may bork your file system. There
are ways to shift partitions (<code>dd</code> can be used to store/load the contents of a
file system), but perhaps that is for another time. Remember to commit to the
table with <code>w</code>.</p>
<pre><code>Number Start (sector) End (sector) Size Code Name
1 2048 1333247 650.0 MiB EF00 EFI system partition
2 1333248 1595391 128.0 MiB 0C01 Microsoft reserved ...
3 1595392 206395391 97.7 GiB 0700 Basic data partition
4 206395392 498948095 139.5 GiB 8304 Linux x86-64 root (/)
5 498948096 500118158 571.3 MiB 8200 Linux swap
</code></pre>
<p>With the partitions re-sized, check the file system is consistent before
re-sizing it to match the new partition sizes.</p>
<pre><code># e2fsck -f /dev/disk/by-label/nixos
# resize2fs /dev/disk/by-label/nixos
</code></pre>
<p>Finally, re-create the swap device with <code>mkswap</code> and update the UUID under
<code>swapDevices</code> in your <code>/etc/nixos/hardware-configuration.nix</code> alongside the swap
file entry.</p>
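<p>For that last step, a dry run against a regular file (in place of the real
partition, which needs root and a device path) shows where the new UUID comes
from:</p>
<pre><code>dd if=/dev/zero of=demo.swap bs=1M count=4 status=none
chmod 600 demo.swap
mkswap demo.swap    # prints the UUID to copy into swapDevices
</code></pre>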
<hr />
<p>I found this little exercise quite fun and educational, but I still have much to
learn in the way of partition tables, file systems, etc. Hopefully, with this, you
not only know how to configure swap devices in NixOS but can also manipulate
partitions/file systems to a certain degree. Now, onto <a href="https://rycwo.dev/archive/nixos-series-003-configuration-primer/">Part 2</a>.</p>
<h1 id="helpful-links">Helpful links</h1>
<ul>
<li><a href="https://access.redhat.com/articles/1190213">redhat article on fdisk</a></li>
<li><a href="https://access.redhat.com/articles/1196333">redhat article on resizing a file system</a></li>
</ul>
Diving Into NixOS (Part 1): Dual-Booting On A New Laptop2018-07-29T00:00:00+00:002018-07-29T00:00:00+00:00https://rycwo.dev/archive/nixos-series-001-dual-boot/<h1 id="nixos-and-windows-10-on-the-xps-13-9370">NixOS and Windows 10 on the XPS 13 9370</h1>
<p>I recently purchased the <a href="https://www.notebookcheck.net/Dell-XPS-13-9370-i7-8550U-4K-UHD-Laptop-Review.296596.0.html">XPS 13 9370</a> (Notebookcheck felt the least
biased) and have been spending the past couple of weeks tinkering and playing
around with my dev environment to see what felt the best. It turned out that
dual-booting <a href="https://nixos.org/">NixOS</a> (or any Linux distro for that matter) and Windows 10
side-by-side was non-trivial given the factory laptop configuration.</p>
<p>This will be a series of posts that aims to document my process so that those
interested can get up-and-running with a bit less pain. Note that this first
section goes into what went astray purely for contextual reasons, the helpful
instructions follow-on after. I should also mention that as always, the <a href="https://wiki.archlinux.org/">Arch
Wiki</a> has greatly facilitated the learning process.</p>
<h1 id="out-of-the-box">Out-of-the-box</h1>
<p>If you chose to have Windows 10 installed by default, the XPS 13 should come
with about five partitions. Running Windows 10 Disk Management should show
something along the lines of <code>~512MB</code> for the <a href="https://wiki.archlinux.org/index.php/Partitioning#.2Fboot">boot partition</a>,
<code>~220GB</code> for Windows 10, and three other mystery partitions. I could not stand
having unexplained occupied space on my drive, so with a bit of help from a
colleague, <code>diskpart</code>, and <a href="https://www.dell.com/community/XPS-Desktops/Too-many-Partitions-on-hard-drive/td-p/5706817">the internet</a>, we found out
that said mystery partitions contained Dell's factory install data for recovery
purposes.</p>
<p>Going forward, I shrunk the Windows 10 partition down to <code>~90GB</code> to create
<code>~130GB</code> of unallocated space to install <a href="https://nixos.org/">NixOS</a> in. I would only be
using Windows sparsely for some light gaming (think Subset Game's
<a href="https://subsetgames.com/ftl.html">FTL</a>), and perhaps a touch of Photoshop, etc.
so I didn't need too much space there.</p>
<h2 id="secure-boot-sata-mode-and-bitlocker-troubles">Secure Boot, SATA mode, and Bitlocker troubles</h2>
<p>Against the instruction of the <a href="https://nixos.org/nixos/manual/index.html#sec-booting-from-usb">NixOS manual</a>, I used
<a href="https://rufus.akeo.ie/">Rufus</a> to create a bootable USB drive with the latest <strong>minimal</strong>
(no-GUI) disk image of NixOS (18.03 at the time of writing). Rufus settings were
pretty-much default - I tested both <a href="https://en.wikipedia.org/wiki/Master_boot_record"><strong>MBR</strong></a> and <a href="https://en.wikipedia.org/wiki/GUID_Partition_Table"><strong>GPT</strong></a> partition
schemes and either worked fine for booting the USB drive in UEFI mode, which was
what I wanted.</p>
<p>In order to boot via the USB drive, some UEFI options were changed. Most
importantly, <strong>Secure Boot</strong> was turned <strong>off</strong>. This is covered in more detail
in the latter instructions. Once I was able to boot into NixOS on the USB drive
I began to follow the instructions in the <a href="https://nixos.org/nixos/manual/index.html#sec-installation">manual</a>.</p>
<p>Right off-the-bat, <code>lsblk</code> showed the <strong>only</strong> drive available (under
<code>/dev/sda</code>) was my USB drive (I've had to make up the output here as I am
recalling from memory).</p>
<pre><code>NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda    259:0    0  16G  0 disk
├─sda1 259:1    0  ??M  0 part /boot
└─sda2 259:2    0  16G  0 part /
</code></pre>
<p>Where was my internal drive? To my understanding, it turned out the <a href="https://wiki.archlinux.org/index.php/Solid_State_Drive/NVMe">default
Linux drivers included in the kernel</a> do not support NVMe SSDs
unless the <strong>SATA mode</strong> for the drive is set to <strong>AHCI</strong>. This setting can be
controlled via the UEFI options, however, changing the mode makes Windows 10
inaccessible unless it is started in Safe Mode. Further exacerbating the issue,
turning off Secure Boot meant BitLocker would kick in whenever I tried to start
my Windows 10 partition. This required me to log on to my Microsoft account on
another device and copy a large number of digits to proceed with the boot
process.</p>
<p>Unfortunately, this long string of obstacles ultimately led me to re-partition
and format my SSD, before making a clean installation of Windows 10 alongside
NixOS, with my SATA mode set to AHCI from the get-go. In hindsight, this was
definitely the right choice as I decided to free up the space occupied by the
recovery partitions and gain control over my installation options and drivers.</p>
<h1 id="installing-everything-fresh">Installing everything, <em>fresh</em></h1>
<p>So, how do we go about having a clean dual-boot set-up on the XPS 13 9370? The
process is pretty straightforward if you have ever done anything similar. If you
haven't, I would be wary about tinkering too much, on the off-chance you "brick"
your laptop. I am not responsible for such scenarios.</p>
<p>The installation begins to become interesting when <a href="https://rycwo.dev/archive/nixos-series-003-configuration-primer/">configuring NixOS</a>.
For this setup, I decided not to encrypt any of my partitions, though it is
<a href="https://wiki.archlinux.org/index.php/Disk_encryption">something I would like to explore in the future</a>.</p>
<h2 id="windows-10">Windows 10</h2>
<p>To begin with, prepare a bootable USB drive for installing Windows 10. I'll
assume there are enough resources out on the internet to learn how to do this.
TLDR, run <a href="https://www.microsoft.com/en-us/software-download/windows10ISO">this tool</a>. Remember to keep a copy of the <a href="https://www.killernetworking.com/driver-downloads">Killer
wireless drivers</a> on the USB drive. I made the mistake of not
doing so beforehand and had to USB-tether via my phone to download them once the
install had completed.</p>
<p>Before beginning with the installation, you will need to change some UEFI
options. To do this, hit <code>F2</code> when you first see the Dell logo appear on
start-up. It is recommended not to hold the button (it may be interpreted as
being stuck), so I tap it rhythmically instead. The following options are
necessary:</p>
<ul>
<li>Secure Boot
<ul>
<li>Secure Boot Enable
<ul>
<li><strong>Disabled</strong></li>
</ul>
</li>
</ul>
</li>
<li>System Configuration
<ul>
<li>SATA Operation
<ul>
<li><strong>AHCI</strong></li>
</ul>
</li>
</ul>
</li>
</ul>
<p>You may also have to set <em>Fastboot</em> (under <em>POST Behavior</em>) to <strong>Thorough</strong>.
Only the right-hand side USB-C port can be used to boot into the USB drive. To
enable the use of the ports on the left, <strong>Thunderbolt Boot Support</strong> (under
<em>System Configuration, USB/Thunderbolt Configuration</em>) should additionally be
enabled. To choose the boot drive, tap <code>F12</code> when the Dell logo first appears.</p>
<p>Partitioning and formatting can be done during the Windows 10 installation. The
only things to keep in mind are to preserve the boot partition (this was the
first partition, in my case), and that Dell may or may not be happy to provide
support if the recovery partitions are wiped. This time around, I made my
Windows partition <code>~100GB</code>, and left the rest unallocated, leaving <code>~140GB</code> for
NixOS.</p>
<p>After the installation is complete, install the WiFi drivers, then the rest of
the XPS 9370 drivers. A convenient list can be found <a href="http://en.community.dell.com/techcenter/enterprise-client/w/wiki/12458.xps-13-9370-windows-10-driver-pack">here</a>,
and <a href="https://www.dell.com/support/home/uk/en/ukbsdt1/product-support/product/xps-13-9370-laptop/drivers">here</a>. Everything seemed to work fine aside from:</p>
<ul>
<li>Start-menu Personalization
<ul>
<li>Any settings/changes I made to the start-menu were not being saved.
Rummaging on forums suggested it would fix itself after a couple of days,
which it did.</li>
</ul>
</li>
<li>Night Light
<ul>
<li>This option is greyed out for some reason.</li>
</ul>
</li>
</ul>
<h2 id="nixos">NixOS</h2>
<p>Once Windows 10 is on its feet, we can replace the Windows 10 image with the
NixOS installer. As suggested earlier, using <a href="https://rufus.akeo.ie/">Rufus</a> with default
settings works fine. If given the option, use <code>dd</code> to perform the write. Booting
into the installer is the same as Windows 10.</p>
<p>For the most part, I followed the <a href="https://nixos.org/nixos/manual/index.html#sec-installation">NixOS manual</a> to do the
installation. Some pointers:</p>
<ul>
<li>
<p><code>loadkeys uk</code> for British keyboard layout.</p>
</li>
<li>
<p><code>ip link</code> to get the list of network interfaces. I have:</p>
<pre><code>1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: wlp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DORMANT group default qlen 1000
    link/ether 9c:b6:d0:8e:7d:d3 brd ff:ff:ff:ff:ff:ff
</code></pre>
<p>Where <code>wlp2s0</code> is the name of the wireless interface.</p>
<p>In which case <code>wpa_supplicant -B -i wlp2s0 -c <(wpa_passphrase "foo" "bar")</code>,
substituting <code>foo</code> for the SSID (name) of your network, and <code>bar</code> for
the password, will get you connected.</p>
</li>
<li>
<p>The internal SSD shows up under <code>/dev/nvme0n1</code>. I prefer <code>gdisk</code> for managing
my partitions, it works well with GPT. The <a href="https://wiki.archlinux.org/index.php/Partitioning">Arch Wiki</a>
has a lot of <a href="https://wiki.archlinux.org/index.php/Partitioning#Example_layouts">pragmatic example layouts</a>. Here
is my current table:</p>
<pre><code>Number Start (sector) End (sector) Size Code Name
1 2048 1333247 650.0 MiB EF00 EFI system partition
2 1333248 1595391 128.0 MiB 0C01 Microsoft reserved ...
3 1595392 206395391 97.7 GiB 0700 Basic data partition
4 206395392 495802367 138.0 GiB 8304 Linux x86-64 root (/)
5 495802368 500118158 2.1 GiB 8200 Linux swap
</code></pre>
<p>As I work with large amounts of in-memory data, potentially boasting <code>> 8GB</code>
in size, I opt in to a couple of GB of swap (woes of CG/VFX software)!</p>
</li>
<li>
<p>As Nix installs <strong>all</strong> software, including user packages, under <code>/nix/store</code>,
I would suggest making your root partition (<code>/</code>) larger if you are considering separating
<code>/home</code>.</p>
</li>
<li>
<p>Remember to <code>mount</code> all your partitions before running
<code>nixos-generate-config --root /mnt</code>. If you've forgotten any, you can mount
them after and run the NixOS command again and it should update the file at
<code>/etc/nixos/hardware-configuration.nix</code>. I recommend inspecting this file to
see the options the installer detected and verify the disk UUIDs are correct.</p>
</li>
</ul>
<p>Aside from the options to be (or already) set in <code>/etc/nixos/configuration.nix</code>,
the final thing to note before running <code>nixos-install</code> is the
<code>boot.loader.grub.useOSProber</code> option. Setting this will allow GRUB to detect
our Windows 10 partition and provide it as an option on start-up. We want this.
Also, unless you've configured a <a href="https://wiki.archlinux.org/index.php/Display_manager">display manager</a> and/or
<a href="https://wiki.archlinux.org/index.php/Desktop_environment">desktop environment</a>, you will only be able to log-in to
the command line (no <a href="https://en.wikipedia.org/wiki/X_Window_System">X</a>)!</p>
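<p>A <code>configuration.nix</code> along these lines would enable the option
(a hedged sketch, not my exact file; consult the NixOS options catalog for the
authoritative definitions):</p>
<pre data-lang="nix" class="language-nix "><code class="language-nix" data-lang="nix">boot.loader = {
  grub = {
    enable = true;
    # With UEFI, GRUB is not installed to a device's MBR.
    device = "nodev";
    efiSupport = true;
    # Detect the Windows 10 partition and add it to the boot menu.
    useOSProber = true;
  };
  efi.canTouchEfiVariables = true;
};
</code></pre>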
<p>Of course, if you realize you've misconfigured something, the magic of Nix is
the ability to re-build your configuration or rollback to the last working
derivation, but more on this in the next post.</p>
<p>On a side note: I occasionally found that after starting the NixOS installation
process, the USB drive would somehow become corrupt, and reading it from Linux
or Windows would fail. The most effective way I found to recover the drive
was to use <a href="https://www.osforensics.com/tools/write-usb-images.html">ImageUSB</a> to <strong>zero out</strong> the drive before
using Rufus to re-write the NixOS image.</p>
<hr />
<p>Hopefully, this post has been insightful/helpful in some way. For those of you
that are interested, I aim to go over my own configuration and the decisions I
made in the next post, which is available <a href="https://rycwo.dev/archive/nixos-series-003-configuration-primer/">here</a>.</p>
<h1 id="helpful-links">Helpful links</h1>
<ul>
<li><a href="https://nixos.org/">NixOS Home page</a></li>
<li><a href="https://nixos.wiki/wiki/Dual_Booting_NixOS_and_Windows">NixOS Wiki entry on dual-booting</a></li>
<li><a href="https://wiki.archlinux.org/index.php/Dell_XPS_13_(9360)">Arch Wiki entry on Dell XPS 13 (9360)</a></li>
<li><a href="https://wiki.archlinux.org/index.php/Dell_XPS_13_(9370)">Arch Wiki entry on Dell XPS 13 (9370)</a></li>
<li><a href="https://zimbatm.com/journal/2016/09/09/nixos-window-dual-boot/">Blog entry by Zimbatm</a></li>
</ul>
<h1 id="update">Update</h1>
<p>A friend mentioned <a href="https://wiki.archlinux.org/index.php/Swap#Swap_file">swapfiles</a>, which I had completely forgotten
about. These are helpful if you have limited disk space (which my laptop does)
because the file can be dynamically resized as needed. I've <a href="https://rycwo.dev/archive/nixos-series-002-swapfiles/">made a
post</a> on my process of updating the partition table and expanding the
filesystem at <code>/</code>.</p>
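<p>On NixOS, a swapfile can also be declared directly in the configuration; a
minimal sketch (the path and size here are examples):</p>

```nix
swapDevices = [{
  device = "/var/lib/swapfile";
  size = 2048;  # In MiB; NixOS creates the file if it does not exist.
}];
```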
<p><strong>A Note On Semantics</strong> (2017-02-15, <a href="https://rycwo.dev/archive/code-semantics/">https://rycwo.dev/archive/code-semantics/</a>)</p>
<h1 id="what-are-semantics">What are "semantics"?</h1>
<p>Semantics are deeply ingrained in most of what we interact with.
Traditionally, the term "semantics" refers to the <em>meanings of</em> and
<em>relationships between</em> words. In a broader context, I prefer to think of
semantics as the implications of design.</p>
<p>Everything we use in our daily lives, including words, has been designed in one
way or another. The design of these everyday "things" guides us to use them in a
specific way.</p>
<blockquote>
<p>Apparently, the design of a bowler hat implies butt apparel.
- <a href="https://i.imgur.com/g9SAfuI.png">Gunther, from Futurama</a></p>
</blockquote>
<p>One of my favorite examples of good design for semantics is that of a door. Many
doors can be pushed from one side and pulled from the other. Given appropriate
signs, it should be obvious which side expects what behavior. Unfortunately,
despite clear signage, it is still possible for users to try to pull or push on
the wrong side!</p>
<p>A well-designed door, however, should not even need the help of signs to imply
its usage. By removing the handle on the "push side", we leave the user no
choice but to push the door to open it. Similarly, by adding a handle on the
"pull side", we give the user the ability to pull the door to open it.</p>
<h1 id="how-does-this-apply-to-software-development">How does this apply to software development?</h1>
<p>It is easy to see why designing with semantics in mind is important. With
software development, we can apply a similar mindset to produce intuitive APIs.</p>
<p>A helpful way to approach software design is to first think of how the user will
interact with your software. Aside from drawing diagrams and other useful
planning activities, a valuable exercise can be to write a simple
script/use case simulating the use of your final product.</p>
<p>An even more productive exercise would be to write the use case as an automated
test. By no means am I suggesting <a href="https://en.wikipedia.org/wiki/Test-driven_development">Test Driven Development</a> is the most
effective way to work, but it forces one to think carefully about
the function calls, class instantiations, etc. a user would have to make to
achieve their goal with your API. If the steps you write in the test are
already cumbersome, you have good reason to believe the user will feel the
same way!</p>
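<p>For instance, a use-case test for a hypothetical task-queue API (all the
names below are invented for this exercise) might read like the exact steps a
user would take:</p>

```python
class TaskQueue:
    """Minimal stand-in implementation so the use case runs."""

    def __init__(self):
        self._tasks = []

    def add(self, name):
        self._tasks.append(name)

    def run_all(self):
        finished, self._tasks = self._tasks, []
        return finished


def test_user_runs_queued_tasks():
    # Each line mirrors a call the user would have to make.
    queue = TaskQueue()
    queue.add("render")
    queue.add("composite")
    assert queue.run_all() == ["render", "composite"]


test_user_runs_queued_tasks()
```

If this test already feels awkward to write, the API deserves another pass
before any real implementation work begins.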
<p>Good design is important from the highest to the most fundamental levels. Take
the Python language itself as an example. The semantics of the language make it
quite human readable.</p>
<pre data-lang="python" class="language-python "><code class="language-python" data-lang="python">with open("/path/to/foo", "r") as file:
    ...  # Do something with the open file.
</code></pre>
<p>In this snippet, we use a <a href="https://docs.python.org/2/library/contextlib.html#contextlib.contextmanager">Context Manager</a> to safely open a
file to read. The keyword <code>with</code> implies we are performing an action with
something, making the code incredibly natural to read.</p>
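<p>The same idiom extends naturally to your own APIs. A sketch using
<code>contextlib.contextmanager</code> (the scene-handle resource here is
hypothetical):</p>

```python
from contextlib import contextmanager


@contextmanager
def open_scene(path):
    # Acquire a (hypothetical) resource, hand it to the caller,
    # and guarantee cleanup even if the block raises.
    scene = {"path": path, "open": True}
    try:
        yield scene
    finally:
        scene["open"] = False


with open_scene("/path/to/foo") as scene:
    assert scene["open"]
```

The caller reads "with an open scene, do something", and cleanup is implied by
the design rather than left to the user's discipline.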
<p>Ultimately, there is an infinite number of perspectives we can take when
designing for semantics. Taking a page from <a href="https://www.python.org/dev/peps/pep-0020/">The Zen of Python</a>:</p>
<blockquote>
<p>There should be one-- and preferably only one --obvious way to do it.</p>
</blockquote>
<p>To reiterate, make your APIs <strong>obvious</strong>: a user should not have to choose
between five different ways of doing the same thing. Just give them the one
simple way!</p>