Creating CSS APIs without JavaScript With the datasette-css-properties plugin
Simon Willison has a project called Datasette, an open source multi-tool for exploring and publishing data. I’m not sure I’m qualified to explain it, but roughly: it’s a tool that makes it easier to handle and do more with the data you have, through the web, like making that data queryable and giving it an API.
I would think, typically, you’d get the results of an API call against your data in something useful, like JSON. But Simon made a plugin that outputs the results as CSS custom properties instead, and blogged it:
It’s very, very weird—it adds a .css output extension to Datasette which outputs the result of a SQL query using CSS custom property format. This means you can display the results of database queries using pure CSS and HTML, no JavaScript required!
Here’s what I said just recently in “Custom Properties as State”:
This makes me think that a CDN-hosted CSS file like this could have other useful stuff, like today’s date for usage in pseudo content, or other special time-sensitive stuff. Maybe the phase of the moon? Sports scores?! Soup of the day?!
And Simon is like, how about roadside attractions?
My brain automatically worries about the accessibility of that, but… aren’t pseudo-elements fairly reliably read by screen readers these days? You still can’t select the text though, or find-on-page, which are both usability and accessibility issues, so don’t consider this a real thing that you really do for production work with unknown users.
His blog post demonstrates a slightly more dynamic example where the time of day outputs a different color. That makes me think of @property and declaring types for custom properties. I think this gets a smidge more useful when you can use the values that come back as specific syntaxes.
Styling Code In and Out of Blocks
We’ll get to that, but first, a long-winded introduction.
I’m still not in a confident place knowing a good time to use native web components. The templating isn’t particularly robust, so that doesn’t draw me in. There is no state management, and I like having standard ways of handling that. If I’m using another library for components anyway, seems like I would just stick with that. So, at the moment, my checklist is something like:
- Not using any other JavaScript framework that has components
- Templating needs aren’t particularly complex
- Don’t need particularly performant re-rendering
- Don’t need state management
I’m sure there is tooling that helps with these things and more (the devMode episode with some folks from Stencil was good), but if I’m going to get into tooling-land, I’d be extra tempted to go with a framework, and probably not framework plus another thing with a lot of overlap.
The reasons I am tempted to go with native web components are:
- They are native. No downloads of frameworks.
- The Shadow DOM is a true encapsulation in a way a framework can’t really do.
- I get to build my own HTML element that I use in HTML, with my own API design.
It sorta seems like the sweet spot for native web components is design system components. You build out your own little API for the components in your system, and people can use them in a way that is a lot safer than just copy and paste this chunk of HTML. And I suppose if consumers of the system wanted to BYO framework, they could.
So you can use like <our-tabs active-tab="3"> rather than <div class="tabs"> ... <a href="#3" class="tab-is-active">. Refactoring the components certainly gets a lot easier, as changes percolate everywhere.
I’ve used them here on CSS-Tricks for our <circle-text> component. It takes the radius as a parameter and the content via, uh, content, and outputs an <svg> that does the trick. It gave us a nice API for authoring that abstracted away the complexity.
So!
It occurred to me a “code block” might be a nice use-case for a web component.
- The API would be nice for it, as you could have attributes control useful things, and the code itself as the content (which is a great fallback).
- It doesn’t really need state.
- Syntax highlighting is a big gnarly block of CSS, so it would be kinda cool to isolate that away in the Shadow DOM.
- It could have useful functionality like a “click to copy” button that people might enjoy having.
Altogether, it might feel like a yeah, I could use this kinda component.
This probably isn’t really production ready (for one thing, it’s not on npm or anything yet), but here’s where I am so far:
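To give a rough idea of the shape of it, here’s a minimal sketch of a <code-block> element along the lines described below. This is illustrative rather than the exact code from the demo: it assumes Prism is loaded globally on the page, and the attribute handling is simplified.

class CodeBlock extends HTMLElement {
  connectedCallback() {
    // grab the code from the <pre> fallback content
    const pre = this.querySelector('pre')
    const code = pre ? pre.textContent : this.textContent

    const shadowRoot = this.attachShadow({ mode: 'open' })

    // build a fresh <pre><code> in the shadow DOM with the class Prism expects
    const language = this.getAttribute('language') || 'markup' // illustrative attribute
    const wrapper = document.createElement('pre')
    const codeEl = document.createElement('code')
    codeEl.className = `language-${language}`
    codeEl.textContent = code
    wrapper.appendChild(codeEl)
    shadowRoot.appendChild(wrapper)
    // (the Prism theme CSS would also need to be added to the shadow root for the colors to show)

    // run Prism over the shadow root after it has been appended,
    // as mentioned in the thought dump below
    if (window.Prism) {
      window.Prism.highlightAllUnder(shadowRoot)
    }
  }
}

customElements.define('code-block', CodeBlock)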
Here’s a thought dump!
- What do you do when a component depends on a third-party lib? The syntax highlighting here is done with Prism.js. To make it more isolated, I suppose you could copy and paste the whole lib in there somewhere, but that seems silly. Maybe you just document it?
- Styling web components doesn’t feel like it has a great story yet, despite the fact that Shadow DOM is cool and useful.
- Yanking in pre-formatted text to use in a template is super weird. I’m sure it’s possible to do without needing a <pre> tag inside the custom element, but it’s clearly much easier if you grab the content from the <pre>. Makes the API here just a smidge less friendly (because I’d prefer to use the <code-block> alone).
- I wonder what a good practice is for passing along attributes that another library needs. Like is data-lang="CSS" OK to use (feels nicer), and then convert it to class="language-css" in the template because that’s what Prism wants? Or is it better practice to just pass along attributes as they are? (I went with the latter.)
- People complain that there aren’t really “lifecycle methods” in native web components, but at least you have one: when the thing renders: connectedCallback. So, I suppose you should do all the manipulation of HTML and such before you do that final shadowRoot.appendChild(node);. I’m not doing that here, and instead am running Prism over the whole shadowRoot after it’s been appended. Just seemed to work that way. I imagine it’s probably better, and possible, to do it ahead of time rather than allow all the repainting caused by injecting spans and such.
- The whole point of this is a nice API. Seems to me things would be nicer if it was possible to drop un-escaped HTML in there to highlight and it could escape it for you. But that makes the fallback actually render that HTML, which could be bad (or even theoretically insecure). What’s a good story for that? Maybe put the HTML in HTML comments and test if <!-- is the start of the content and handle that as a special situation?
Anyway, if you wanna fork it or do anything fancier with it, lemme know. Maybe we can eventually put it on npm or whatever. We’ll have to see how useful people think it could be.
Avoiding recursive useEffect hooks in React
It’s fair to say that React 16.8 and the introduction of
hooks has really changed how we
write React. Hooks are one of those APIs that make you realise the flaws of the
previous approach after you stop using it. I remember being very skeptical of
hooks when they were first released, not thinking that the previous class based
design had many flaws, but I’ve since come to realise I was very wrong, and
hooks are a vast improvement on how we build React components. If you’re
interested in comparing the old vs the new, I wrote a
blog post refactoring a component to use hooks
that offers a nice comparison.
One area that has taken me some time to get used to is the dependency array of
the useEffect
hook. This lets you tell React when it should rerun the effect:
useEffect(
() => {
console.log('I run when `a` changes')
},
[a]
)
This useEffect
will be run:
- when the component is first mounted
- whenever the variable a changes.
But this led me to quite often end up with recursive useEffect calls,
where I’d need to rely on some state in order to update its value:
const [count, setCount] = useState(0)
// this is going to go on forever
// because the effect relies on the `count` variable
// and then updates the `count` variable
// which triggers the effect
// and so on...
useEffect(
() => {
setCount(count + 1)
},
[count]
)
This is a contrived example for the purpose of demonstration, but I also had
bigger examples where we had an object in state with many keys and values, and
we needed to read in the object and update one part of it:
const [userData, setUserData] = useState({
name: 'Jack',
friends: ['alice', 'bob'],
})
// also runs infinitely for the same reasons as above
useEffect(
() => {
const newUser = {
...userData,
friends: [...userData.friends, 'charlie'],
}
setUserData(newUser)
},
[userData]
)
The solution lies in how we call the set state functions (in the prior code
example, setUserData
is the “set state” function). There are two forms to
these functions:
setUserData(newUser)
setUserData(function(oldUser) {
const newUser = {}
return newUser
})
The first takes the new value and sets it. The second takes a function that is
called with the old value and is expected to return the new value. Let’s take
the previous useEffect
code example and update it to use the second form of
the set state function:
const [userData, setUserData] = useState({
name: 'Jack',
friends: ['alice', 'bob'],
})
// doesn't run infinitely! 👌
useEffect(() => {
setUserData(oldUser => {
const newUser = {
...oldUser,
friends: [...oldUser.friends, 'charlie'],
}
return newUser
})
}, [])
Do you notice what’s different here? We no longer have to depend on userData,
because we read it from the callback function that we give to the set state
function! This means that our useEffect
call is free to modify and set the new
user data without fear of recursion because it reads the old value by being
given it via the set state function. Therefore we can lose it from our
useEffect
dependencies array, meaning that useEffect
won’t rerun when it
changes!
My experience of this was that once I spotted this trick it made the useEffect
hook really click in my head. I’ve come to use the set state function variant
much more frequently – in fact, nearly exclusively inside useEffect
calls, and
I recommend giving it a go.
Making impossible states impossible: data structures in React
One of the things I like to spend a lot of time on is data structures. It’s one
of the first things I think about when building something: what data do I have
to work with, and what’s the best format for it to be in?
In my experience if you can get the data format correct everything else should
fall into place; a data structure that allows you to read and manipulate the
data easily is going to be much nicer to work with. You want the data structure
to do as much of the work for you as it can and it should work with you and not
feel like it gets in your way.
Interestingly, I think because of the strictly typed nature of the languages, I
find myself taking this approach much more when I’m working with Elm or
TypeScript: something about the presence of types leads me to think about
defining the types I’ll use through my application – and this leads to me
thinking about data structures. Today we’re going to look at a JavaScript
example where we’ll strongly consider the datatype that we use to solve a
problem.
Making impossible states impossible
There is a very popular Elm talk titled
“Making Impossible States Impossible”
by Richard Feldman which has become my
reference of choice for this topic. I highly recommend watching the video – even
if you don’t like or know Elm – because the approach transcends any given
language. The example for this blog post is also taken from that talk because
it’s perfect for what I want to discuss, so thank you Richard!
Tabs
Every frontend developer has built a tabbed interface at one point in their
lives, and it’s these that we’ll look at today. We’ll have some tabs at the top
of the page and then show the content for the currently active tab below it.
Today I’ll be using React for the UI but this is not important for the topic –
feel free to swap React for your framework of choice 👍
We have two bits of information that we have as data:
- all the tabs: their title and their content
- some data to know which tab is active and therefore which tab to highlight and
which content to show
Feel free to think for a moment about how you’d model this data.
This is my first pass, and I’m confident that I’m not the only one who would
take this approach:
const [activeIndex, setActiveIndex] = React.useState(0)
const tabs = [
{ title: 'Tab One', content: 'This is tab one' },
{ title: 'Tab Two', content: 'This is tab two' },
{ title: 'Tab Three', content: 'This is tab three' },
]
I’m hardcoding
tabs
here but let’s imagine in reality we’re building a Tab
library that others will consume and pass in the tabs.
The critical question: what impossible states does this data structure permit?
When we’re thinking about data structures and how to improve them this is the
question you want to be asking yourself. Take the data structure that you’ve
come up with and see if you can set values that cause impossible states. For
example, I can:
const [activeIndex, setActiveIndex] = React.useState(4)
// omitted the contents to save space
const tabs = [{}, {}, {}]
In this state I’ve set the activeIndex
to 4
(which would mean the 5th tab as
arrays are zero-indexed in JavaScript), but we only have three tabs. So this
state is impossible!
At this point you might be thinking that it doesn’t matter that this state
could exist, because we can write code to ensure that it can’t exist. And that
is true: we could write code to ensure that activeIndex
never gets set a value
that is out of bounds. And we could ensure all our click event listeners for our
tabs only set valid activeIndex
values. But if we had a data structure that
didn’t allow this impossible state, we wouldn’t have to write any of the code
we just spoke about. And that’s the value of thinking of data structures that
ban impossible states: they remove even the slightest chance of certain bugs
ever occurring because the data doesn’t allow them to.
In JavaScript land technically every data structure we come up with will allow
an invalid state because we could set any value to undefined or null. This
is where the typed languages have an edge: when you can ensure at compile time
that a certain value must exist, you can create data structures that truly
make impossible states impossible. For today’s post we’ll take the leap of
hoping that values that we expect to be present are indeed present.
Whilst it’s very hard to come up with a data structure that avoids any
impossible state, we can work on creating data structures that avoid obviously
invalid states, such as the problem above.
An alternative data structure
So if we want to avoid the problem of the activeIndex
being an invalid number,
how about we remove it entirely and track which tab is active:
const [activeTab, setActiveTab] = React.useState(tabs[0])
const [restOfTabs, setRestOfTabs] = React.useState(tabs.slice(1))
In this approach we split the actual tab object out and remember which one is
active. This does mean we will need a new key on each tab to know which order to
render them in, as we’ve lost the nice ordered array they were in, but maybe
this is a price worth paying for this data structure. Is this better or worse
than the previous attempt? And crucially: does it allow any invalid states?
If we assume that our code won’t go rogue and set values to null
(as
previously mentioned, this is where some types and a compiler would come in
handy), it’s harder to get this data into an invalid state. When a user clicks
on a tab we can swap which tab is the activeTab
. However there is a big red
flag to me here: two co-located useState
calls with very related bits of data.
This data structure opens us up to problems by storing two values in the state
together. Whenever you see two state values that are tightly related you are
likely to be opening yourself up to bugs where these values get out of sync. You
can either rethink how you are modelling your data, or reach for the
useReducer
hook,
which allows you to update multiple bits of state at once.
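For illustration (this isn’t code from the original post), a useReducer version might keep the two values together so a single action updates them both and they can’t drift apart:

// a sketch: both related values live in one reducer
const tabsReducer = (state, action) => {
  switch (action.type) {
    case 'activate':
      return {
        activeTab: action.tab,
        restOfTabs: [
          ...state.restOfTabs.filter(tab => tab !== action.tab),
          state.activeTab,
        ],
      }
    default:
      return state
  }
}

const [state, dispatch] = React.useReducer(tabsReducer, {
  activeTab: tabs[0],
  restOfTabs: tabs.slice(1),
})

// later, in a click handler: dispatch({ type: 'activate', tab: someTab })

Note that this sketch still suffers from the ordering problem discussed next.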
The fact that this data structure loses a key feature of our tabs – their
ordering – is also a red flag. We’ll have to either ask the consumer of our
module to pass in objects with an order
key, or do it ourselves. When you find
yourself having to mutate data to add properties you need because your data
structure doesn’t provide it, that’s a sign that maybe the data structure isn’t
quite right.
Zip lists
Let’s look at a final data structure: the zip list. The zip list breaks down a
list where we care about the active state into three parts:
// before:
const tabs = [tabOne, tabTwo, tabThree]
// after:
const tabs = {
previous: [tabOne],
current: tabTwo,
next: [tabThree],
}
The advantages of this approach over our last two are:
- We keep the ordering of the tabs and can easily construct an array of them ([...tabs.previous, tabs.current, ...tabs.next]).
- We now have to have a current tab at all times. And because we’ll construct this data structure from the initial array of tabs the user gives us, we can be pretty confident of avoiding some of the impossible states this data structure does allow (duplicated tabs).
- All our data is in one object: the previous attempt split the tabs up into two pieces of state which could more easily get out of sync; here we’ve got just one.
Notice how we still have impossible states here: tabs.previous could contain
the same tab as tabs.current, which would be a bug. But because it’s all in
one piece of data that we are going to write code to manipulate we can have
close control over this and those bugs are less likely than two individual
pieces of state becoming misaligned.
Let’s start our initial zip list implementation and see how we go. I’ll create a
function that takes in the initial array, sets the first item as active (in the
future we might allow the user to tell us which tab is active) and then create
our data structure:
const zipList = initialArray => {
  const [initialActive, ...restOfTabs] = initialArray

  const zip = {
    previous: [],
    current: initialActive,
    next: restOfTabs,
  }

  const setActive = zip => newActive => {
    // TODO: fill this in
    const newZip = zip
    return apiForZip(newZip)
  }

  const apiForZip = zip => ({
    asArray: () => [...zip.previous, zip.current, ...zip.next],
    isActive: tab => zip.current === tab,
    setActive: setActive(zip),
    activeTab: () => zip.current,
  })

  return apiForZip(zip)
}
When creating custom data structures the key is to hide the raw data behind a
nice API. If you expose the raw data it’s hard to change that structure because
people might rely on it, and in a mutable language world like JavaScript people
could reach in and change your data in whatever way they like. Notice how the
zip
object is not exposed and instead we provide a small API.
In our React component we can still map over tabs by doing
tabs.asArray().map(...)
, and we can determine the active tab via the
isActive()
function. The activeTab()
function lets us fetch the active tab
so we can show its content on the page. The final piece of the jigsaw is
setActive
, which needs a bit more thought. This is where we are going to write
more code than if we’d have taken the activeIndex
approach, but we’re trading
that off against the higher confidence we have in this data structure.
Programming is all about trade-offs, after all!
So we can move the tabs in our component into a piece of state:
const [tabs, setTabs] = React.useState(
zipList([
{ title: 'Tab One', content: 'This is tab one' },
{ title: 'Tab Two', content: 'This is tab two' },
{ title: 'Tab Three', content: 'This is tab three' },
])
)
And we can use the setTabs
function to update the state when a user clicks on
a tab (ensuring that our zip list’s API returns a new zip list from the
setActive
call):
{
  tabs.asArray().map(tab => (
    <li
      key={tab.title}
      onClick={() => setTabs(tabs.setActive(tab))}
      className={`${tabs.isActive(tab) ? 'border-red-800' : 'border-gray-800'}`}
    >
      {tab.title}
    </li>
  ))
}
The setActive
function takes a bit of thought to get right in terms of
updating the values. Let’s say we have this state:
const zip = {
previous: [tabOne, tabTwo],
current: tabThree,
next: [],
}
And now we click on tabOne
. We need to make the data structure become:
const zip = {
previous: [],
current: tabOne,
next: [tabTwo, tabThree],
}
To do this we can follow a set of steps:
- Figure out where the new active tab is: previous or next. For this example it’s in the previous state.
- We now need to split previous into two lists: the previous items that appear before the new active tab, and the items that appear after it. We need this because the ones that appear before need to stay in the previous list, but the items that appear after the item that’s about to become active need to go into the next list.
- We can then construct the new zip:

const newZip = {
  previous: [...previousItemsBeforeActive],
  current: newActive,
  next: [...previousItemsAfterActive, zip.current, ...zip.next],
}
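Putting those steps into code, the completed setActive might look something like this. This is a sketch that follows the steps above rather than the exact code from the published package; it lives inside zipList next to apiForZip, and names like previousItemsBeforeActive are just illustrative:

const setActive = zip => newActive => {
  // clicking the already-active tab changes nothing
  if (newActive === zip.current) {
    return apiForZip(zip)
  }

  const indexInPrevious = zip.previous.indexOf(newActive)
  let newZip

  if (indexInPrevious > -1) {
    // the new active tab was in the previous list: split previous around it
    const previousItemsBeforeActive = zip.previous.slice(0, indexInPrevious)
    const previousItemsAfterActive = zip.previous.slice(indexInPrevious + 1)
    newZip = {
      previous: previousItemsBeforeActive,
      current: newActive,
      next: [...previousItemsAfterActive, zip.current, ...zip.next],
    }
  } else {
    // mirror image: the new active tab was in the next list
    const indexInNext = zip.next.indexOf(newActive)
    const nextItemsBeforeActive = zip.next.slice(0, indexInNext)
    const nextItemsAfterActive = zip.next.slice(indexInNext + 1)
    newZip = {
      previous: [...zip.previous, zip.current, ...nextItemsBeforeActive],
      current: newActive,
      next: nextItemsAfterActive,
    }
  }

  return apiForZip(newZip)
}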
And with that we now have a functioning set of tabs with a zip list
implementation 👍.
That was…a lot of work?!
That might feel like an awful amount of work to go through just to get some tabs
listed on the screen. And to some extent, it was! But we’ve definitely gained
benefits from doing this work. Firstly, the Zip List isn’t specific to tabs:
whenever you find yourself having a list of things where one is considered
active in some form, this data structure is a great one to reach for. And you
now have a reusable implementation of a zip list ready to be used whenever the
time comes.
I’ve lost count of the number of bugs I’ve had because an activeIndex
type
tracker got out of sync: in our zip list we don’t rely on any other data:
there’s one object that controls everything about which item is active. That’s
going to pay off in terms of bugs we’ve avoided, for sure.
Is building a data structure like this worth it every single time you have
some tabs and you want to show one as active? Possibly not – that’s up to you.
As always in programming, it depends. But I hope this blog post inspires you to
think more carefully about data structures and ask how you can structure them to
work with you and help rule out impossible states.
NPM Package
I have published the Zip List implementation (well, a slightly tweaked one) as
an npm package so you can use it without having to implement it yourself! You can
find the repository on GitHub and
install it via npm or Yarn today 🎉:
yarn add @jackfranklin/zip-list
npm install @jackfranklin/zip-list
Getting started with GraphQL: what client to use?
When I first started working with GraphQL APIs my first challenge was to decide
what GraphQL frontend library I wanted to use. I can remember spending all
morning exploring all sorts of options, from small libraries like
graphql-request to slightly
larger ones like urql and finally the
most well known like Apollo. These are all great
libraries – in fact we use urql at work – but at this point in time I was
working with a tiny GraphQL library that I’d built for a side project and I
really didn’t need any complexity. I think I lost a good couple of hours trying
to decide before thinking: what if I made my own?
This post is not meant to criticise libraries: they provide a bunch of
features that many applications will want and need, but if you’re just getting
started, they might be overkill for your needs.
Do you need a library to use GraphQL?
I had in my head this mindset that making a request to a GraphQL API was
“special” and not something that I could do with the fetch
API, for example.
I’m not really sure where this came from but I think I’d seen so many talks
about Apollo and various client libraries doing all sorts of smart things I’d
ended up assuming that I’d use one of those. But Apollo packs in a vast array of
features that I really didn’t need on my side project. I wanted to make a
request and get the data. Concerns such as smart caching and cache invalidation
were not present for me.
When you’re starting to learn something it can be tempting to reach for
libraries to fill in gaps in knowledge but I highly recommend trying to avoid
doing this when possible. I’m very happy that I made the decision to write my
own tiny client because it plugged gaps in my knowledge and de-mystified how a
GraphQL API works. In this post I’ll talk through how to get started talking to
a GraphQL API just by using the fetch
API and nothing more.
A sample GraphQL API
We need a sample API for this and I’ve made one that lives on Heroku:
http://faker-graphql-api.herokuapp.com/graphql. This API returns some fake people
(all data is generated by Faker.js). It lets
us query for people and get their names:
{
people {
name
}
}
Returns an array of ten people and their names. This is the query we’re going to
use as our example today.
My dummy API is hosted on a free Heroku instance so please be patient if it
takes some time to boot up when you request it.
Making a request to a GraphQL API
It turns out there are some simple steps to follow to talk to a GraphQL
endpoint:
- All requests are POST requests
- You should pass the Content-Type header as application/json
- The body of the request should contain a string which is the GraphQL query
As long as we follow those rules we can easily use fetch
to talk to the API.
Let’s do it!
const api = 'http://faker-graphql-api.herokuapp.com/graphql'

export const request = ({ query }) => {
  return fetch(api, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      query,
    }),
  })
    .then(response => response.json())
    .then(result => {
      console.log('got here!', result)
      return result
    })
}
The request
function takes an object and expects the query
key to contain
the raw GraphQL query. The fetch
API takes the URL and an object of options,
which are used to configure the request: we set method: 'POST'
and the
Content-Type
header as discussed and then use JSON.stringify({ query })
to
generate the body for the request, passing in the query
that was passed in to
our request
function. Finally, the GraphQL API will return JSON so we parse
the response before returning it (I’ve logged it just to aid debugging but feel
free to skip that!).
With that we can make our request:
request({
query: `{ people { name } }`,
})
And you should get some people back! 🎉.
If you only need to make basic requests in your app you could stop here and be
done. We’ve saved having to install, learn and ship in our bundle any additional
libraries. Of course this comes with less functionality – but for some projects
that might be just fine.
If you do need caching and more advanced features I’d highly recommend a well
tested, established library rather than rolling your own!
Supporting variables
Another feature of GraphQL is that queries can take variables. For example, the
fake API lets us find a single person by their ID:
query fetchPerson($id: Int!) {
  person(id: $id) {
    name
  }
}
To support this our request function needs to accept variables as well and include them in the request:
export const request = ({ variables, query }) => {
  return fetch(api, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      query,
      variables,
    }),
  })
    .then(response => response.json())
    .then(result => {
      console.log('got here!', result)
      return result
    })
}
And now our client supports variables:
request({
query: `query fetchPerson($id: Int!) {
person(id: $id) {
name,
}
}`,
variables: {
id: 1,
},
})
If this is all you need, or you’re not using React for your frontend, you can
stop here. This client will be plenty good enough to keep you going as you work
with and get more familiar with GraphQL. By working with your own
implementations first you’ll find you have a greater fundamental understanding
when swapping to a library, and you’ll understand the features the library
provides better.
A React hook!
Finally let’s see how easy it would be to wrap this up in a React hook for those
of you working with React.
If you’re not familiar with hooks, I wrote
an introduction to them which will help get
you up to speed.
Creating the hook is a case of wrapping our request
function in a
React.useEffect
hook and storing the response via React.useState
:
export const useGraphQL = ({ variables, query }) => {
  const [data, setData] = React.useState(null)

  React.useEffect(
    () => {
      request({ variables, query }).then(setData)
    },
    [variables, query]
  )

  return [data]
}
This hook is missing some useful features like tracking if we’re loading or
not, but I’ll leave that as an exercise to the reader 😃
We can use this hook within a component like so:
const [data] = useGraphQL({
query: `{ people { name } }`,
})
And it works! There is one gotcha though that I want to highlight. If you do
this:
const [data] = useGraphQL({
variables: {},
query: `{ people { name } }`,
})
You’ll cause an infinite loop of requests, which isn’t what we want! This is
because React.useEffect
has variables
as a dependency and every time it
changes it will cause the effect to re-run. Every re-render this code runs and
variables: {}
creates a new object every time which means React.useEffect
will re-run.
We can fix this by remembering to wrap our variables
in a React.useMemo
hook
to ensure that we only recalculate the variables if we need to:
const vars = React.useMemo(
() => {
return {
id: props.id,
}
},
[props.id]
)
const [data] = useGraphQL({
variables: vars,
query: `{ people { name } }`,
})
But this requires you to remember to do this every time. Instead what we can do
is convert the variables
within our useGraphQL
hook to a string, via
JSON.stringify
, and use that as the dependency to useEffect
:
const stringifiedVars = JSON.stringify(variables)
React.useEffect(
() => {
request({ variables, query }).then(setData)
},
[stringifiedVars, query]
)
❗️This isn’t the best solution but it is the easiest and will serve just fine
for most projects. It’s also similar to how the popular
urql works
although that uses the fast-json-stable-stringify package to avoid some of the
performance problems with JSON.stringify.
Conclusion
Although this post has focused on GraphQL I hope that your main takeaway is to
resist diving straight for libraries. You can often get a long way with a few
lines of code you write yourself, particularly when learning a new technology.
This will help your understanding of the tech that you’re learning but also your
understanding of libraries: if you’ve written a library yourself, however small
and straightforward, you’re more likely to be able to follow how the more
complex libraries work.
A free video series on building web apps with Elm
If you’ve followed me on the internet for a while you’ll know that I’m a big fan
of Elm and I’ve written and spoken a fair bit about it.
There are some great guides for Elm out there but when I was learning I
struggled with being unable to find examples of how Elm apps were put together,
particularly as they got bigger.
So, now I’m a little more comfortable with Elm than I once was, I set about
recording a series that tries to show just that. My initial intentions were to
sell the videos as a course, but I’ve now decided to make every single video
entirely free and available on YouTube for you to enjoy.
You can get started with the playlist and watch all videos in order by
heading to YouTube
or watching here:
If you have any questions, Twitter is the best place to grab me 🙂
Saving manual work with babel-plugin-macros
babel-plugin-macros is a
project that I’ve followed with interest even though I’d never had a chance to
use it. Today that changed and I wanted to share my use case and my very
positive experience using it.
What is babel-plugin-macros?
The key feature of a Babel macro is that it runs at compile time. Rather than
writing JavaScript that gets bundled and executed in the browser, writing
JavaScript via babel-plugin-macros lets you run code at compile time. This means
that the code is executed on your computer when you bundle, not by your users
when they visit your website.
Most commonly these macros will either calculate some value (one that you can
calculate, and need, at compilation time, not at runtime in the browser), or generate some
other code that runs in the browser.
As an example, once configured (we’ll get to that in a moment), you can use
preval.macro to easily evaluate
some code at compile time:
import preval from 'preval.macro'
const twoPlusTwo = preval`module.exports = 2 + 2`
This will be executed at compilation time, and the code that ships in your
bundle looks like this:
const twoPlusTwo = 4
But, why is this useful?
The example above is ultimately not that useful – I think we all trust browsers
to be able to add two and two at runtime. Today I came across a problem at work
that I solved with a macro which made my job much easier.
At Thread we sell clothes. Part of the site allows
users to explore our entire product listing by filtering it down to what they
are after. One of the things they can filter by is “sub category”: this is
specific types of clothes within a broader category. For example, for the
category “Shirts”, we have sub categories of “Plain shirts”, “Formal shirts”,
“Denim shirts”, and so on. The feature I’m working on adds an image to each of
these sub categories in the UI so that people who might not have heard of the
terminology can still recognise the category (before working in fashion I had no
idea what a “chambray” shirt was!).
One of the designers on the team sent me all the images, and there are a lot.
We have 50+ sub categories across all products and I had two choices for hooking
up each image to the sub category:
- Just use an image tag and hard code the path:
  const source = `/media/images/sub-categories/${subCategory.slug}`
- Manually create a map of sub category slug => image URL. This would mean manually moving and importing 50+ images and hooking them into data from our API.
- Explore a solution that let me automatically load in the images and not have to do that manual work.
Unsurprisingly, I picked option three, and the game was on!
Avoiding the basic solution
Just to add a bit of colour to why I avoided what on paper is the easiest
solution:
<img
  src={`/media/images/sub-categories/${subCategory.slug}`}
  alt={subCategory.name}
/>
For us this approach has a major downside: we can no longer use Webpack and
ES2015 imports to manage all our assets. We have Webpack configured to take our
images and move them into the right place, and I didn’t want to have to special
case one folder of images just to make using them a little bit easier.
Setting up babel-plugin-macros
You might think that the macros need some complex setup but nope, it’s as easy
as:
- yarn add babel-plugin-macros
- Add 'macros' to your plugins list in your babel config.
And that’s it 👌.
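For reference, the Babel config change is tiny. Here’s a sketch of what it might look like in a babel.config.js (your config file location and existing presets and plugins will differ):

// babel.config.js
module.exports = {
  // ...keep whatever presets and plugins you already have
  plugins: ['macros'],
}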
Sub category slugs
Each sub category is an object with a few keys:
{
name: 'Denim shirts',
slug: 'denim-shirts',
id: 'abc123',
}
Thankfully I’d already discussed with our designer that we’d name the images
based on the slugs, so I knew that I had all the images mapped and ready. This
helped a lot and it’s something I’d recommend when working with a designer who
is creating a bunch of assets: chat ahead of time to figure out the best format
and naming scheme for sharing the results.
import-all.macro
The final piece of the puzzle is the
import-all.macro package. This
lets me generate a list of imports from a folder at compile time. For example:
import importAll from 'import-all.macro'
const a = importAll.sync('./files/*.js')
Gets turned into something like this at compile time:
import * as _filesAJs from './files/a.js'
import * as _filesBJs from './files/b.js'
const a = {
'./files/a.js': _filesAJs,
'./files/b.js': _filesBJs,
}
This is exactly what we want! We can use importAll
to create an object of all
the file paths and the image URLs – We have Webpack set up so that when we
import an image we get back the full path of where that image will be put during
build:
import image from './image.jpg'
// image => /media/images/image.jpg
Once I’d figured this out, I was ready to write some code 🎉.
Dealing with nested folders
To make the folder of images easier to work with we’d agreed to nest sub
categories under a folder of that category. This meant that I needed to do a bit
of data manipulation to get exactly what I wanted, because the file name
returned from import-all.macro
would have that extra folder in:
const images = importAll.sync('./category_images/**/*.png')
// images looks like:
{
'./category_images/shirts/denim-shirt.png': '/media/images/category_images/shirts/denim-shirt.png',
...
}
And what I wanted to end up with was a map where the key is purely the slug:
// this is what we want
{
'denim-shirt': '/media/images/category_images/shirts/denim-shirt.png',
...
}
This was a case of doing a bit of work on the object that import-all.macro
generates for us:
import importAll from 'import-all.macro'

const allCategoryImages = importAll.sync('./category_images/**/*.png')

const imagesMap = new Map(
  Object.entries(allCategoryImages).map(([fileName, imageUrl]) => {
    // fileName = "./category_images/accessories/bags.png"
    // so split and pick out just the "bags.png" bit
    const subCategory = fileName.split('/')[3]
    // remove the extension and return a [key, value] pair of [slug, imageUrl]
    return [subCategory.replace(/\.png$/, ''), imageUrl]
  })
)

export default imagesMap
And with that, we’re done! Now in our React component we can fetch the image
from our Map:
const imageUrl = imagesMap.get(subCategory.slug)
As a bonus, we can also easily add some logging to alert us if a sub category
is missing an image:
if (imagesMap.has(subCategory.slug) === false) {
  logError('...')
}
Conclusion
The solution that babel-plugin-macros lets us create is elegant and easy to work
with. It will also automatically deal with new images and new sub categories and
it’s easy for non-engineers to update a sub category image without needing any
help from us – they can just dump the new image in the right place and
everything will update. For tasks like this in the future we will definitely be
reaching for it again and I recommend giving it a go next time you’re faced with
a bunch of manual lifting that feels very much like it could be automated!
Structuring React applications
One of the best features of React is that it doesn’t force much convention and
leaves a lot of decisions up to the developer. This is different from say,
EmberJS or Angular, which provide more out of the box for you, including
conventions on where and how different files and components should be named.
My personal preference is the React approach as I like the control, but there
are many benefits to the Angular approach too. This comes down to what you and
your team prefer to be working with.
Over the years I’ve been working with React I’ve tried many different ways of
structuring my applications. Some of these ideas turned out to be better than
others, so in today’s post I am going to share all the things that have worked
well for me and hopefully they will help you too.
This is not written as the “one true way” to structure your apps: feel free to
take this and change it to suit you, or to disagree and stick to what you’re
working with. Different teams building different applications will want to do
things differently.
It’s important to note that if you loaded up the
Thread frontend, you would find places where all of
these rules are broken! Any “rules” in programming should be thought of as
guidelines – it’s hard to create blanket rules that always make sense, and you
should have the confidence to stray from the rules if you think it’s going to
improve the quality of what you’re working on.
So, without further ado, here’s all I have to say on structuring React
applications, in no particular order.
Don’t worry too much
This might seem like an odd point to get started on but I genuinely mean it when
I say that I think the biggest mistake people make is to stress too much about
this. This is especially true if you’re starting a new project: it’s impossible
to know the best structure as you create your first index.jsx
file. As it
grows you should naturally end up with some file structure which will probably
do the job just fine, and you can tweak it as pain points start to arise.
If you find yourself reading this post and thinking “but our app doesn’t do any
of these!” that’s not a problem! Each app is different, each team is
different, and you should work together to agree on a structure and approach
that makes sense and helps you be productive. Don’t worry about changing
immediately how others are doing it, or what blog posts like this say is most
effective. My tactic has always been to have my own set of rules, but read posts
on how others are doing it and crib bits from it that I think are a good idea.
This means over time you improve your own approach but without any big bang
changes or reworks 👌.
One folder per main component
The approach I’ve landed on with folders and components is that components
considered to be the “main” components of our system (such as a <Product>
component for an e-commerce site) are placed in one folder called components
:
- src/
- components/
- product/
- product.jsx
- product-price.jsx
- navigation/
- navigation.jsx
- checkout-flow/
- checkout-flow.jsx
Any small components that are only used by that component live within the same
directory. This approach has worked well because it adds some folder structure
but not so much that you end up with a bunch of ../../../
in your imports as
you navigate. It makes the hierarchy of components clear: any with a folder
named after them are big, large parts of the system, and any others within exist
primarily to split that large component into pieces that make it easier to
maintain and work with.
Whilst I do advocate for some folder structure, the most important thing is
that your files are well named. The folders are less important.
Nested folders for sub components if you prefer
One downside of the above is that you can often end up with a large folder for
one of these big components. Take <Product>
as an example: it will have CSS
files (more on those later), tests, many sub-components and probably other
assets like images, SVG icons, and more, all in the one folder.
I actually don’t mind that, and find that as long as the file is named well and
is discoverable (mostly via the fuzzy finder in my editor), the folder structure
is less important.
🔥 Hot take: Most people create way too many folders in their projects. Introducing 5 levels of nested folder structure makes things harder to find, not easier.
“Organizing” things doesn’t actually make your code better or make you more productive 👀
— Adam Wathan (@adamwathan) June 29, 2019
If you would like more structure though it’s easy to simply move the
sub-components into their own respective folders:
- src/
- components/
- product/
- product.jsx
- ...
- product-price/
- product-price.jsx
Tests alongside source code
Let’s start the points with an easy one: keep your test files next to your
source files. I’ll dive into more detail on how I like to structure all my
components so their code is next to each other, but I’ve found my preference on
tests is to name them identically to the source code, in the same folder, but
with a .test
suffix:
auth.js
auth.test.js
The main benefits of this approach are:
- it’s easy to find the test file, and easy at a glance to see if there are even tests for the file you’re working on
- all imports that you need are easier: no navigating out of a __tests__ directory to import the code you want to test. It’s as easy as import Auth from './auth'.
If we ever have any test data that we use for our tests – mocking an API call,
for example – we’ll put it in the same folder too. It feels very productive to
have everything you could ever need available right in the same folder and to
not have to go hunting through a large folder structure to find that file you’re
sure exists but can’t quite remember the name of.
CSS Modules
I’m a big fan of CSS Modules
and we’ve found them great for writing modularised CSS in our components.
I’m also a big fan of styled-components,
but found at work with many contributors using actual CSS files has helped
people feel comfortable working with them.
As you might have guessed, our CSS files go alongside our React components, too,
in the same folder. It’s really easy to jump between the files and understand
exactly which class is doing what.
The broader point here is a running theme through this blog post: keep all your
component code close to each other. The days of having individual folders for
CSS, JS, icons, tests, are done: they made it harder to move between related
files for no apparent gain other than “organised code”. Co-locate the files that
interact the most and you’ll spend less time folder hopping and more time coding
👌.
We even built a
strict CSS Modules Webpack loader
to aid our developer workflow: it looks to see what classnames are defined and
sends a loud error to the console if you reference one that doesn’t exist.
Mostly one component per file
In my experience people stick far too rigidly to the rule that each file should
have only one React component defined within it. Whilst I subscribe to the idea
that you don’t want too large components in one file (just think how hard it
would be to name that file!), there’s nothing wrong with pulling a small
component out if it helps keep the code clear, and remains small enough that it
makes little sense to add the overhead of extra files.
For example, if I was building a <Product>
component, and needed a tiny bit of
logic for showing the price, I might pull that out:
const Price = ({ price, currency }) => (
  <span>
    {currency}
    {formatPrice(price)}
  </span>
)

const Product = props => {
  // imagine lots of code here!
  return (
    <div>
      <Price price={props.price} currency={props.currency} />
      <div>loads more stuff...</div>
    </div>
  )
}
The nice thing about this is you don’t create another file and you keep that
component private to Product
. Nothing can possibly import Price
because we
don’t expose it. This means it’ll be really clear to you about when to take the
step of giving Price
its own file: when something else needs to import it!
Truly generic components get their own folder
One step we’ve taken recently at work is to introduce the idea of generic
components. These will eventually form our design system (which we hope to
publish online) but for now we’re starting small with components such as
<Button>
and <Logo>
. A component is “generic” if it’s not tied to any part
of the site, but is considered a building block of our UI.
These live within their own folder (src/components/generic
) and the idea
behind this is that it’s very easy to see all of the generic components we have
in one place. Over time as we grow we’ll add a styleguide (we are big fans of
react-styleguidist) to
make this even easier.
Make use of import aliasing
Whilst our relatively flat structure limits the amount of ../../
jumping in
our imports, it’s hard to avoid having any at all. We use the
babel-plugin-module-resolver
to define some handy aliases to make this easier.
You can also do this via Webpack, but by using a Babel plugin the same imports
can work in our tests, too.
We set this up with a couple of aliases:
{
components: './src/components',
'^generic/([\w_]+)': './src/components/generic/\1/\1',
}
The first is straight forward: it allows any component to be imported by
starting the import with components
. So rather than:
import Product from '../../components/product/product'
We can instead do:
import Product from 'components/product/product'
And it will find the same file. This is great for not having to worry about
folder structure.
That second alias is slightly more complex:
'^generic/([\w_]+)': './src/components/generic/\1/\1',
We’re using a regular expression here to say “match any import that starts with
generic
(the ^
ensures the import starts with “generic”), and capture what’s
after generic/
in a group. We then map that to
./src/components/generic/\1/\1
, where \1
is what we matched in the regex
group. So this turns:
import Button from 'generic/button'
Into:
import Button from 'src/components/generic/button/button'
Which will find us the JSX file of the generic button component. We do this
because it makes importing these components really easy, and protects us from if
we decide to change the file structure (which we might as we grow our design
system).
Be careful with aliases! A couple to help you with common imports are great,
but more and it’ll quickly start causing more confusion than the benefits it
brings.
A generic “lib” folder for utilities
I wish I could get back all the hours I spent trying to find the perfect
structure for all my non-component code. I split them up into utilities,
services, helpers, and a million more names that I can’t even remember. My
approach now is much more straightforward: just put them all in one “lib”
folder.
Long term, this folder might get so large that you want to add structure, but
that’s OK. It’s always easier to add extra structure than remove superfluous
structure.
Our lib
folder at Thread has about 100 files in it, split roughly 50/50
between tests and implementation. And it hasn’t once been hard to find the file
I’m looking for. With fuzzy file finders in most editors, I can just type
lib/name_of_thing
and I’ll find exactly what I want nearly every time.
We’ve also added an alias to make importing easier:
import formatPrice from 'lib/format_price'
.
Don’t be afraid of flat folders with lots of files in. It’s often all you need.
Hide 3rd party libraries behind your own API so they are easily swappable
I’m a big fan of Sentry and have used it many
times across the backend and the frontend to capture and get notified of
exceptions. It’s a great tool that has helped us become aware of bugs on the
site very quickly.
Whenever I implement a 3rd party library I’m thinking about how I can make it
easy to replace should we need to. Often we don’t need to – in the case of
Sentry we are very happy – but it’s good to think about how you would move away
from one service, or swap it for another, just in case.
The best approach for this is to provide your own API around the underlying
tool. I like to create a lib/error-reporting.js
module, which exposes a
reportError()
function. Under the hood this uses Sentry, but other than in
lib/error-reporting.js
, there is no direct import of the Sentry module. This
means that swapping Sentry for another tool is really easy – I change one file
in one place, and as long as I keep the public API the same, no other files need
know.
A module’s public API is all the functions it exposes, and their arguments.
This is also known as a module’s public interface.
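As a sketch of what that wrapper might look like (the function names here are mine rather than from the post, but the Sentry calls are the standard @sentry/browser API):

// lib/error-reporting.js: the only file that knows we use Sentry
import * as Sentry from '@sentry/browser'

export const initErrorReporting = () => {
  // DSN via an env var, or however you configure it
  Sentry.init({ dsn: process.env.SENTRY_DSN })
}

export const reportError = (error, extraInfo = {}) => {
  Sentry.withScope(scope => {
    scope.setExtras(extraInfo)
    Sentry.captureException(error)
  })
}

Swapping Sentry out later means changing only this file, as long as reportError keeps the same signature.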
Always use prop-types (or TypeScript/Flow)
Whenever I’m programming I’m thinking about the three versions of myself:
- Past Jack, and the (questionable at times!) code he wrote
- Current Jack, and what code I’m writing right now
- Future Jack, and how I can write code now that makes his life as easy as
possible later on
This sounds a bit silly but I’ve found it a useful way to frame my thinking
around approaches: how is this going to feel in six months time when I come
back to it?
One easy way to make current and future versions of yourself more productive is
to document the prop-types that components use! This will save you time in the
form of typos, misremembering how a certain prop is used, or just completely
forgetting that you need to pass a certain prop. The
eslint-react/prop-types
rule
comes in handy for helping remind us, too.
Going one step further: try to be specific about your prop-types. It’s easy to
do this:
blogPost: PropTypes.object.isRequired
But far more helpful if you do this:
blogPost: PropTypes.shape({
id: PropTypes.number.isRequired,
title: PropTypes.string.isRequired,
// and so on
}).isRequired
The former will do the bare minimum of checks; the latter will give you much
more useful information if you miss one particular field in the object.
Don’t reach for libraries until you need them
This advice is more true now with the
release of React hooks than it ever has been
before. I’ve been working on a large rebuild of part of
Thread’s site and decided to be extra particular about
including 3rd party libraries. My hunch was that with hooks and some of my own
utilities I could get pretty far down the road before needing to consider
anything else, and (unusually! 😃) it turned out that my hunch was correct.
Kent has written about this in his post “Application State Management with React”
but you can get a long way these days with some hooks and React’s built in
context functionality.
There is certainly a time and a place for libraries like Redux; my advice here
isn’t to completely shun such solutions (and nor should you prioritise moving
away from it if you use it at the moment) but just to be considered when
introducing a new library and the benefits it provides.
Avoid event emitters
Event emitters are a design pattern I used to reach for often to allow for two
components to communicate with no direct link between them.
// in component one
emitter.send('user_add_to_cart')
// in component two
emitter.on('user_add_to_cart', () => {
// do something
})
My motivation for using them was that the components could be entirely decoupled
and talk purely over the emitter. Where this came back to bite me is in the
“decoupled” part. Although you may think these components are decoupled, I
would argue they are not, they just have a dependency that’s incredibly
implicit. It’s implicit specifically because of what I thought was the benefit
of this pattern: the components don’t know about each other.
It’s true that if this example was in Redux it would share some similarities:
the components still wouldn’t be talking directly to each other, but the
additional structure of a named action, along with the logic for what happens on
user_add_to_cart
living in the reducer, makes it easier to follow.
Additionally the Redux developer tools makes it easier to hunt down an action
and where it came from, so the extra structure of Redux here is a benefit.
After working on many large codebases that are full of event emitters, I’ve seen
the following things happen regularly:
- Code gets deleted and you have emitters sending events that are never listened to.
- Or, code gets deleted and you have listeners listening to events that are never sent.
- An event that someone thought wasn’t important gets deleted and a core bit of functionality breaks.
All of these are bad because they lead to a lack of confidence in your code.
When developers are unsure if some code can be removed, it’s normally left in
place. This leads to you accumulating code that may or may not be needed.
These days I would look to solve this problem either using React context, or by
passing callback props around.
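As a small illustration (not code from the post, and the component names are made up), the same “add to cart” communication becomes explicit when a shared parent passes a callback prop down:

import React from 'react'

// the parent owns the behaviour and passes it down explicitly
const Page = () => {
  const [cart, setCart] = React.useState([])
  const addToCart = item => setCart(oldCart => [...oldCart, item])

  return (
    <>
      <ProductListing onAddToCart={addToCart} />
      <CartSummary cart={cart} />
    </>
  )
}

// "component one" now calls the callback it was given,
// so the dependency is visible in its props
const ProductListing = ({ onAddToCart }) => (
  <button onClick={() => onAddToCart({ id: 'abc123' })}>Add to cart</button>
)

const CartSummary = ({ cart }) => <span>{cart.length} items in your cart</span>

If the callback is no longer passed or used, that shows up immediately, unlike an event that silently has no listeners.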
Make tests easy with domain specific utilities
We will end with a final tip of testing your components (PS:
I wrote a course on this!): build out a suite of
test helper functions that you can use to make testing your components easier.
For example, I once built an app where the user’s authentication status was
stored in a small piece of context that a lot of components needed. Rather than
do this in every test:
const wrapper = mount(
  <UserAuth.Provider value={{ name: 'Jack', userId: 1 }}>
    <ComponentUnderTest />
  </UserAuth.Provider>
)
I created a small helper:
const wrapper = mountWithAuth(ComponentUnderTest, {
name: 'Jack',
userId: 1,
})
This has multiple benefits:
- each test is cleaned up and is very clear in what it’s doing: you can tell quickly if the test deals with the logged in or logged out experience
- if our auth implementation changes I can update mountWithAuth and all my tests will continue to work: I’ve moved our authentication test logic into one place.
Don’t be afraid to create a lot of these helpers in a test-utils.js
file that
you can rely upon to make testing easier.
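The mountWithAuth helper above might be implemented roughly like this: a sketch assuming enzyme’s mount and a UserAuth context module (the import path is hypothetical, so adapt it to wherever your auth context actually lives):

// test-utils.js
import React from 'react'
import { mount } from 'enzyme'
import { UserAuth } from './lib/user-auth' // hypothetical path

export const mountWithAuth = (Component, user) =>
  mount(
    <UserAuth.Provider value={user}>
      <Component />
    </UserAuth.Provider>
  )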
In conclusion
In this post I’ve shared a bunch of tips from my experiences that will help your
codebase stay maintainable and more importantly enjoyable to work on as it
grows. Whilst every codebase has its rough edges and technical debt there are
techniques we can use to lessen the impact of it and avoid creating it in the
first place. As I said right at the start of this post, you should take these
tips and mould them to your own team, codebase, and preferences. We all have
different approaches and opinions when it comes to structuring and working on
large apps. I’d love to hear other tips you have: you can tweet me at
@Jack_Franklin and we can chat.
Converting a JS library to TypeScript: Part 3
If you missed the prior videos, you can find them here: Part 1 and Part 2.
Today we’re implementing more of test-data-bot’s API in TypeScript and diving
into using 3rd party libraries, specifically the FakerJS library. We’ll see how
best to think about and model types by recognising a situation where our first
typed approach has failed to provide clarity, and see how time spent thinking
about remodelling types really pays off.
You can watch it on YouTube, either by
clicking here to view it directly
or using the embedded player below. I filmed the video in 1080p so it should be
crystal clear 👌.
Using Windows 10 and WSL for frontend web development
I’ve been an exclusively Mac developer ever since I bought a second hand MacBook
(remember the all white, plastic ones?). I absolutely loved it and as I got more
into software development and discovered the terminal it became hard for me to
see how I could go back to Windows.
When I started my first full time engineering role the company provided a
MacBook Pro and a Cinema Display. This was so exciting! Over the next few years
I was provided exclusively with MacBook Pros to work on (which I recognise is a
fortunate position to be in).
When Apple released the latest iteration of the MacBook Pro, with its touchbar
and keyboard woes, I did begin to wonder if Windows was going to end up being
something I’d have to try. Reviews online and from friends and colleagues who
had these MacBooks were not positive. About a year ago I was due a new laptop
at work and was given the newest MacBook Pro. At around the same time I was
starting to think about buying a laptop myself so I didn’t rely on my work
machine for personal projects. I’m also an Android phone user, so I’m not as
invested in the Apple ecosystem as others, which makes the potential swap to
Windows easier, I think.
The rest of this post is very much based on my opinions: none of this is a
recommendation on what you should do. We all have different preferences and
opinions on which hardware and software combination is best for us.
Sadly I’ve not found the experience of the MacBook Pro to live up to either its
“Pro” naming or its “Pro” price point. Whilst I think I’m in the minority of
people who actually don’t mind the butterfly keyboard, I’ve found the software to
have some constant issues that I’ve struggled with. I’ve had the MacBook
completely shut down whilst running a workshop for 40 people because it told me
it was charging the battery when it wasn’t. I have to hard reset the machine when I
try to wake it from sleep at least once or twice a week in order to get anything
beyond a blank screen (the first time it did this I thought it had broken). I’ve
had regular issues with the HDMI dongle (and yes, I did pay full price for the
official Apple dongle 😢) and it not connecting properly to external screens. As
someone who does a reasonable amount of talking and teaching this has become a
real issue to the point where I considered taking a backup laptop because I
didn’t trust the MBP to work properly.
Windows and WSL
I’d been following the work on WSL (Windows Subsystem for Linux) for some time
and found it a very compelling prospect; being able to run a Linux distribution
from within Windows could be a great way to make Windows more feasible for the
development work I do. Coupled with the
VS Code WSL plugin,
which makes it seamless to run VS Code with files from that Linux subsystem, I
felt it could be a viable alternative.
Taking the plunge
So I decided, given my MBP frustrations, to go for it. I did some research into
machines and went for a Dell XPS, which are regularly given very high reviews
online. Some (non-engineering) colleagues at work have them and spoke highly of
the machine. It worked out at ~£1000 less than the MacBook Pro cost, which I
figured was a very good saving – but only if I could work effectively on the
machine.
Getting started with WSL
I didn’t really have a clue where to start with setting up the Windows machine.
I was fighting years of Mac muscle memory and took to Google to find posts to
point me in the right direction.
Dave Rupert’s post on webdev with Windows
was the best blog post I found and really helped explain some things and point
me in the right direction. However, that post was written in early 2018, and
some things have changed, which means the steps are simpler now. Dave mentions
needing to install Git on the Windows side so VS Code can find it, but with the
VS Code WSL plugin that’s not needed as it plugs into the git
that you have
installed on the Linux side. I also referred to the
official Windows WSL installation instructions,
using those to verify if a blog post was up to date or not.
The terminal
I’ve been a solid fan of iTerm2 for a long time and was struggling to find a
terminal on Windows that could get close to it. I tried a few before discovering
that the next big update to Windows will include a brand new terminal app! Even
better, you can download it now from the Windows store. The
Windows Terminal has provided me with
everything I need; it can easily be configured via JSON (so I can get my custom
font in there just fine) and you can configure it to automatically connect to
your Linux distribution when it starts up, saving the need to type ubuntu
every time you fire up a command line prompt.
Seamless workflow
The new terminal, coupled with VS Code and the Remote plugin, gets me an
experience on Windows 10 that’s pretty much identical to my Mac workflow:
- Fire up a terminal.
- Navigate into the project directory.
- Run code . to load VS Code with that directory active.
- Let the VS Code Remote plugin connect (this is normally quick so doesn’t cause any delays).
- Start coding!
Everything within VS Code works perfectly; if I pop open a terminal there it
will be in my Ubuntu WSL, I can use the Git UI without any fuss, and extensions
run just fine too. I’ve yet to hit any snags with this workflow.
The frustrations
The above might make it sound completely plain sailing but there have been
teething issues along the way that are worth considering if you’re thinking of
trying the swap to Windows:
- It’s a known problem that file reading/writing via WSL is much slower than it should be. This is due to a limitation of how WSL works. The great news is that WSL2 will fix this, but it’s not out yet (unless you run an “Insiders” build of Windows 10 that is slightly less stable). In practice I don’t find slow read/writes to be much of an issue but you can notice it, particularly if you’re npm installing.
- This is more on me than on Windows, but having used OS X exclusively for so long it’s taking some time to get used to Windows and its keyboard shortcuts. It was definitely a few weeks before I felt comfortable and had found some 3rd party apps that helped replicate some apps from OS X that I was missing. If you take the plunge, be prepared for a bit of frustration as you and your muscle memory adapt.
- I miss the Mac trackpad. The Dell one is perfectly good, but it’s not quite as nice to use. That said, the keyboard is so much nicer, so this one evens itself out.
- Because I’m using this laptop for side projects and mostly frontend work I don’t hit upon any limitations of WSL, but there are plenty of apps or libraries that can cause issues when run within WSL. If you’re expecting WSL to just work with everything, I would temper your expectations slightly. That said, WSL2 supposedly fixes a lot of this (I saw a video where someone runs Docker via WSL2, which is quite cool!) so this might get better once WSL2 is out.
In conclusion
I’ve been pleasantly surprised with my journey into Windows 10 so far and it’s
gone much better than expected! With WSL2 and further improvements to the
developer workflow on Windows, I’m excited to see where we are in another 6-12
months’ time. It’s really exciting to see Microsoft shift and take this stuff
more seriously – and they are doing an excellent job!