When a Clojurescript app depends on a regular JS library, such as React for example, it is typical to bundle the library in with the application code. There are different ways this can happen, for example:

- :target :bundle with a bundler such as webpack
- shadow-cljs with its built-in bundling

I am a fan of shadow-cljs and so would typically use the second option. What this actually does is apply :simple optimizations to 3rd-party code, which means Google Closure is going to read 3rd-party libs when the app is being built. Sometimes though, Closure cannot understand the 3rd-party code, for example because it doesn't have support for class fields. In this really interesting talk from Alex Davis, he says he is seeing more and more popular JS libraries that Closure can't handle. I've only had the issue once myself, and thankfully was able to configure shadow to use a different file than the problematic one.
So, what to do?
Sticking with shadow but using it with a different provider (e.g. webpack) is an option, but for browser apps there is another interesting option to consider: get pre-processed 3rd party libraries directly in the browser via a script
tag (e.g. from a CDN such as unpkg).
I used a version of this approach for my experiments that called the deja-fu library's rationale into question. There, I just had a couple of script tags for 3rd party libs, followed by a script tag getting the application code. This meant script tags were order-sensitive and would not scale well because transitive dependencies would not be fetched automatically. Still, for a simple app it worked fine.
Recently though, all browsers have got support for importmap, which is best explained by example:
<script type="importmap">
{
  "imports": {
    "react": "https://esm.sh/react@18.2.0",
    "react-dom/client": "https://esm.sh/react-dom@18.2.0",
    "@tanstack/react-router": "https://esm.sh/@tanstack/react-router",
    "my-demo-app": "/cljs-importmap-demo/cljs-out/main.js"
  }
}
</script>
<script type="module">
import start from "my-demo-app";
start();
</script>
The imports map is a bit like package.json dependencies - it says which libraries are needed and where to get them - all must arrive as ES6 modules. Transitive dependencies are also retrieved. After the importmap, the script tag with type="module" says to interpret the code within as an ES6 module. Here, it just imports the application code and starts it.
The module my-demo-app just contains the application code, not any 3rd-party libraries. Generating the module from Clojurescript code is just a matter of using shadow-cljs' documented options for that, for example:
{:target :esm
:js-options {:js-provider :import}
:modules {:main {:exports {'default 'com.widdindustries.demo-app.app/init}}}}
Here is an example app demonstrating this technique and here is the source code for that.
This is the first time I've tried using importmap. I failed to google any experience reports of anyone using it from Clojurescript, hence this post. Here are some pros and cons I am aware of so far:
Using importmap, 3rd party libraries can be cached. The application code will also be cached, but is likely to change at a faster rate than library versions do, so will be downloaded more frequently - but it will be smaller than the traditional bundled version.
The importmap is specified in the html file, but will also need to be specified again for a page that loads tests for example. Also, it may be required to use a dev-time version of a library locally, but deploy with the optimized one. For example, React performance profiling tools only work with the dev-time React version. It is possible to conditionally create the importmap, for example, if on localhost, create one map, if deployed a different one.
The 3rd party libraries are being retrieved wholesale - ie no dead code elimination could happen here. Is significant dead code elimination a thing in JS-land these days though? I've heard of Rollup, but I haven't tried out how it would help trim down React and the like.
importmap is a relatively recent addition to browsers - so might not be suitable for some potential users.
Loading speed vs bundled apps, aka time-to-interactive (TTI)? I haven't measured anything yet. Please comment if you have experience of this.
Any more you'd add? Please use the link below to discuss.
Firstly, here are my Clojurescript dev setup requirements:
Anything you'd add? If so, please see the link at the bottom for how to discuss.
I should also say that I'm generally developing in multi-person teams building SPAs with considerable business complexity. However, the list is still the same even for my own open source or hobby projects, though in some situations not everything above is a must-have. For example, if you only have a small number of fast-running tests then the items above about running subsets of tests will not be so important.
I recently started to do some work on a project with around 800 Clojurescript (browser) tests - which in itself may not sound massive, but there are a number of slower DOM-clicking tests, so total test-run time above 20 minutes. It goes without saying that one would not want to run all the tests in one go - but this fact meant that builds and tests had been split apart and this had led to quite a bit of complexity in the build setup: Local dev was done with figwheel+webpack, configured with multiple extra mains, and shadow+karma in the CI environment.
With the existing build setup, on saving a file you could be waiting 10+ seconds to see the incremental build finish - ouch! To run a single test required knowing which figwheel 'extra-main' the test would be compiled into and loading the auto-test browser page for that, and then doing some other steps I'm not even going to get into... all in all, not ideal.
So... what to do? My preferred Clojurescript build and test setup for the past couple of years has been shadow plus kaocha-cljs2. Everyone knows shadow of course, but kaocha-cljs2 seems weirdly unstarred (< 20) and un-discussed on the interwebs. The combination of these two gives me the above wishlist of course - that's why I chose it. But how well would it scale to the new megaproject? How easy would it be to set up?
Possibly one reason kaocha-cljs2 seems under-appreciated is that by design there are more moving parts compared to other Clojurescript test setups I have used - for example one needs an extra server (Funnel) for 2-way communication between jvm and js environments.
However, setup couldn't have been easier - and that's because I have a little ready-made shadow+kaocha-cljs2 template that I use in all my projects and libraries. I've called this template tiado-cljs2 and if you want to try it on your project, you'll have it up and running in minutes - see the README for instructions.
In the way I set up Shadow on this project, there are 4 'watches' going at once, one for the main app, one for tests and a couple of others for some miscellaneous apps. Shadow incrementally compiles just what is needed so if changing a test file, just the test build kicks in. So compared to the old build, incremental compile is often around 5-10x faster.
When it comes to the tests, I have used kaocha suites to split the tests based on ns-patterns - which in CI can be run concurrently. Kaocha-cljs2 doesn't support 'ns-patterns' out of the box yet (as kaocha does for clojure tests) - but luckily kaocha supports user-defined hooks so adding it was not difficult.
With these mega test-suites, the default timeout was not enough. User-defined timeouts are not currently respected by kaocha-cljs2 - so a little monkey patching was needed.
As well as running individual suites, the holy grail of clojurescript testing is surely having an IDE hotkey to run the test under the caret. This is achieved with a simple macro invoking (kaocha.repl/run xxx)
- a macro was required (rather than a function) so that it can be invoked when either in a cljs or clj repl.
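A minimal sketch along those lines (the macro name is mine; kaocha.repl/run is kaocha's documented REPL entry point):

```clojure
(ns my.dev.test-runner
  (:require [kaocha.repl]))

;; Sketch: macros expand on the JVM, so invoking kaocha.repl/run during
;; expansion means the same form works whether the connected REPL is
;; clj or cljs. The unevaluated symbol is passed straight to kaocha.
(defmacro run-test-at-point
  [test-id]
  (kaocha.repl/run test-id)
  nil)

;; usage (wired to an IDE hotkey): (run-test-at-point my.app.foo-test/a-test)
```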
I could have had multiple test watches instead of one - each watching a subset of tests (this is a shadow feature). However the single one works well enough and makes life easier for developers as there is just a single place for tests to run.
So, whilst compilation and test-running times still seem a bit bigger compared to what I'm used to, the whole thing now feels far more manageable. There may be scope for modularization of the app - I don't know yet, but I'm much happier to investigate that and experiment with a solid, speedier build+test setup.
The fact that all this works as well as it does is thanks to the shadow and kaocha maintainers of course. Clojurescript would not be in such a good place without them!
This post is about the #inst reader literal from Extensible Data Notation (hereafter referred to as 'edn'): how it behaves by default in Clojure(script) and when it might not be sufficient for representing date/time information. The majority of the content of this post comes from the Rationale section of time-literals, a Clojure(Script) library which provides tagged literals for java.time objects.
Support for edn and its reader literals was a headline addition in Clojure 1.4, and with that came built-in support for the #inst tag. The #inst
tag is a part of the edn spec, where it is defined as representing an instant in time, which means a point in time relative to UTC that is given to (at least) millisecond precision. The format of #inst
is RFC3339, which is like ISO8601 but slightly wider.
In Clojure(script), #inst
is read as a legacy platform Date
object by default, but as is made clear by the edn spec and by this talk from Rich Hickey the default implementation is just that: #inst
may be read to whatever internal representation is useful to a consuming program. For example a program running on the jvm could read #inst
tags to java.time.Instant (or java.time.OffsetDateTime if wanting to preserve the UTC offset information). It seems to me unfortunate that Clojure(script) provided defaults for #inst
because users may not realise it is 'just a default', but that's just my opinion. My guess is that Clojure is trying to be both simple and easy in this case.
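As a sketch of what that looks like, clojure.edn lets callers supply their own reader for #inst, so the tag can be read to a java.time.Instant instead of the default java.util.Date (assuming here that the incoming strings are strictly ISO-8601/UTC formatted, which Instant/parse requires):

```clojure
(require '[clojure.edn :as edn])

;; Sketch: override the default #inst reader with one producing Instants.
;; The reader function receives the tagged string.
(edn/read-string
  {:readers {'inst (fn [s] (java.time.Instant/parse s))}}
  "#inst \"2024-01-01T00:00:00Z\"")
;; => a java.time.Instant rather than a java.util.Date
```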
Although the edn readme doesn't say this explicitly, to avoid 'reinventing the wheel', when conveying data using edn format, built-in elements seem to me to be preferable to user-defined elements. For example, if one wants to convey a map, {:a 1 :b 2}
is preferred to #foo/map "[[:a 1] [:b 2]]"
- unless of course one wanted to convey something additional about the map, ordering perhaps. Similarly, if conveying an instant in time
use #inst
.
There are two situations where reader literals are useful:

- notation of data in source code
- conveying edn data between processes

Although they have many similarities and overlap, Clojure allows these cases to be considered separately and for good reason, as explained below:
There are many kinds of things relating to date and time that are not an instant in time
, so #inst
would not be an appropriate way to tag them. For example the month of a particular year such as 'January 1990' or a calendar date such as 'the first of June, 3030'. There are no built-in edn tags for these, but tags can be provided in the user space, as they are by the time-literals library.
Note that the default Clojure reader behaviour is to accept partially specified instants, such as #inst "2020"
(and read that to a Date with millisecond precision) - but this is specific to the Clojure implementation and not valid edn (ie not RFC3339).
Clojure provides two mechanisms for printing objects - abstract and concrete - as this code, printing the same object both ways, shows:
(let [h (java.util.HashMap.)]
{:abstract (pr-str h)
:concrete (binding [*print-dup* true]
(pr-str h))})
=> {:abstract "{}", :concrete "#=(java.util.HashMap. {})"}
The concrete representation is sometimes useful to know and also the string output can be passed back to the reader to recreate the same internal representation again, which is known as round-tripping
.
The default readers and printers of platform date objects don't allow round-tripping, the reason for which is unknown.
This is relevant to the two java.time types which logically correspond to #inst (java.time.Instant and java.time.OffsetDateTime). The time-literals library contains specific readers and printers for those objects so that they do round-trip.
When conveying these objects in edn format, they should be tagged as #inst
(as per above argument about preferring built-in elements). To do that with time-literals
, simply provide your own implementation of clojure.core/print-method
for Instant and/or OffsetDateTime. With *print-dup*
true, the concrete type will still be printed.
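A sketch of the printing side (assuming the default ISO/UTC string that Instant's toString produces is what you want inside the tag):

```clojure
;; Sketch: make java.time.Instant print as the built-in #inst tagged
;; literal, rather than the default #object[...] representation.
(defmethod print-method java.time.Instant
  [^java.time.Instant instant ^java.io.Writer w]
  (.write w (str "#inst \"" instant "\"")))

(pr-str (java.time.Instant/parse "2024-01-01T00:00:00Z"))
;; => "#inst \"2024-01-01T00:00:00Z\""
```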
Consider this code from a Clojure namespace:
(ns foo.bar)
(def one-day #time/period "P1D")
(defn one-more-day [period]
(-> period (.plusDays 1)))
Now answer:

1. Will this namespace compile?
2. Will (one-more-day one-day) work?

Go back and have a look if required; I will reveal the answers in the next sentence.
The answer to 1. is maybe, ie only if *data-readers*
contains a mapping for time/period
AND the reader function is already loaded in the process. Just having a mapping in data_readers.cljc is not enough. Add a side-effecting require for that reader function you say? No thanks.
The answer to 2. is again maybe. If the mapping for time/period
is set up AND the reader function returned a java.time.Period then it will work.
So tl;dr: reader literals in code can be made to work, but it is not good practice IMHO. That goes for user-defined literals but also #inst
and #uuid
. Typing a few extra characters to call the actual constructor function directly is not so hard.
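For instance, the period literal from earlier could be written as a plain constructor call instead:

```clojure
;; Instead of the literal #time/period "P1D", call the constructor
;; directly - no reader mappings need to be loaded for this to work:
(def one-day (java.time.Period/parse "P1D"))
```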
I don't mean I never have files with literals in them, but not in code I expect anybody else (incl. myself at a later date) to just be able to 'pick up and run'. If I'm flowing around in my own space then it's fine. If I get to 'crystalising stuff out' - e.g. to tests for CI, then I replace any literals.
Btw if you want to do a find/replace for #inst
in your source files then clojure.instant/read-instant-date
or cljs.reader/parse-timestamp
are probably the functions you need ;-)
tl;dr: convenience macros for writing data-logging statements such as (log/info "request-for-help" {"priority" "high"}) are available as slf4clj.

A couple of years ago I made a survey of all the logging libraries one might use from Clojure. The majority of contenders (clojure.tools.logging foremost among them) I ruled out fairly early because they only log strings, not data.
To explain this limitation: in all logging frameworks you can format messages as json or something and get {level: "INFO", message: "Commissioner Gordon called because Gotham city is under attack from the Joker"}. Whilst useful, what is more queryable is to log arbitrary data (key-value pairs) and have that data passed as-is to appenders, which might serialise to json, or write to a database. An equivalent message formatted as JSON might be {level: "INFO", message-type: "request for help", urgency: "high", caller: "Commissioner Gordon", foe: "Joker", target: "Gotham"}
. Assuming you're not still shelling into boxes and grepping log files, IMO structured log data is something you can't go back from.
The requirement to log data left two main contenders, MuLog and Log4j2. At the time I decided Log4j2 seemed like the boring, safe choice. Well, that didn't turn out to be quite right haha!
As a result of opting for Log4j2, I put some helper functions in a lib for Clojure users writing log statements against Log4j2. Note: the README for those contains more detailed comparison of existing Clojure logging libraries.
Released since I made that review, and created as a result of log4shell, is Amperity's Dialog, which is a logging backend for slf4j (1.x, string-based) - the de facto Java logging facade. Dialog also provides a logging API based on logging strings, or suggests you use clojure.tools.logging (strings again).
Log4j2 is a 'logging implementation' or 'backend'. Ideally one would write log statements against a logging abstraction, where log statements get channeled to whatever logging backend is in place. This is especially important when writing library code. Users of e.g. Carmine will find logs coming via Timbre whether they like it or not.
Looking at the options here, assuming we want to log data ofc, MuLog might be a choice. It has been made to plug into an slf4j 1.x backend, so surely can be made to plug into other things.
The most obvious choice though is SLF4J, apparently the most popular Java library of any kind. The 1.x version of this has been around for a long time and as you'd expect from an API dating from the noughties it only logs strings. With the 2.0 release, that has changed.
Enter slf4j 2.0 - which was released toward the end of 2022. The 2.0 version of this popular logging facade newly includes an API for logging data, whilst remaining backwards compatible with the 1.x API.
The fluent API contains the addKeyValue(String key, Object value)
method for structured logging. It's up to implementations as to what to do with the structured data. They may merge it to the MDC for that message for example as Log4j2-slf4j bridge does. The MDC is a map of String->String though, which is a problem if the value happens to be anything other than a string.
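From Clojure, the fluent API can be used via plain interop - a sketch using calls that exist in slf4j 2.x:

```clojure
;; Sketch: slf4j 2.x fluent API via Clojure interop. atInfo returns a
;; LoggingEventBuilder; addKeyValue attaches structured key/value data;
;; log emits the event to whatever backend is configured.
(import '(org.slf4j LoggerFactory))

(let [logger (LoggerFactory/getLogger "demo")]
  (-> (.atInfo logger)
      (.setMessage "request-for-help")
      (.addKeyValue "urgency" "high")
      (.addKeyValue "caller" "Commissioner Gordon")
      (.log)))
```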
It is straightforward to use from Clojure as-is, but there are some convenience macros released as a new library slf4clj. The aim is not to re-create the whole API, just offer some shorthand for the majority of use cases.
As you'd expect there are macros for debug
, info
, warn
etc and the args for each are deliberately the same as Clojure's ex-info, namely (<level> msg map)
or (<level> msg map cause)
.
Here is an example:
(require '[com.widdindustries.slf4clj.core :as log])
(log/info "request-for-help" {"urgency" "high", "caller" "Commissioner Gordon", "foe" "Joker", "target" "Gotham"})
If you're using clojure.tools.logging, you can keep your existing setup and just start writing slf4j 2.0 logging statements, and that will likely 'just work', in that the data will get printed out in some string format according to your pattern config. Separately, you can change your logging backend to something that does more with structured logs than just turning them into strings, like writing them as JSON for example.
If you're logging from a library it is possible to avoid having any logging dependency by using APIs included with the jvm.
java.util.logging (JUL) is the one you have most likely heard of. As you might guess though for something created in the early part of this century, it is a string-based logging API. This question on Stackoverflow goes into some details about JUL but in the threads is mention of a newer platform Logging facade called System.Logger - which might be interesting but doesn't seem to have gained traction AFAICT.
This is my first blog post since moving to Quickblog - a blogging tool powered by Clojure - thanks Borkdude!
Why having clojurescript in the classpath may lead to unexpected behaviour
The clojurescript maven artifact lists compile dependencies which include: data.json, tools.reader and transit-clj and transit-java.
However the clojurescript jar itself is something like an uberjar: It includes compiled data.json, tools.reader and transit-clj and transit-java namespaces inside itself. That means that although it declares dependencies on those libraries, when you use Clojurescript yourself, those libraries' artifacts are not used at all (despite being on your disk and in the classpath).
The output of this command shows the dependencies I am referring to:
clj -Sdeps '{:deps {org.clojure/clojurescript {:mvn/version "1.11.4" } }}' -Stree
The effect is that if you want to use a different version of one of those libraries compared to the one Clojurescript was compiled with, you can't. Well, you can't with the regular clojurescript artifact; you can with the non-aot version: org.clojure/clojurescript$slim {:mvn/version "1.11.4"}.
This was not an issue for a long time because those libraries didn't change. Now e.g. clojure.data.json has changed, hence why I hit the problem.
A handy technique I usually use to answer the question 'where on the classpath is namespace x.y.z getting loaded from?' is to call io/resource
on it. Doing that with data.json and Clojurescript in the classpath gives result as follows:
(clojure.java.io/resource "clojure/data/json.clj")
=> ".../.m2/repository/org/clojure/data.json/2.4.0/data.json-2.4.0.jar!/clojure/data/json.clj"
which is the wrong answer! I still don't understand why io/resource shows the file there, whereas calling require
on that ns returns the one embedded in Clojurescript.
One might ask why I would be using Clojurescript and clojure.data.json together in the same jvm. Well, in my case, in development I tend to have my server and client dependencies combined, so I run cljs compile and server side stuff in one vm. When deploying and testing, I separate them (meaning Clojurescript jar is only on the classpath when cljs source is being compiled). It is possible to run separate server and cljs jvm's locally of course, but that then means I can't have a single .nrepl.edn file for example. There could be other reasons for using these 2 together though, writing data-reader functions that use json possibly.
I raised this on clojure slack and now Clojurescript's maintainers are aware, so hopefully this gets fixed.
A fix is likely to involve shading
. This is where a library wants to use a fixed version of another library, so it copies the sources of that library into itself, but changes the namespaces/packages of the source library to be something different, and specific to itself.
My thanks go to Alex Miller for explaining this.
Consider:
(doseq [x (range 1000000)])
Since range returns a lazy sequence and doseq does not retain the head of the sequence, only a small portion of the sequence (one chunk at a time - range is chunked) is realized at any step of the doseq, and already-consumed elements can be garbage collected.
Now let's split the creation and consumption of the lazy sequence over chained promises. I am using the promesa library, which on the jvm is a thin wrapper over java.util.concurrent#CompletableFuture.
(-> (p/resolved (range 1000000))
(p/then (fn [xs]
(doseq [x xs]))))
It looks like it should be just as lazy. Nowhere is the code above retaining a reference to the head of the sequence - and yet it is!
The then
promise internally has a reference to the preceding promise and that promise has a reference to its result - the head of the sequence. When the first promise returns, the sequence is unrealized, but as the subsequent then
promise consumes the sequence it is realized and the head retained by the preceding promise!
What happens if there is a longer chain of promises? A promise executing in a chain only has reference to the preceding one. The preceding one has lost its reference to the next one upstream of itself, so in a chain just the current and immediately preceding one are not gc-able.
So? Well imagine you are streaming results out of a db for example - that might be modelled as a lazy seq, which is consumed through e.g. doseq and written out to a stream. Sounds like a nice memory-friendly solution, but if the db request results in a promise it might seem natural to keep chaining that result on.
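One way around this (a sketch - db-spec, stream-results and write-row! are hypothetical helpers) is to create and consume the lazy sequence entirely inside a single continuation, so that no promise result ever holds the head:

```clojure
(require '[promesa.core :as p])

;; Sketch: the promise chain carries the (small) db spec, not the lazy
;; sequence itself. The sequence is created and fully consumed inside one
;; callback, so its head is only ever a local binding and remains gc-able.
(-> (p/resolved db-spec)
    (p/then (fn [spec]
              (doseq [row (stream-results spec)]
                (write-row! row)))))
```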
Does this apply to
I haven't investigated.
Related to this topic is Stuart Sierra's Lazy Effects post, in which he says never mix laziness and side-effects. Good advice I would say.
For years now, the API has been alpha, by which we mean "Ready to use with the caveat that the API might still undergo minor changes". With the current release, the API of tick has been split into:

- the tick.core namespace, which will have no breaking changes in future releases
- the tick.alpha.interval namespace, which contains just the functions pertaining to Allen's interval calculus. As its name suggests, this is still alpha.

Apologies if the title of this post is misleading, I didn't want to stuff it with caveats. There are plans to revisit the interval functions and documentation, and some changes to the API may arise from that work.
If you are upgrading from an earlier (0.4.*) version, here is what needs to change:

- Replace tick.alpha.api requires with tick.core.
- For the + or - functions, note that these now only work where the arguments are all Periods or all Durations. To move a time by an amount, use >> or << instead, keeping the arguments the same. This change was made to make +/- analogous to clojure.core's +/-, where the arguments are commutative and associative. It also means there is now just one way to 'move' a time in Tick.
- The parse function has now been removed from the API. Instead, choose the appropriate function for the format of the string you need to parse, e.g. (t/date "2020-02-02"). The parse function was slow by definition, as it tried to find an appropriate entity matching the string it was given. It also meant that if for some reason the string you passed it was not in the format you expected, it might be parsed into a different date-time entity than you expected, which is never going to be good.
- For the interval functions, requiring tick.alpha.interval will address those.

That's it.
Making breaking changes may be frowned on in the Clojure community, but they seem entirely reasonable to me in this case because:

- For years now, the API has carried the alpha warning.
- Tick is an end-user kind of library: I doubt any other libs depend on it, so dependency hell is not an issue.

One last thing: the current version is RC, i.e. release candidate. IOW please kick the tyres and let us know of any problems. We have already been using it in production for a while, with no issues. After a period of a couple of months, we'll remove the RC label.
cljc.java-time (disclaimer: authored by myself) has the same API as java.time, but targets both Clojure and Clojurescript. It is implemented on top of a pure Javascript implementation of java.time
called JS-Joda.
It is also the underlying library for Juxt's Tick which provides a powerful date-time API beyond what java.time offers. In this blog post I am considering cljc.java-time instead of Tick, because I expect a larger proportion of readers will already have some familiarity with java.time's API.
Deja-fu is a new Clojurescript date/time library positioning itself as being for "applications where dealing with time is not enough of their core business to justify these large dependencies". The 'large dependencies' being referred to there are those required by cljc.java-time.
Deja-Fu's API offers a pure-cljs Time
entity and otherwise wraps the platform js/Date objects, via the goog.date API.
The long established Cljs-time is similar to Deja-fu in that it also wraps the goog.date API. I'm not measuring that here, but I would expect to see similar results.
Deja-fu appears to offer a trade off between a "good-enough date/time library, that's very lightweight" vs a "complete date/time library that's heavier". My feeling is that cljc.java-time is not meaningfully heavier and that light
date/time libraries are often heavy on developer time, bugs or both.
In my own experience of developing Clojurescript web applications with cljc.java-time, I haven't seen any performance problems - I'm already using Clojurescript and React so there's already a significant build size, but given my users' devices and network connections (reasonably up to date, but nothing special) the applications seem to perform very well.
I once did a talk introducing cljc.java-time and related libraries, but only briefly talked about build size - should I have said it was only suitable if date/time was so core to the app that "large dependencies" could be justified? FYI, build size is already discussed in the documentation.
Over the years since I released cljc.java-time I've come across (and generally ignored) the too large/heavy
PoV a couple of times, but Deja-fu's recent appearance has prompted me to put it head to head with cljc.java-time in an experiment.
For my experiment I have written two versions of a basic Clojurescript web application.
People are using Clojurescript in various places, including highly constrained environments like microcontrollers, but based on what I see the React webapp is what the majority are targeting and the use-case for which I would like people to have some more help when choosing a date/time API.
These apps have been deployed on the web so that tools such as PageSpeed can be used to test them. They are hosted on Firebase, but just because I already had a dummy project set up there. They don't use any Firebase APIs.
| Version | TTI (mobile) | TTI (desktop) |
|---|---|---|
| Deja-fu version | 2.1s | 0.6s |
| cljc.java-time version | 2.2s | 0.7s |
TTI (time-to-interactive) shown in the table above was taken from PageSpeed analysis. The cljc.java-time version is slower according to that analysis, but not in any meaningful way.
Consider that (the Javascript behind) cljc.java-time and React are fixed-size costs though. Being a small demo app, they are disproportionately big. If application code grows over time with features their relative size will reduce ofc.
The memory usage for both apps was roughly the same, as observed in a recent version of Chrome. Deja-fu is using js/Date objects, which have a single number field (representing an offset from the unix epoch). The cljc.java-time version is using LocalDate objects which have 3 numeric fields: year, month and day. Having the additional two fields could become significant if a large amount of date objects need to live in memory.
What about download size? Well, let's imagine that every time a user visits these apps, the Clojurescript code has been changed and released, so cannot be retrieved from cache and must be re-downloaded. It will only take 2-3 visits before the total amount of data downloaded across those visits is greater in the Deja-fu version. This is because in the cljc.java-time version, the underlying data/time lib is downloaded separately, and so is cacheable. This is very simple to set up and I would put it in the 'no brainer' category if data allowance is a significant issue for an app.
Maybe we could modularize the Deja-fu version so the library code can be cached over visits, but my main point here is YMMV.
Are there more metrics that we should look at here? Please suggest anything you think is significant.
Shown below is the code that is different between the two versions.
Two functions are required:

- tomorrow returns tomorrow's date. This is needed so the date picker will only let users pick future dates.
- interval-calc works out the number of days between 2 dates.

The source code for these can be found here.
(ns time-lib-comparison.js-date
(:require [lambdaisland.deja-fu :as deja-fu]
[time-lib-comparison.app-main :as app]))
(def millis-per-day (* 1000 60 60 24))
(defn interval-calc [event-date]
(let [now (deja-fu/local-date)
event-date (deja-fu/parse-local-date event-date)
interval-millis (- (deja-fu/epoch-ms event-date)
(deja-fu/epoch-ms now))]
(/ interval-millis millis-per-day)))
(defn tomorrow []
(-> (deja-fu/local-date)
(update :days inc)))
(ns time-lib-comparison.java-time
(:require [cljc.java-time.local-date :as date]
[cljc.java-time.temporal.chrono-unit :as cu]
[time-lib-comparison.app-main :as app]))
(defn interval-calc [event-date]
(-> cu/days (cu/between (date/now) (date/parse event-date))))
(defn tomorrow []
(-> (date/now)
(date/plus-days 1)))
The libraries have been weighed up against each other performance-wise, but of course that is only one part of the story.
Did you notice any bugs in the Deja-fu version? Go back and look if you want, I will reveal the issues in the next sentence.
Firstly, the Deja-fu version is expecting that there are 24 hours in a day - and generally that's right, except when crossing a DST boundary. The other issue is in the tomorrow
function of the Deja-fu version. It returns a date, but not tomorrow's.
If you already know java.time, then it's not just different method signatures you'd have to deal with in using Deja-fu, but actual semantics. For example, if you 'add' a month to the 31st January, what happens? A decision had to be made by the API authors, and that decision was made differently.
I did think about putting bugs in the cljc.java-time version too, but they looked like obvious typos, like (plus-days 2) for tomorrow.
I have chosen some requirements for the app that, in the Deja-fu version, force me to use the much-maligned js/Date API. If you knew you'd be going down that road from the start, you might think twice about choosing a lightweight date/time library, but generally we can't be sure what requirements will come our way.
Also, if the app needed to do custom parsing and formatting from/to strings, for the cljc.java-time version I'd need to bring in a JSJoda addon, which takes TTI up to 2.5 seconds on mobile, whereas with the Deja-fu version TTI would be the same.
Tempo is my work-in-progress attempt to make a date-time API with the common parts of java.time and the new platform API for Javascript called Temporal
(to be available in browsers sometime soon, possibly this year).
The fact that Temporal is a platform API is the big reason of course.
My feeling is there is sufficiently large overlap between Java and Javascript's platform date-time APIs to make a useful library that will suit cross-platform library authors needing some basic date/time functionality such as Malli and perhaps also as a basis for a 'lite' version of the Tick library.
Will cljc.java-time become irrelevant in a Tempo future? I don't think so because I think for many it will continue to be a solid, familiar choice that comes with minimal overhead and maximum stackoverflow-ability. Not everyone loves java.time but it's usually good enough.
The main take-away I would hope readers get from this is that cljc.java-time should not be dismissed out of hand for some common use cases, just as we don't generally pick C over Clojure because an equivalent program might be less resource intensive.
There may well be Clojurescript applications where cljc.java-time would be inappropriate, but my feeling is for typical Clojurescript web applications it's not an issue.
I'd definitely be interested to hear about your opinions on this and Clojurescript build sizes in general. What are you shipping? How did you make decisions about what build size was acceptable (including the use of Clojurescript itself)?
Feel free to use this thread on Clojureverse to comment.
interop (accessing a Javascript object's methods and properties) and that's what I'm going to look into here. IMHO Clojurescript is somewhat lacking when it comes to official documentation, hence this blog post, and the need to quote from twitter:
In concrete terms, sounds like:
✓ (.-length "abc")
X (.-length #js {:length 3})
✓ (goog.object/get #js {:length 3} "length")
— Mike Fikes (@mfikes) July 5, 2017
I'm going to look into the tradeoffs of what David and Mike say there.
Firstly, I'll try to make a clear distinction between JS data vs API. A data object is any object you could round-trip through JSON/stringify => JSON/parse. An object that appears to be a data object because it only contains properties (i.e. no methods) may not be one, because those properties might be getters or setters (e.g. the length property of the String "foo" referred to in the twitter post is a getter).
If an object came to your program via a call made to a JS library or API, it's most likely not a data object unless the library documentation explicitly says so.
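That distinction can be sketched in plain JS (the apiLike object here is a made-up example, not from the post): a getter is evaluated by JSON.stringify and comes back as a plain value, so the round-tripped copy reports the same value while being a genuinely different kind of object.

```javascript
// Looks like a data object (one property, no methods), but `length`
// is a getter, not stored data:
const apiLike = { get length() { return 3; } };

const roundTripped = JSON.parse(JSON.stringify(apiLike));

// Both report the same value...
console.log(apiLike.length, roundTripped.length); // 3 3

// ...but only the round-tripped copy is genuinely data:
console.log(typeof Object.getOwnPropertyDescriptor(apiLike, "length").get);      // "function"
console.log(typeof Object.getOwnPropertyDescriptor(roundTripped, "length").get); // "undefined"
```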
Secondly, be aware that when doing advanced compilation with Clojurescript, the compiler will change all the variable and function names in your program to reduce overall build size. This works automatically for regular Clojurescript code, but when doing interop
extra configuration is sometimes required which takes the form of type hints in the code or externs files.
Ok, with those two points in mind, let's proceed.
I find the (.-length "foo")
item in Mike's list interesting. According to the advice, we would also choose (.-length #js[])
when using the API of Javascript arrays.
Consider the alternative:
(goog.object/get #js[] "length")
; => returns 0
This demonstrates it is possible to use goog.object
to access API properties. So, we have what appears to be just a stylistic choice between this and (.-length #js[])
.
Why choose either one?
Firstly, note that length is a special-case property name: length never needs type hinting, because the names of properties that are part of the in-built JS or DOM API objects are always specifically left alone by the Clojurescript compiler. That works whether we are referring to the length property of a Javascript array or the length property of some JS object you defined yourself: wherever it occurs, if the property name is part of the core JS API, it is left untouched.
So to be more general, let's consider a property name that does not appear in the standard JS API, xxx. Working with the API of foo, we would then have a choice of (.-xxx foo) vs (goog.object/get foo "xxx").
The goog.object one will survive advanced compilation (meaning it is compiled to foo.xxx or equivalent), whereas the dot-access version will need a type hint (as in (.-xxx ^js foo)) to avoid xxx being renamed under advanced compilation.
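A plain-JS sketch of why the string form survives (foo and xxx are the hypothetical names from above; the renamed symbol mentioned in the comments is illustrative, as Closure picks its own short names):

```javascript
const foo = { xxx: 42 };

// Dot access compiles to a symbol the optimizer is free to rename:
// under :advanced, `foo.xxx` may become e.g. `foo.a`, and if `foo`
// came from un-optimized foreign code, that lookup misses at runtime.
const viaDot = foo.xxx;

// goog.object/get compiles down to string-keyed access, and string
// literals are never renamed, so this always finds the property:
const viaString = foo["xxx"];

console.log(viaDot, viaString); // 42 42
```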
So +1 for the goog.obj approach so far because less config is required.
However, in working with the API of some JS object, it's quite common to both access properties/getters and call methods:
(let [foo ^js (some-fn)
bar-prop (.-bar foo)]
(.methodFoo foo (inc bar-prop)))
We could have accessed bar with goog.object/get, but this code is being consistent in using only dot-access for the API of foo. We could also have accessed methodFoo with goog.object/get (and then invoked it), but that would look pretty ugly.
So, in sticking to David Nolen's advice of 'dot access only for APIs', this code makes a clear statement that it is working with the API of foo, not a data object called foo - regardless of whether it is methods, properties or both that we need to use. This comes at the cost of having to remember to put type hints in.
If we forgot to type hint foo in that example, the properties bar and methodFoo would be renamed and the code would fail at runtime. For this reason, if you use advanced compilation, you must test your code having advanced-compiled it first. Advanced compilation is slow, so during development I generally avoid it, but continuous integration tests and beyond should use the same compilation level as production.
If Google Closure did become able to optimise regular JS code (i.e. code that is now foreign), then code that uses strings to access JS APIs (goog.object/get and equivalent libraries) would be broken.
I've come to think type hints aren't so bad: now that they are documented, it's easy enough to understand that you just need to add ^js when you first see the js object in scope.
I prefer dotted access for API access because it is stylistically distinct from data access, which makes my code easier to understand. It also future-proofs it against improvements in the compiler. This comes at a cost of having to add type hints, but that cost is low and is mitigated by running tests under advanced compilation.
There are libraries that have been created for working with JS APIs, such as js-interop. These aim to provide a trade-off between dotted access and goog.object/get: type hints are not required, and they aim to provide a nice, straightforward syntax. Personally I don't use these libraries because I think regular dotted access is good enough.
If working with JS data, i.e. not API, then an alternative to goog.object is Cljs Bean.
So, now that's all cleared up, which dot-access is preferred: a.b.c or (.. a -b -c)? Dot access in the style of a.b.c does have an issue, which is a shame (until fixed) because to me that style seems pretty tasteful.
Would you like to see official documentation on this topic? If so, raise an issue here
Thanks to Thomas Heller for providing the point about future-proofing.
Looking around the internet for pointers, I found some Clojurescript-Firebase wrapper libraries I am not too keen on, and no great demo apps either... so I created my own demo 'todo-list' app.
The README there contains a small list of instructions that should get you up and running quickly and deploying the app to the internet in no time. Then you can spend many happy hours building from there.
Googling for 'Clojurescript Firebase' I found a couple of 'wrapper libraries', re-frame-firebase and cljs-firebase-client.
I've written before about wrappers - tl;dr approach with caution.
There are obvious issues with the wrappers mentioned above:
The cljs-firebase-client lib is billed as a library, but it looks very unfinished. It might have some snippets you could borrow, especially if you want to use Shadow.
The re-frame-firebase lib has an api to cover pretty much all of Firebase I think, so there's a lot of code. It also has some dependencies I'm not sure I want, and of most concern is the way state from Firebase (auth-state, database contents) is stored in the Re-frame 'app-db'. The authors note this is a potential bug, and indeed it is. It's also not necessary, as I'll demonstrate below.
The Firebase docs are great and got me a non-cljs 'hello, world' very easily. Could I bring in Clojurescript without introducing much new complexity? Well, I've attempted that in the todo app - see what you think.
One thing you'll notice with Firebase is that you pick and choose the APIs you want to include. For example if you want to use a database, you get a choice of two different ones. This is nice and in my todo-list app, I use just 'auth' and 'realtime database', so that's all you'll see any code for.
The todo app requires users to authenticate with a Google login.
Firebase has a simple api to trigger this. The interesting bit from a Clojurescript point of view is listening for the user information once they have successfully authenticated:
(defn user-info []
(let [auth-state (r/atom nil)]
(.onAuthStateChanged (auth)
(fn [user]
(reset! auth-state (user->data user))))
auth-state))
This function returns a Reagent atom that will contain user information when it is available. We can call this function at any time and use the result in a reactive context, such as in a Reagent component - it doesn't matter whether the user has already authenticated at the time the function is invoked.
There's no need to store the user data in the app-db. Doing so would leave us with more state to manage, clean up and so on. There's no Re-frame involvement required, but we could tie this function into a re-frame subscription as I'll demonstrate next.
Pushing data to the database is pretty straightforward and well documented and you can imagine how those calls might be wrapped up as Re-Frame 'effects', as they have been in the 'todo' demo app.
Listening for data with Re-frame is a bit more interesting though. Similar to the atom that contained the auth data above, we can have a function to return a Reaction (same thing as reactive atom effectively) containing the current value at some path in the Firebase database:
(defn on-value [{:keys [path]}]
(let [ref ^js (db/fb-ref path)
val (r/atom nil)
callback (fn [x] (reset! val (->clj x)))]
(.on ref "value" callback)
(reagent.ratom/make-reaction
(fn [] @val)
:on-dispose #(do (.off ref "value" callback)))))
The 'Reaction' object returned from this function will be updated with the current value whenever it changes - nice! Now, if we want to use that as part of a Re-frame subscription, we can call it from a signal function:
(rf/reg-sub ::foo
(fn this-is-the-signal-function [[_ args]]
{:bar (on-value args)})
(fn this-is-compute-function [{:keys [bar]}]
;... do some further transform with 'bar' value from db
))
If you haven't used signal functions before, it's well worth a read of the hefty docstring to understand them. The todo-list app demonstrates this in action.
So... we can read and write data, and the Re-frame 'app-db' is nowhere in sight. I haven't got anything against the app-db - but I don't want to put things in there unnecessarily, because for any data in there, you have to understand what effects put it there, how its lifecycle is managed and so on. I might write more about this in a later post.
Firebase stores the data in the cloud and keeps a local copy of that sync'ed in the browser's store in case the connection drops. Re-frame handles doing minimal computation work, de-duping subscriptions etc. So... let's just lean on all that awesome machinery!
What about data the user is editing? In the todo app, that is component-local state only when the user is actually changing it, and is sent off to Firebase via re-frame/dispatch
at the appropriate time.
In fact, in this little example you might see Re-Frame as overkill and just stick with Reagent.
One last point about the Firebase database - it's json data of course, so it's not going to work to create Clojure maps with the usual edn goodies like non-string keys, namespaced keywords & etc. A nice approach to stay in JS/JSON land is to use cljs-bean for your data, rather than native Clojure datastructures.
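The restriction is easy to demonstrate in plain JS (a minimal sketch, independent of any Clojure library): JSON object keys are always strings, so richer EDN-style keys have no direct representation.

```javascript
// JS object keys are strings; a numeric key is silently stringified:
const m = { 1: "a" };
console.log(JSON.stringify(m)); // '{"1":"a"}'
console.log(Object.keys(m));    // [ '1' ]

// So EDN niceties such as keyword or namespaced-keyword keys would
// have to be encoded as strings (e.g. ":user/id") to live in JSON.
```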
The todo-list README demonstrates doing a build with advanced compilation.
The build includes the :infer-externs option, which uses the ^js tag metadata to let the compiler know to leave the calls to the Firebase APIs as they are.
Keep these compiler debug opts handy if you do get any problems with the minified build (ie you see error messages in the browser console like 'x.y is not a function'):
(def debug-opts
{:pseudo-names true
:pretty-print true
:source-map true
})
Firebase seems pretty nice for a hobby project - and maybe for more serious apps too. Using it with Clojurescript and Re-Frame is straightforward and a natural fit. For example, Firebase onValue
lets you listen for the latest value in some part of the database and that is easily hooked into a Re-frame subscription so the view magically updates whenever the database does. Simples!
Some points of note:
- ^js type hints are all that's required.

Some in the Clojure community would say that if a Java or JS API is good, then it needn't be 'wrapped' in a library, because plain interop code is idiomatic and any wrapping library might:
Those concerns all seem reasonable, but consider this java.time wrapper. Every one of the above concerns is addressed: The Clojure API is identical to the underlying Java API (addresses first 3 points) and the underlying API is stable (last point).
Okaaaay ... but if it's so similar, why use it at all?
The library's main reason to exist is because it works cross-platform, a huge win on my current projects - but let's leave that aside for the moment and consider the jvm-only POV.
- Instead of writing anonymous interop wrappers like #(.bar %) or #(.isAfter %), you'll use a properly namespaced clojure function, x.y/is-after - much better!
- Predicates such as (local-date? x) are provided. This could be used for spec definitions, for example.
This is a very recent addition (0.1.12) and takes a bit of explaining. In java.time, you have an Instant, which is equivalent to a java.util.Date in that its instances represent the start of a nanosecond on the timeline. Now, the tricky thing about Instants is that the only field/data they contain is an offset from the UNIX Epoch (midnight, Jan 1st 1970, UTC). Instants know nothing about years, months, days, hours etc. In the lingo, they are not 'calendar-aware', so you cannot add a year to an Instant or ask for its day of the week. BUT... the API kind of suggests that might be possible.
Firstly, let's see the output of Instant#toString:
2013-05-30T23:38:23.085Z
Hold on, how and why is that printing years, months & etc? Well, an Instant can be converted into a calendar representation and that's what the toString()
implementation does. That auto-conversion is the exception rather than the rule though, if you try to format an Instant as 'yyyy-mm-dd' you'll see what I mean.
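The same idea can be seen with plain js/Date, which is also, underneath, just an epoch offset (the example timestamp is the one from the toString output above):

```javascript
// A js/Date stores only an epoch-millisecond offset, much like
// java.time's Instant (at millisecond rather than nanosecond precision):
const d = new Date(1369957103085);
console.log(d.getTime()); // 1369957103085

// Asking it calendar questions means implicitly picking a zone;
// toISOString() always picks UTC, analogous to Instant#toString:
console.log(d.toISOString()); // "2013-05-30T23:38:23.085Z"
```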
On a related note, try adding a year to an Instant:
(-> (Instant/now)
(.plus 1 ChronoUnit/YEARS))
It compiles ok, but at runtime you'll get an exception: Unsupported Field: YearOfEra.
So what's the problem here? People should just learn this basic fact about the java.time API before using it, right?
In practice, I see this issue about Instant and calendar-awareness come up a lot - via colleagues, github issues, stackoverflow etc.
My guess is that a lot of people only need to work with dates infrequently enough that they don't feel it's worth investing time going through the tutorials, or if they do, the nuances quickly fade through lack of practice.
Having seen this problem in the wild again and again for many years I have decided to take action!
If you now do some erroneous thing with Instant in cljc.java-time, like this:
(-> (cljc.java-time.instant/now)
(cljc.java-time.instant/plus 1 cljc.java-time.temporal.chrono-unit/years))
You get an exception with message:
Hi there! - It looks like you might be trying to do something with a java.time.Instant that would require it to be 'calendar-aware',
but since it isn't, it has no facility with working with years, months, days etc.
To get around that, consider converting the Instant to a ZonedDateTime first or for formatting/parsing specifically,
you might add a zone to your formatter. see https://stackoverflow.com/a/27483371/1700930.
You can disable these custom exceptions by setting -Dcljc.java-time.disable-helpful-exception-messages=true
That message alone should at least prevent a lot of the github issues and questions I get... let's see!
I am adding that message because I think it's unlikely I could get java.time's messages changed; I don't even know how I would do that. I do know how to make this suggestion for the new date-time API being made for the world's most popular programming language though: raise an issue! That improved error message may be my greatest contribution to humanity to date ;-)
These are traditional wrappers and definitely come with trade-offs. There is a section in the tick README, Should you use tick for date-time work? which tries to offer an objective guide on the choices available.
In my work we use date logic a lot, which I think means it's an environment where it's worth learning to use a wrapper because of the extra+improved API. That being said, I use a wrapper for 80% of regular date-arithmetic - for things I consider more obscure or esoteric, or where performance might suffer, I drop to cljc.java-time (which tick is using under the hood) - and in very rare cases - plain interop.
If you have any thoughts/questions, please let me know ;-)
Original content from here onwards
If you are involved in the manufacture of computers in Europe then you're probably already aware that your friendly local 39th technical committee has its own toy scripting language, and that language's support for dates and times is somewhat lacking (good talk on that). Well, work is currently underway to change that situation, and in this post I'm going to compare the new API, called Temporal
, to a similar effort from a few years back that was made for Java
that resulted in a platform API called java.time
.
At the time of writing Temporal
(sha) is 'Stage 2' meaning it's still a work in progress. The Temporal authors have created a survey for any feedback you might have.
There is already an open issue in the Temporal github to document comparison to other date-time libs and since that has been open for a year and a half already, I thought I'd make a start at least for comparison to one other lib, java.time
aka (threeten), available to the JS world as JSJoda.
I maintain a date-time library that targets both Javascript and the JVM (Java runtime), cljc.java-time.
It has the API of java.time
and on JS platforms uses JSJoda under the hood.
So the value proposition is that users only need to know one API for both platforms and that the same date-time logic can be written to target either or both.
Great though JSJoda is, it is not the platform API of Javascript of course and is by necessity a pretty chunky lot of JS code. If Temporal gets implemented on JS platforms then maybe JSJoda could be implemented on top of it, or my lib could drop JSJoda and use Temporal directly.
Note: both Temporal and JSJoda are included in this page, so you can open your browser's JS console and paste in all of the code snippets.
tl;dr IMO Temporal and java.time are very similar overall: they have mostly the same set of entities, and there is support for going to and from the majority of ISO8601 representations. Temporal is a smaller API overall, but of course the gaps could be filled by user libraries if desired.
I start by comparing the main entities and then go on to look at some specific use-cases that I think are interesting. IOW this not a full comparison of every entity and method, but if there's something important you think is missing, please mention it in the comments at the end.
Temporal has a subset of the entities of java.time (see table below), but the entities it does have are what I guess the authors consider to be the fundamental ones. Some names are different so in the discussion after, I'll only use the java.time
entity names for clarity.
java.time | Time Literal Example | Temporal |
---|---|---|
Instant | #time/instant "2018-07-25T07:10:05.861Z" | Absolute |
ZoneId | #time/zone "Europe/London" | TimeZone |
LocalDateTime | #time/date-time "2018-07-25T08:08:44.026" | DateTime |
LocalDate | #time/date "2039-01-01" | Date |
LocalTime | #time/time "08:12:13.366" | Time |
YearMonth | #time/year-month "3030-01" | YearMonth |
MonthDay | #time/month-day "12-25" | MonthDay |
Period | #time/period "P1D" | Duration |
Duration | #time/duration "PT1S" | Duration |
DateTimeFormatter | n/a | (NOT PRESENT) |
Clock | n/a | Temporal.now |
Month | #time/month "JUNE" | (NOT PRESENT) |
Year | #time/year "3030" | (NOT PRESENT) |
ZonedDateTime | #time/zoned-date-time "2018-07-25T08:09:11.227+01:00[Europe/London]" | (TBD) |
DayOfWeek | #time/day-of-week "TUESDAY" | (NOT PRESENT) |
OffsetDateTime | #time/offset-date-time "2018-07-25T08:11:54.453+01:00" | (TBD) |
OffsetTime | #time/offset-time "08:12:13.366+01:00" | (NOT PRESENT) |
These entities represent a span of time which is not attached to a timeline.
java.time allows for negative spans, whereas Temporal does not yet - but looks like it will.
A java.time Duration instance stores time as an amount of seconds, for example 5.999999999 seconds.
A java.time Period instance stores amounts of years, months and days, for example -1 years, 20 months and 100 days.
A Period of 1 day is not equivalent to a Duration of 86400 seconds (24 hours) of course, because 1 day is not always 24 hours, due to things like DST & leap seconds.
Temporal has combined these two entities into one, called Duration. Java has an equivalent entity PeriodDuration in the official addon lib for java.time.
Here is an example of adding a non-24-hour day in java.time:
z = JSJoda.ZoneId.of('Europe/Berlin')
zdt = JSJoda.LocalDateTime.parse('2019-03-31T00:00:00').atZone(z)
zdtPlusDay = zdt.plusDays(1)
JSJoda.Duration.between(zdt, zdtPlusDay).toString()
=> "PT23H" (means 23 hours)
The equivalent operation with Temporal is done with LocalDateTime, then converting the results to Instants:
dt = Temporal.DateTime.from('2019-03-31T00:00:00')
dt2 = dt.plus({days: 1})
z = Temporal.TimeZone.from('Europe/Berlin')
z.getAbsoluteFor(dt2).difference(z.getAbsoluteFor(dt), {largestUnit: 'hours'}).toString()
=> "PT23H" (means 23 hours)
Temporal just uses plain numbers where java.time would use the Month and DayOfWeek entities. Like java.time though, numbering starts at 1.
Partly this may have been done because you don't get the same compile-time checks with JS, but also I would guess this makes Temporal more easily work with other calendar systems. I imagine when working with non-Gregorian calendars in java.time one could avoid using Month and DayOfWeek.
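For contrast, the legacy js/Date API numbers months from 0, a classic source of off-by-one bugs that both Temporal and java.time avoid; a quick plain-JS sketch:

```javascript
// Legacy js/Date: months are 0-indexed, so 2 means March:
const d = new Date(Date.UTC(2019, 2, 31));
console.log(d.getUTCMonth());              // 2
console.log(d.toISOString().slice(0, 10)); // "2019-03-31"

// Temporal and java.time both report month 3 for the same date.
```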
This entity represents a point on the timeline, in a place, an example being
JSJoda.ZonedDateTime.parse("2018-07-25T08:09:11.227+01:00[Pacific/Honolulu]")
Temporal can parse that same string to an Instant, but it loses the zone/offset info:
Temporal.Absolute.from("2018-07-25T08:09:11.227+01:00[Pacific/Honolulu]")
In place of this, we can create objects containing a zone and either an Instant or a LocalDateTime.
There is a draft proposal for ZonedDateTime in Temporal
I've never had a use for OffsetTime, so let's skip over it.
OffsetDateTime functionality is contained within ZonedDateTime, so I'm not considering it separately.
I am going to look at how to achieve various use-cases with both APIs, choosing examples I think are interesting.
Instant is not aware of calendars (e.g. DST, months, leap seconds etc), it's just a straightforward amount of nanos since an arbitrary point in time. One of the major noob java.time question topics stems from not being aware of this. For example trying to print the day of the month from an Instant doesn't work unless you provide a zone, or trying to add a year to an Instant - that kind of thing.
Java.time refers to this topic as human vs machine time whereas Temporal refers to entities as being 'Calendar-aware' or not, which seems a more self-explanatory definition.
Going from calendar-aware to Instant (non-calendar-aware) can involve disambiguation. For example on a DST change, a wall-clock time can happen twice (the clocks 'go back') or not at all (the clocks 'go forward').
Here is an example of where we have a wall clock time that doesn't exist in a zone and are converting it to a ZonedDateTime. See how we input the hour as '2', but it comes out as '3':
z = JSJoda.ZoneId.of('Europe/Berlin')
JSJoda.LocalDateTime.parse('2019-03-31T02:45:00').atZone(z).toString()
=> "2019-03-31T03:45+02:00[Europe/Berlin]" (which is Instant "2019-03-31T01:45:00Z")
Temporal docs section on resolving ambiguity
Temporal has the same default behaviour as java.time, but you can choose other options:
tz = new Temporal.TimeZone('Europe/Berlin');
dt = new Temporal.DateTime(2019, 3, 31, 2, 45);
tz.getAbsoluteFor(dt, { disambiguation: 'earlier' }); // => 2019-03-31T00:45Z
tz.getAbsoluteFor(dt, { disambiguation: 'later' }); // => 2019-03-31T01:45Z
tz.getAbsoluteFor(dt, { disambiguation: 'compatible' }); // => 2019-03-31T01:45Z
tz.getAbsoluteFor(dt, { disambiguation: 'reject' }); // throws
A similar example would be finding out the wall clock time of when a day starts - it's not always midnight!
Here we find out that the day starts at 1 a.m.
z = JSJoda.ZoneId.of("America/Sao_Paulo")
JSJoda.LocalDate.parse('2015-10-18').atStartOfDay(z).toString()
=> "2015-10-18T01:00-02:00[America/Sao_Paulo]"
I'll leave it as an exercise for the reader to do the same in Temporal ;-)
Since Instant isn't aware of calendars, you can add or take away seconds, but not months or years.
What about days though? As I said, a day is not 24 hours, but wrt Instant, Temporal treats it like it is:
Temporal.now.absolute().plus({ months: 5 }); // fail - as expected
Temporal.now.absolute().plus({ days: 5 }); // no fail - days in this context means 24 hours
In fairness, so does java.time, so both APIs are consistent on this
JSJoda.Duration.ofDays(1).toString()
=> "PT24H"
Example of truncating a java.time Instant to whole hours
JSJoda.Instant.parse("2020-07-30T21:29:54.697Z").truncatedTo(JSJoda.ChronoUnit.HOURS).toString()
=> "2020-07-30T21:00:00Z"
Although Temporal has facilities for rounding using the with method, this just works on the fields a Temporal object has. Since Instant only has a nanos field, it doesn't have a with method, so to achieve the above we go via a calendar-aware object:
z = Temporal.TimeZone.from('UTC')
dt = z.getDateTimeFor(Temporal.Absolute.from("2020-07-30T21:29:54.697Z"))
rounded = dt.with({minute: 0, second: 0, millisecond: 0, microsecond: 0, nanosecond: 0 })
z.getAbsoluteFor(rounded)
=> 2020-07-30T21:00Z
java.time has DateTimeFormatter, which is used for converting to and from any string representation.
Temporal doesn't have an API for parsing, although there is an issue .
For printing, Temporal provides some facilities via toString - see here - and Intl.DateTimeFormat, as in this example.
There is no direct equivalent of java.time's temporal package in Temporal at present.
You can access the fields of entities though, for example:
Temporal.DateTime.from('2019-03-31T00:00:00').day
=> 31
Temporal and java.time are fundamentally similar. Temporal has a smaller API and possibly that results in something that's easier to learn, but results in more verbose code. Personally I value ease-of-learning much more.
Date-time logic is just like math: you can type stuff in and you'll get answers... but are they the right ones? Have you got a type system that will let you know that you got it wrong? (Please let me know if you do!) But it's not like math because its fundamentals are way more complex... so I just want to know one API and know it really well. Because of that, I created a date-time library to target both Java and Javascript.
So, of course I would have been happy if proposal-temporal had just decided to copy java.time! Well, they haven't, but that's not a show-stopper for the lib by any means. It's great that Temporal is happening at all, and hopefully it will make its way to our browsers and other JS runtimes in the near future.
tick is a Clojure(Script) library for working with time. cljc.java-time is used by tick and provides a cross-platform version of the java.time api.
In the latest releases, these 'just work' on Shadow-cljs, for example:
echo '{:deps { tick {:mvn/version "0.4.25-alpha"} thheller/shadow-cljs {:mvn/version "2.9.8"} }}' > deps.edn
echo '{:deps {}}' > shadow-cljs.edn
echo '{}' > package.json
npm install shadow-cljs --save-dev && npx shadow-cljs browser-repl
In the repl:
cljs.user=> (require '[tick.alpha.api :as t])
nil
cljs.user=> (t/today)
#time/date "2020-05-31"
Why is this news? Well, previously Shadow and other npm-using Clojurescript users of Tick/cljc.java-time had to include an extra shim library and manually include the js-joda dependencies in their package.json. A desire to move to the latest (and now scope-named) '@js-joda/core' packages, a fix in the latest Clojurescript release and an increase in my understanding of how to package Clojurescript libraries has resulted in these improvements. Note, there are still a few need-to-knows for Clojurescript users, but not too many!
In other news cljc.java-time now works on a third platform, babashka!
The Reagent project, one of the better-known cljs projects that depends on a javascript lib, has abandoned Clojurescript's dependency mechanisms (foreign-libs or deps.cljs) entirely, making users bring their own React - see this issue for details.
Original content from here on :
If you are authoring a Clojurescript library that doesn't depend on any regular Javascript (JS) code, transitively or otherwise then things are pretty straightforward: maven-package your library and put it in Clojars - job done!
Still reading? OK, well things are not as straightforward when libs do depend on JS code. As of this writing there is no guide I know of that explains everything you'd need to know, so think of this as a first draft of such a guide. Ideally the content here could go on to be included in the Clojurescript Site, as a solution for open tickets such as this. Also note that what I describe here all works since the April 2020 release of Clojurescript - There were some changes in that release that significantly improve the situation regarding libraries.
The main consideration is how to package a library so that Clojurescript users can consume it, whatever their build setup. To that end, I have created this companion repo to demonstrate different Cljs build setups (Shadow, target-bundle, cljsjs, npm-deps) all consuming the same (npm-depending) library and all targeting the same thing: a browser build with advanced optimizations. The library used as an example is one of mine, called cljs.java-time. This uses code from a single, standalone npm library 'js-joda' - so about as straightforward an example as you can get.
In Clojure (JVM) land, consuming maven-packaged Java libraries is seamless and ubiquitous. If you find a Clojure library that looks useful and that library ultimately depends on one or more Java libraries, then installation will not be an issue. The Java libraries will get pulled down along with all the Clojure libraries, and the overall artifact size won't likely be a major concern, within reason.
With Clojurescript, it's a bit different. Having a low overall artifact size may be crucial to your users, for one thing. For another, regular JS libs are stored in NPM, which Clojure dependency tools like tools.deps do not currently work with.
Clearly having Clojurescript libraries depending on plain JS is possible, and has been since the early days of Clojurescript via `:foreign-libs`, but for good reasons the story doesn't end there. JS-using libraries (like Reagent for example) usually have something in the README to explain how to consume them - because it's not as straightforward as on the JVM.
A maven-packaged ecosystem exists that shadows a lot of stuff from npm, called Cljsjs, so why would anyone bother complicating their build with a second dependency/build tool?
IOW - if people are using more than one or two JS libraries in their Clojurescript build, then using npm will likely be their preferred solution. Shadow-Cljs is one popular way to set that up and the newly released `:bundle` target in Clojurescript is an alternative.
We can't say what proportion of Cljs users are using npm in their build, but we can try to use some proxies to guess. The Clojure survey doesn't ask this specifically unfortunately, but what we can see is that there is still plenty of activity in the Cljsjs repo. It would be good to see the graph over time of Clojars downloads of Cljsjs packages too... tbd. I would expect to see use of Cljsjs diminish over time, but my feeling is that it's not going to be abandoned any time soon.
It is an option to have your library not depend on any Cljsjs libraries and have instructions in your readme for non-npm users that they'll need to add the Cljsjs dependencies themselves. This might be ok if the underlying npm libs are few and are not likely to change much - otherwise upgrading will be somewhat painful for users.
Another consideration, if your lib does depend on Cljsjs libraries, will be users targeting Node. Unless they are using Shadow, `:foreign-libs` will be picked up, but they work a bit strangely because every 'require' results in the foreign lib being evaluated - so prepare for some confused users or take steps to mitigate it, as I did with js-joda.
In summary, you can decide if the (potential) users of your lib are likely to be the ones using npm already or not. If they might not use npm, and the Cljsjs dependency tree doesn't look too hairy, then perhaps you would decide to have Cljsjs dependencies for the time being.
Assuming you do want to package one or more Cljsjs libraries (whether your lib declares a dependency on them or not), you need to look at packaging 'foreign-libs' that will contain all the code that would have come from npm (if they don't exist already in Cljsjs of course). What makes this code 'foreign' in Cljs parlance is that it is not written in Google Closure style (Google Closure is a key tool the Clojurescript compiler uses under the hood). The npm code you want to consume may actually be amenable to Dead Code Elimination by Closure, but let's ignore that for now.
Create a library that just packages one npm library and submit a PR to cljsjs. Cljsjs has a helpful wiki with guides and explainers.
That Cljsjs library will contain a `deps.cljs` file that looks something like this:
{:foreign-libs
 [{:file "cljsjs/js-joda-core/js-joda.inc.js",
   :provides ["@js-joda/core"],
   :global-exports {"@js-joda/core" JSJoda}}]
 :externs ["cljsjs/js-joda/common/js-joda.ext.js"]}
There are more opts you might need, see the full list here for more info.
Note though that I am packaging `:externs` here. Users of your library should use the compiler opt `:infer-externs true`, but for the cljs.java-time library that is not sufficient to survive advanced compilation. You can use the library consumers test to see if you need hand-rolled externs for your library or not.
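To make the hand-rolled externs idea concrete, here is a minimal sketch of what such a file might contain. The property names below are illustrative, not the actual contents of js-joda.ext.js:

```javascript
// Hypothetical externs sketch: declares names that advanced compilation
// must not rename. The bodies are empty - only the shape matters to Closure.
var JSJoda = {};
JSJoda.LocalDate = function () {};
JSJoda.LocalDate.prototype.plusDays = function (days) {};
JSJoda.LocalDate.prototype.toString = function () {};
```

Closure reads a file like this at compile time and leaves the listed names intact in the advanced-compiled output.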
One important point for npm-compatibility is what you put in `:provides` and the keys of `:global-exports` - the name (in this example `@js-joda/core`) must exactly match the npm package name.
With this setup, your library code ns can require the JS lib like so:
(ns my-cool-lib
(:require ["@js-joda/core" :as joda]))
Importantly, this require will work for both foreign-lib/Cljsjs users and npm users.
Now you have a mvn-packaged foreign-lib, your library pom.xml can depend on it as it would any other non-npm lib.
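For example, a consumer (or your library's own build) can pull the foreign-lib in via ordinary maven coordinates - the coordinates and version here are illustrative:

```clojure
;; deps.edn sketch - the Cljsjs artifact is just a maven dependency
{:deps {cljsjs/js-joda {:mvn/version "1.11.0-0"}}}
```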
Package a `deps.cljs` file with your lib with contents like this:
{:npm-deps {"@js-joda/core" "1.12.0"}
:externs ["cljsjs/js-joda/common/js-joda.ext.js"]}
This example is from `cljs.java-time` again and in this case just lists a single npm library and the required npm version. The same externs file that was packaged with the Cljsjs library is included here as well: npm-using users of your library will exclude the Cljsjs dependency to avoid getting the foreign-lib (Shadow users won't need to, as Shadow ignores foreign-libs), but they will still need the externs.
Note that an `:npm-deps` dependency in deps.cljs is not tied to Google Closure processing (it was in the past).
The best thing I can do to explain the possibilities here is to point you to the library consumers test. This actually demonstrates all of the possible ways your library could be consumed by users targeting browsers and examples of how to do so.
The test includes an example compiling with the `:npm-deps true` option, but if that doesn't work, don't fret - it is not recommended.
React wrappers aside, afaik there aren't that many Cljs libs depending on plain-JS libraries right now - in some cases that might be because it has been seen as complicated, but as this guide shows, it's not rocket science.
I would advocate a change to Clojurescript that introduces a new compiler opt `:use-foreign-libs-from-deps?` that defaults to `false`. That would mean the non-Shadow npm users didn't have to track down and exclude Cljsjs dependencies from their build, and it would hopefully act as a clear statement that Cljsjs is no longer the recommended path.
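Until something like that exists, the workaround looks roughly like this - npm users excluding the Cljsjs foreign-lib when depending on your library (coordinates and versions below are illustrative, not real):

```clojure
;; deps.edn sketch: an npm-using consumer excludes the Cljsjs artifact,
;; relying on :npm-deps to bring in the JS code instead
{:deps {cljs.java-time/cljs.java-time
        {:mvn/version "0.1.9"
         :exclusions  [cljsjs/js-joda]}}}
```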
The situation for library authors would of course be more straightforward if all Cljs users were using npm, but I guess a significant proportion don't. It would be cool if the Clojure survey could track that more precisely. Cuerdas is one such Cljsjs-depending library, and has clearly had some issues when trying to drop the Cljsjs dependency.
As I say this guide is correct to the best of my knowledge and is something I would have found really helpful when first creating a Clojurescript library. If you have any feedback, corrections etc, I'd love to know!
Thanks to David Nolen for explaining some of the finer points to me!
`cljc.java-time`, `time-literals` and `tick`. Yes, you heard right, yet more date-time libraries! In the talk I argue that they provide novelty with respect to cross-platform development and improve the overall situation for Clojurescript date-time work. This post gives some updates on what has happened since and considers future developments for these libraries.

It's always hard to know how much usage open source projects are getting, but these young libraries are definitely seeing some healthy activity in terms of Github issues, contributions and stars. As expected, tick has had the majority of activity, but I'm aware of people using the ancillary libs in isolation too.
For the rest of the post I'll talk about the todo list that I presented in the Clojure/North talk, how it has progressed and what's new on the list. I'd welcome PRs on any of these items but if you want any of them expedited by myself then please get in touch
Useful to any Clojurist working with java.time, either with or without a wrapper library. java.time entities are printed as tagged literals that can be pasted back in to the REPL, written to files, sent over the wire and so on.
Examples:
#time/date "1969-07-20"
#time/time "20:17"
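As a sketch of how the literals round-trip at a Clojure REPL - the setup fn name here is from my memory of the time-literals README, so treat it as an assumption:

```clojure
(require '[time-literals.read-write :as rw])

;; install print handlers so java.time objects print as tagged literals
(rw/print-time-literals-clj!)

;; a LocalDate now prints in a form that can be read straight back in
(pr-str (java.time.LocalDate/parse "1969-07-20"))
```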
There has been some interest in having Transit encodings using the literals. Support for `datafy/nav` is another possible feature, but although it has a similar theme of representing data, it isn't that closely related to `time-literals` so might live in a separate repo.
This is a Clojure(Script) library that mirrors the java.time api through kebab-case-named vars.
(ns my.cljc
  (:require [cljc.java-time.local-date :as ld]))

;; create a date
(def a-date (ld/parse "2019-01-01"))

;; add some days
(ld/plus-days a-date 99)
As this library only aims to mirror java.time and is generated from it, the api was essentially 'done' on the first release. The type hinting has been improved since then, thanks to work which has now become Tortilla.
Having (clojure.spec) specs and generators for java.time is something I made a start on, but while spec itself is changing, any lib using it is going to be subject to the same flux. There has been some interest in having Malli use cljc.java-time for its date and time logic, but IMO cljc.java-time brings too much baggage on the Clojurescript side - although read on to the next section for a possible way through for this and other libraries.
Whatever direction these specification efforts take, a set of predicate functions for the java.time entities is a generally useful thing and a recent contribution to cljc.java-time added instance predicates for all java.time entities, for example (date? x)
.
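A sketch of what using such predicates looks like - the namespace and exact predicate names are assumed here for illustration, so check the library for the real ones:

```clojure
;; hypothetical: assumes a predicate fn per java.time entity has been referred in
(require '[cljc.java-time.local-date :as ld])

(date? (ld/parse "2019-01-01"))  ;; true for a LocalDate instance
(date? "2019-01-01")             ;; false for a plain string
```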
The other open issues are about listing discrepancies between the underlying time libs and wrapping the further flung corners of java.time.
This Clojurescript library is used by `cljc.java-time` (see above). It exposes the npm library Js-Joda (a faithful and mostly complete JS implementation of java.time) and extends the Clojurescript equivalence, comparison and hashing protocols to its entities.
Because js-joda existed, it was a relatively small step to make a cross-platform time library, but although it is an enabler it is also something of a hindrance for two main reasons:
1) The myriad ways Clojurescript users consume npm libraries
2) Its size (minimum of 43k, gzipped)
On point 1), making a Clojurescript library that uses an npm lib sometimes feels a bit like being in the wild west, but at the end of the day, whatever setup you have it's not too hard to get this library working and if you can consume foreign-libs, it 'just works' out of the box. Having a single command that will drop you into a node repl with tick is pretty cool, for example:
clj -Sdeps '{:deps {org.clojure/clojurescript {:mvn/version "1.10.597" } tick {:mvn/version "0.4.23-alpha"} }}' -m cljs.main -re node --repl
Regarding point 2), I would say that in many contexts (including client projects I am currently engaged in) the size is simply not an issue when traded off against what the library provides.
In other contexts the size would be a problem though. I have briefly looked into reducing it through minification & Dead Code Elimination, and although I've made a little progress, I don't think it's going to be possible to bring it down hugely or to be on a par with `goog.date`.
These two issues together mean that `cljs.java-time` & related libs probably aren't going to be used by other Clojure(Script) libraries needing date/time functions.
I'd say the best looking solution on the horizon is the new platform date-time library being developed for Javascript aka tc-39/proposal-temporal (hereafter PT). If JS had a good date-time api built-in then it wouldn't be necessary to bring your own of course, so no need for Js-Joda and problems 1) and 2) go away, simples!
The PT api is currently being finalized and experimental work is underway to reimplement Moment.js with it. How far away it is from being included in the next version of Node, Chrome etc I really can't say. Even when it is there, the world's browsers won't get upgraded overnight so polyfills will be needed for some time in many cases.
That aside, what does PT look like? Unfortunately from a cross-platform lib developer's PoV it is not that similar to java.time. Some obvious differences are naming and the entities where methods reside. Also there is no ZonedDateTime, and the Duration and Period classes have been combined. However, although it looks pretty different, my feeling is that the differences are quite superficial. If you wanted a cross platform Clojure api that used PT and java.time, I think extending tick's protocols to the PT entities would likely be a good starting point, so you might end up with a `tick-lite` that doesn't use js-joda or any npm lib.
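To illustrate the idea, here is a hypothetical sketch of extending a tick-style protocol to a Temporal entity - the protocol name and method are made up, and `Temporal.PlainDate`'s `add` method is from the proposal as I understand it:

```clojure
;; Clojurescript sketch - assumes the Temporal API is available globally
(defprotocol ITimeShift
  (forward [t amount] "Shift t forward by the given duration-like amount"))

(extend-protocol ITimeShift
  js/Temporal.PlainDate
  (forward [t amount] (.add t amount)))
```

The same protocol would be extended to java.time.LocalDate on the JVM side, giving one cross-platform api with no bundled date-time implementation.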
This is a Juxt library that I made cross-platform and am now helping to maintain. I'll try to briefly describe it as having the following features:
1) A single-ns, intuitive api over java.time
2) Batteries-included - it depends on and configures `cljc.java-time`, `time-literals` & additional js-joda locale & timezone libs
3) Power tools - Interval Algebra and Scheduling
Unsurprisingly `tick` appears to have the most traction of all the libs (despite its alpha status) and it's been great to see a number of bug fixes and minor contributions from the community since April.
In the main api, the things I'd most like to see addressed include `>>`/`<<` vs `+`/`-`, and `range`. See the full list of issues for other open items.
It has been good to see a positive response to the talk and the libraries. In summary, there have been some minor bug fixes & tweaks since April, but I've outlined what I see as some potential features that could exist in future - if people want them! As I say, I'd welcome PRs on any of these items but if you want any of them expedited by myself then please get in touch.
Where possible I write Re-frame tests synchronously, using `day8.re-frame.test/run-test-sync` and faking server responses with synchronous promises. This is nice because they're easier to understand and debug. Sometimes though, async is necessary. The Clojurescript site describes how to do async tests using the `cljs.test/async` macro, and Re-frame has a nice `run-test-async` wrapper around that.
If you are using `cljs.test/async` you have to make sure your test code calls the `done` function in every case, including on errors, timeouts etc. So to remove some boilerplate, here's a new utility which provides a macro like `cljs.test/async`, but which on timeout or uncaught failure (exception or failed promise) will fail the test, calling the `done` function for you.
Here's how you'd use it:
(ns myns.ns
  (:require [clojure.test :refer [is deftest]]
            [widdindustries.timeout-test :refer [async-timeout async-timeout-at]]))

(deftest my-test
  (async-timeout done
    ;; do some stuff that will call `done` when it succeeds.
    ;; lib expects any async body will result in a promise
    ))
If you have any feedback please comment or raise an issue.
Henry
`with-redefs` temporarily changes the root binding of a var, visible to all threads. An alternative for doing something similar is `with-bindings`, and that is slightly different in that the new bindings are only seen within the context of the current thread. If, for example, you use `with-bindings` on a var and the value of the var is referenced during a `clojure.core/map` operation, then it is possible you won't see the temp binding, since `map` is lazy and may be realized in a different thread.
Also, `with-bindings` requires that the rebound vars are dynamic.
So... `with-redefs` is generally a more powerful go-to tool than `with-bindings`?
No - `with-bindings` is more generally used on vars that are planned to get rebound, whereas `with-redefs` is intended for when you want to change the way things normally work; for example, when you're running a test and decide you don't really want to call an external system, you can use `with-redefs` to stub it.
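A minimal sketch of that stubbing pattern - all the names here are made up for illustration:

```clojure
(require '[clojure.test :refer [deftest is]])

(defn fetch-user [id]
  ;; imagine this hits an external system
  (throw (ex-info "network call!" {:id id})))

(defn greeting [id]
  (str "hello, " (:name (fetch-user id))))

(deftest greeting-test
  ;; replace the root binding of fetch-user for the extent of the body
  (with-redefs [fetch-user (fn [_] {:name "alice"})]
    (is (= "hello, alice" (greeting 1)))))
```

Because the var is dereferenced at call time, `greeting` picks up the stubbed `fetch-user` without any change to production code.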
However, beware that `with-redefs` is not foolproof. Consider rebinding a function `foo`:
(with-redefs [a.b.c/foo my-temp-fn]
... body that calls foo at some point, possibly from another thread)
Will the body here always see your temp binding?
No
Why not?
Well, interleaved threads is one situation.
Let's say this code is called from two threads and this is the order of events:
1) Thread A executes the `with-redefs`
2) Thread B executes the `with-redefs`
3) Thread A executes the body and exits the `with-redefs` block (thereby restoring the root binding of `foo`)
4) Thread B now executes the body and does not see the temp binding!!
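The interleaving above can be sketched like this - the timings are contrived, so the failure is only probabilistic, which is exactly what makes it so confusing in real test suites:

```clojure
(def foo :root)

(defn observe []
  (with-redefs [foo :temp]
    (Thread/sleep (rand-int 20)) ;; simulate work inside the block
    foo))                        ;; var deref happens here, at runtime

;; two threads entering/leaving with-redefs at overlapping times -
;; the slower one can observe :root after the faster one has restored it
(let [a (future (observe))
      b (future (observe))]
  [@a @b])
```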
The docs for `with-redefs` say it is handy for tests, which implies that if you want to change a var in non-test code, then `alter-var-root` (assuming it is done once, on startup) is the way to go. However, there's nothing to stop you hitting an interleaving problem in tests, unless you are running your tests serially and the `with-redefs` wraps the test body. There isn't a `with-redefs-visible-only-via-this-block` or equivalent.
Use `with-redefs` cautiously. Remember this blog post when scratching your head about why some code is not seeing a temp binding.
The idea of 'Separate Decisions from Dependencies' is to use only pure functions for code that 'makes decisions', and to wire those functions together as necessary with all the plumbing bits of your system (the parts that get your data to and from databases/services/users etc). Rich Hickey would call such a separation a Simplification.
Maybe you think you are doing this already, but ask yourself:
Is all your business logic available cross-platform?
For Clojurists that means your business logic is (or could be) in .cljc files. For JS/Node developers that generally means all your business logic modules are free of IO.
Doing this should be zero-cost, but I rarely see it in the wild.
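As a tiny sketch of the idea - a pure decision function in a `.cljc` file (all names here are invented), equally loadable from Clojure on the server and Clojurescript in the browser:

```clojure
(ns myapp.decisions)  ;; lives in src/myapp/decisions.cljc

(defn discount
  "Pure: data in, decision out. No IO, no host interop."
  [{:keys [order-total loyal-customer?]}]
  (cond
    (and loyal-customer? (> order-total 100)) 0.15
    (> order-total 100)                       0.10
    :else                                     0.0))
```

The plumbing (ring handler, re-frame event, React Native screen) calls this fn with plain data and acts on the result - the decision itself never touches the platform.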
Many authors and speakers have expounded the virtues of Separating Decisions from Dependencies or Isolating Computation From State [1], typically citing reliability/testability/visibility as benefits. I am totally in agreement with them, but why go further and say Decisions code must be cross platform?
Thinking here of a typical modern web/mobile application, we have situations like:
In all these situations we need to have the same logic wherever the app runs.
Cross-Platform business logic is not typically available in stacks where the various app components (web ui, mobile app, server) are on disparate platforms. Clojure and Node are two stacks where I've personally used this idea to great effect, including on Mobile applications written on React Native.
Briefly stated, if you've not got cross-platform logic the problems you might have are:
The JVM and JS runtimes differ significantly, for the purposes of business logic that usually means:
[1] A couple of my favourites are Gary Bernhardt's Boundaries and Stuart Sierra's Thinking in Data.
[2] cljs-time only partially implements clj-time. Working in finance as I mostly do, dates are everywhere and the differences too often leak out.