Hey folks, quick update on this - good news! I got blocking semantics working (using a service worker hack). The release isn't ready yet - hopefully tomorrow or in a few days - but I wanted to share some progress.
Blocking semantics
You can block on a future with @:
@(future (println :blah) (+ 1 2))
;:blah
;=> 3
You can yield as well:
@(future (println :blah) (yield 4) (+ 1 2))
;:blah
;=> 4
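For readers coming from JS, here's roughly how yield-style early resolution can be modeled on top of a plain promise. This is my own sketch, not in-mesh's actual implementation - future and yieldFn below are hypothetical stand-ins:

```javascript
// Hypothetical sketch: a "future" whose body receives a yield-style
// escape hatch. Calling it resolves the future with that value, and
// the body's normal return value is then ignored.
const future = (fn) =>
  new Promise((resolve) => {
    let yielded = false;
    const yieldFn = (v) => { yielded = true; resolve(v); };
    const ret = fn(yieldFn);
    if (!yielded) resolve(ret);
  });

future((y) => { y(4); return 1 + 2; }).then((v) => console.log(v)); // 4
future((y) => 1 + 2).then((v) => console.log(v));                   // 3
```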
On the main thread, future returns a promise:
(-> (future (println :blah) (+ 1 2))
    (.then #(println :res %)))
;:blah
;:res 3
This is where yield is especially useful:
(-> (future (-> (js/fetch "http://api.open-notify.org/iss-now.json")
                (.then #(.json %))
                (.then #(yield (js->clj % :keywordize-keys true)))))
    (.then #(println "ISS Position:" (:iss_position %))))
;ISS Position: {:latitude 46.5746, :longitude 3.4638}
;=> #object[in-mesh.core "e7e5a816-b530-4ddc-b389-d6b6713605af" {:status :pending, :val nil}]
Because it returns a promise on the main thread, you can use promesa or your usual promise tricks:
(-> (js/Promise.all
     #js [(future 1)
          (future 2)
          (future 3)])
    (.then #(println :values (vec %))))
;:values [1 2 3]
;=> #object[Promise [object Promise]]
Of course, in a web worker, you can just use synchronous blocking semantics, returning the actual values:
(let [a @(future 1)
      b @(future 2)
      c @(future 3)]
  (println :values [a b c])
  [a b c])
;:values [1 2 3]
;=> [1 2 3]
injest: Auto-parallelizing transducification, now in CLJS
So, with that stuff in place, I was able to port the parallel thread operator =>> from injest to ClojureScript:
(defn flip [n]
  (apply comp (take n (cycle [inc dec]))))

(->> (range 1000000)
     (map (flip 100))
     (filter odd?)
     (map (flip 100))
     (map inc)
     (map (flip 100))
     (apply +)
     (println)
     time)
;"Elapsed time: 11452.300000 msecs"
;=> 250000500000
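As a sanity check on that 250000500000 (my arithmetic, in JS rather than CLJS for brevity): flip composes an even number of alternating incs and decs, so each (map (flip 100)) stage is pure identity-cost work, and the pipeline reduces to summing (inc x) over the odd x below 1000000:

```javascript
// flip(100) = comp of inc/dec alternating 100 times = identity,
// so the pipeline is just: sum (x + 1) for each odd x in [0, 1000000).
let total = 0;
for (let x = 0; x < 1000000; x++) {
  if (x % 2 === 1) total += x + 1;
}
console.log(total); // 250000500000
```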
That’s the vanilla ->> operator. With any significant per-element work, the auto-transducifying thread macro x>> doesn’t win you much, because you spend much more time on actual work than you do on boxing:
(x>> (range 1000000)
     (map (flip 100))
     (filter odd?)
     (map (flip 100))
     (map inc)
     (map (flip 100))
     (apply +)
     (println)
     time)
;"Elapsed time: 10722.200000 msecs"
;=> 250000500000
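What transducification buys you, in spirit, is fusing the stages into a single pass so no intermediate collections are built between them. Here's a JS illustration of staged vs. fused - my sketch of the general idea, not injest's implementation:

```javascript
// Cheap stand-in for (flip 100): n alternating +1/-1 steps, net identity.
const flip = (n) => (x) => {
  for (let i = 0; i < n; i++) x += i % 2 === 0 ? 1 : -1;
  return x;
};

const xs = Array.from({ length: 1000 }, (_, i) => i);

// Staged (like ->>): each stage materializes an intermediate array.
const staged = xs
  .map(flip(100))
  .filter((x) => x % 2 === 1)
  .map((x) => x + 1);

// Fused (what a transducer compiles down to): one pass, no intermediates.
const fused = [];
for (const x0 of xs) {
  const x1 = flip(100)(x0);
  if (x1 % 2 === 1) fused.push(x1 + 1);
}
// Same results either way; the win is avoided allocation, which is why
// it only shows up when the per-element work is cheap.
```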
This is where the auto-transducifying, auto-parallelizing thread macro comes in handy:
(=>> (range 1000000)
     (map (flip 100))
     (filter odd?)
     (map (flip 100))
     (map inc)
     (map (flip 100))
     (apply +)
     (println)
     time)
;"Elapsed time: 5615.500000 msecs"
;=> 250000500000
On the main thread, =>> returns a promise. Here, we’re moving the println into the .then:
(-> (=>> (range 1000000)
         (map (flip 100))
         (filter odd?)
         (map (flip 100))
         (map inc)
         (map (flip 100))
         (apply +)
         time)
    (.then #(println :res %)))
;"Elapsed time: 5780.800000 msecs"
;:res 250000500000
;=> #object[in-mesh.core "516360f7-7593-4fba-bbc2-8c1a94cf4c6f" {:status :pending, :val nil}]
Coming soon
So that’s pretty fascinating. There’s still some polish to add around the API; some resiliency around the worker pools; better error handling; automatic transfer of transferables; nailing down simpler build configurations across all three main build systems (CLJS built-in, figwheel and shadow); porting it to Node and nbb/sci; and maybe, one day, a version of =>> that can work completely async, without the service worker hack.
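On the transferables point: transferring moves a buffer's memory to the other thread instead of copying it, leaving the source detached. structuredClone demonstrates the semantics (a standard platform API, nothing in-mesh-specific):

```javascript
// Transferring an ArrayBuffer moves it: the clone keeps the bytes,
// and the original is left detached (byteLength 0).
const buf = new ArrayBuffer(16);
const moved = structuredClone(buf, { transfer: [buf] });
console.log(moved.byteLength, buf.byteLength); // 16 0
```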
But all in all, I’m pretty satisfied with the result: doubling performance over the non-parallel versions in the browser, even while we’re serializing everything across the workers. For some workloads you can see 3 to 5 times the performance, but I’m not going to get into a shootout in this post - more later on the metrics. Interestingly, once we’re automatically transferring the transferables and parallelizing work on Typed Arrays across worker pools with =>>, I think we will see speedups on par with what we see on the JVM.
Anyway, more to come - should have another beta out soon. Happy hacking!