Steal Like an Artist

10 Things Nobody Told You About Being Creative

Steal Like an Artist: 10 Things Nobody Told You About Being Creative is a short, quick read with lots of ideas about being creative and finding inspiration.

Author: Austin Kleon

Create or Hate

Make Things!

Create or Hate is a quick, short read but very inspiring. Get out there and create things!

Author: Dan Norris

Other books

Rake Package to Create Zipfile

Small Rakefile to package a WordPress plugin into a zip file that can be installed by uploading.

The Rake::PackageTask wants FileTasks that describe how to build the files. Since the files already exist and we don't need to build anything, we just declare what they are.

require 'rake'
require 'rake/packagetask'

# These FileTasks don't build anything; the files already exist,
# so they just tell Rake what the inputs are.
file 'README.txt'
file 'admin/**'
file 'includes/**'
file 'languages/**'
file 'public/**'
file 'index.php'
file 'LICENSE.txt'
file 'remote-api.php'
file 'uninstall.php'

Rake::PackageTask.new('remote-api', :noversion) do |p|
  p.need_zip = true
  # Include everything the file tasks above describe
  p.package_files.include('README.txt', 'LICENSE.txt', 'index.php',
                          'remote-api.php', 'uninstall.php',
                          'admin/**', 'includes/**', 'languages/**', 'public/**')
end

Then to create the zip, just run rake package. It’ll create the file in the pkg/remote-api directory, or whatever name you gave the package.

Clojure Duct setup with MongoDB, Foundation, and Buddy - Part 2

Zurb Foundation && SASS

Add Zurb Foundation, or whatever webjars you want to use, to the dependencies in project.clj.

 [org.webjars/foundation "6.2.0"]
 [org.webjars/font-awesome "4.6.2"]

Set up a SASS file

I put these in src/sass/. You can import from webjars like this.

@import 'foundation/scss/foundation';
@import 'foundation/scss/util/mixins';
@import 'font-awesome/scss/font-awesome';

Watching SASS files in dev

We need to add SASS support. I checked out sass4clj. There is a lein plugin, but it didn't play well with figwheel, so I ended up borrowing ideas from it and the sass4clj project to integrate with figwheel.

I started with this SASS Watcher, which was a good starting point but didn't load the webjars. So the next step is to replace it with sass4clj, which does reference webjars.

In dev.clj, require these:

[sass4clj.core :refer [sass-compile-to-file]]
[watchtower.core :refer :all]

Next, create a new component to watch the SASS files and recompile on changes. A lot of this came from the lein-sass4clj project.

(defn- main-file? [file]
  (and (or (.endsWith (.getName file) ".scss")
           (.endsWith (.getName file) ".sass") )
       (not (.startsWith (.getName file) "_"))))

(defn- find-main-files [source-paths]
  (mapcat (fn [source-path]
            (let [file (io/file source-path)]
              (->> (file-seq file)
                   (filter main-file?)
                   (map (fn [x] [(.getPath x)
                                 (.toString (.relativize (.toURI file) (.toURI x)))])))))
          source-paths))

(defn watch-sass
  [input-dir output-dir options]
  (println (format "Watching: %s -> %s" input-dir output-dir))
  (let [source-paths (vec (find-main-files [input-dir]))
        sass-fn (fn compile-sass [& _]
                  (doseq [[path relative-path] source-paths
                          :let [output-rel-path (clojure.string/replace relative-path #"\.(sass|scss)$" ".css")
                                output-path     (.getPath (io/file output-dir output-rel-path))]]
                    (println (format "Compiling {sass}... %s -> %s" relative-path output-rel-path))
                    (sass-compile-to-file
                      path
                      output-path
                      (-> options
                          (update-in [:output-style] (fn [x] (if x (keyword x))))
                          (update-in [:verbosity] (fn [x] (or x 1)))))))]
    (watchtower.core/watcher [input-dir]
      (watchtower.core/rate 100)
      (watchtower.core/file-filter watchtower.core/ignore-dotfiles)
      (watchtower.core/file-filter (watchtower.core/extensions :scss :sass))
      (watchtower.core/on-change sass-fn))))

(defrecord SassWatcher [input-dir output-dir options]
  component/Lifecycle
  (start [this]
    (if (:sass-watcher-process this)
      this
      (do
        (println "Figwheel: Starting SASS watch process:" input-dir output-dir)
        ;; watchtower's watcher returns a future, so we can cancel it in stop
        (assoc this :sass-watcher-process (watch-sass input-dir output-dir options)))))
  (stop [this]
    (when-let [process (:sass-watcher-process this)]
      (println "Figwheel: Stopping SASS watch process")
      (future-cancel process))
    (dissoc this :sass-watcher-process)))

Next, set up a config for compilation and add the component to the dev system.

(def sass-config
  {:input-dir  "src/sass" ; location of the sass/scss files
   :output-dir "resources/nspkt/ui/public/css"
   :options    {:source-map true
                ;; :output-style can be :nested, :compact, :expanded, or :compressed
                ;; :verbosity can be 1 or 2
                }})

(defn new-system []
  (into (system/new-system config)
        {:figwheel (figwheel/server (:figwheel config))
         :sass     (map->SassWatcher sass-config)}))

Now we’re watching the files and updating on change. I’d expect figwheel to pick up those changes and push a reload, but something doesn’t seem to be working there.

Clojure Duct setup with MongoDB, Foundation, and Buddy

Set up a new duct site

Why Duct? It’s a great starting point that uses most of what I want: Compojure, Ring, Component, ClojureScript, and 12-factor methodology.

lein new duct nspkt.ui +cljs +example +heroku +site
cd nspkt.ui && lein setup

OK, we need some other stuff: MongoDB, a CSS framework (Foundation), and authentication.

MongoDB Setup

I like Monger, so let’s add that. In project.clj, add the dependency.

[com.novemberain/monger "3.0.2"]

We need to add a connection string to the env for monger. This took a minute to figure out.

In the project.clj file, under :profiles > :project/dev > :env, add :url. This will write the values to .lein-env.

{:port "3000", :url "mongodb://localhost:27017/nspkt"}

Then we need to update config.clj to grab the value, like so.

(def environ
  {:http {:port (some-> env :port Integer.)}
   :db   {:url  (some-> env :url)}})

And add a component for the system.

(ns nspkt.ui.component.mongodb
  (:require [com.stuartsierra.component :as component]
            [monger.core :as mg]))

(defrecord MongoDb [url]
  component/Lifecycle
  (start [this]
    (let [{:keys [conn db]} (mg/connect-via-uri (:url this))]
      (assoc this :conn conn :db db)))

  (stop [this]
    (when-let [conn (:conn this)]
      (mg/disconnect conn))
    (dissoc this :conn :db)))

(defn db-component [options]
  (map->MongoDb options))

Next, add the component to the system and have the example endpoint depend on it. Don’t forget to add it to the :require list. In system.clj.

(-> (component/system-map
     :app     (handler-component (:app config))
     :http    (jetty-server (:http config))
     :db      (db-component (:db config))
     :example (endpoint-component example-endpoint))
    (component/system-using
     {:http    [:app]
      :app     [:example]
      :example [:db]}))

And use the component in the example endpoint, endpoint/example.clj.

(ns nspkt.ui.endpoint.example
  (:require [compojure.core :refer :all]
            [monger.collection :as mc]
            [ :as io]))

(defn example-endpoint [{:keys [db] :as config}]
  (context "/example" []
    (GET "/" []
      ;; crude dump of the records; good enough to prove the wiring works
      (str (mc/find-maps (-> db :db) "reports")))))

Great! Let’s make sure everything is working. We need to run lein deps and start the REPL again.

If you run into trouble, it’s sometimes easier to see what’s going on by running lein run.

I added a test record in MongoDB just to see everything works.

It’s not pretty, but it’s pulling stuff out of the DB! Now let’s add a CSS framework to help things look a little better.

Migrate Google Sites to Jekyll



Export the site with google-sites-backup (which uses gdata-python-client):

google-sites-backup/ gdata-python-client/ google-sites-backup/

Convert to Markdown

Install reverse_markdown

cd into the exported project

# Generate a script of conversion commands ("" is a placeholder name;
# the original script name didn't survive the export)
find . -iname "*.html" -exec echo "tidy -q -omit -b -i -c {} | reverse_markdown > {}.md" \; | sed s/\.html\.md/\.md/ >
chmod +x
# Then remove the original HTML files
find . -iname "*.html" | xargs rm


Add frontmatter

find . -iname "*.md" -exec perl -0777 -i -pe 's/<head>.*<\/head>//igs' {} \;
find . -iname "*.md" -exec perl -0777 -i -pe 's/^# (.*)$/---\nlayout: page\ntitle: $1\n---/m' {} \;

Clean up leftover extras: stray Â characters, spaces, and extra header lines.

find . -iname "*.md" | xargs -t -I {} sed -i'' 's/Â//g' {}
find . -iname "*.md" -exec perl -0777 -i -pe 's/^[\s\|]*$//gm' {} \;
find . -iname "*.md" -exec perl -0777 -i -pe 's/^.*?---/---/ms' {} \;
find . -iname "*.md" -exec perl -i -pe 's/^ ([^ ].*)$/$1/g' {} \;

Remove absolute links

ack --ignore-dir=_site -l "\/a\/\/wiki" | xargs perl -i -pe "s/https:\/\/sites\.google\.com\/a\/roximity\.com\/wiki//g"

Fix resource links

ack --ignore-dir=_site -l "\/_\/rsrc\/\d*\/" | xargs perl -i -pe "s/\/_\/rsrc\/\d*\///g"

Rename %20 to underscores in file names.

for i in `find . -name "*%20*"`; do mv -v $i `echo $i | sed 's/%20/_/g'` ; done
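To see the rename in action on a scratch directory (paths invented for the demo):

```shell
# The %20 rename loop from above, run against throwaway files
rm -rf /tmp/rename-demo
mkdir -p /tmp/rename-demo && cd /tmp/rename-demo
touch "My%20First%20Page.html" "plain.html"
for i in `find . -name "*%20*"`; do mv -v $i `echo $i | sed 's/%20/_/g'` ; done
ls
```

The unquoted backtick loop is safe here because the %20 names contain no literal spaces; files without %20 are left alone.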

Still had to do a fair amount of clean up from the converted markdown.


The Jekyll plugins below make the structure and navigation match the Google Site somewhat.

Lots of our pages had files as downloads. I like the idea of putting downloads in a sub directory and having them auto-populate on the page. Also, some of our navigation is based on pages in a matching directory. This plugin populates a sub_pages collection and a downloads collection; the view renders those collections.

module AssociateRelatedPages
  class Generator < Jekyll::Generator
    def generate(site)
      page_lookup = site.pages.reduce({}) { |lookup, page| lookup["/" + page.path] = page; lookup }

      site.pages.each do |page|
        subdir = File.join(site.source, page.dir, page.basename)
        next unless File.exist?(subdir) &&

        entries = Dir.entries(subdir)

        # Markdown files in the matching sub directory become sub_pages
["sub_pages"] ={ |e|
          e =~ /\.md$/
        }.map { |e|
          page_lookup[File.join(page.dir, page.basename, e)]

        # Everything else in the sub directory is treated as a download
["downloads"] = entries.reject { |e|
          e == "." || e == ".." || e =~ /\.md$/ ||, e))
        }.map { |e|
          download = File.join(subdir, e)
          stat =
            "title" => e,
            "url"   => File.join(page.basename, e),
            "size"  => stat.size
      end
    end
  end
end

The layout then renders those collections:
{% if page.sub_pages.size > 0 %}
  {% for page in page.sub_pages %}
      <a href="{{ page.url | prepend: site.baseurl }}">{{ page.title }}</a>
  {% endfor %}
{% endif %}
{% if page.downloads.size > 0 %}
  <div class="post-downloads">
    {% for download in page.downloads %}
        <a href="{{ download.url | prepend: site.baseurl }}">{{ download.title }} ({{ download.size }}b)</a>
    {% endfor %}
  </div>
{% endif %}

The navigation on the google site was mostly based on sub directories. This creates a nav collection used to build the navigation.

module HierarchicalNavigation
  class Generator < Jekyll::Generator
    # Builds {dir => { 'page' => Page, 'sub' => [Page] }}
    def generate(site)
      nav = {}
      site.pages.sort_by(&:dir).each do |page|
        dirs = page.dir.split('/')
        dir = dirs[1] || ''

        if dirs.count <= 2
          nav[dir] ||= { 'page' => nil, 'sub' => [] }
          if page.basename == 'index'
            nav[dir]['page'] = page
          else
            nav[dir]['sub'] << page
          end
        end
      end['nav'] = nav.values
    end
  end
end

And the template that renders it:
{% for nav in['nav'] %}
  {% if %}
  <li class="{% if page.url contains %}active{% endif %}">
    <a class="page-link" href="{{ | prepend: site.baseurl }}">{{ }}</a>
    {% if page.url contains %}
      {% for sub in nav.sub %}
        {% if sub.title %}
          {% capture sub_dir %}{{ sub.url | remove: ".html" | append: "/" }}{% endcapture %}
          <li class="{% if page.url contains sub.url or page.dir == sub_dir %}active{% endif %}">
            <a class="page-link" href="{{ sub.url | prepend: site.baseurl }}">{{ sub.title }}</a>
          </li>
        {% endif %}
      {% endfor %}
    {% endif %}
  </li>
  {% endif %}
{% endfor %}

Spark RDD to CSV with headers

We have some Spark jobs whose results we want stored as a CSV with headers so they can be used directly. Saving the data as CSV is pretty straightforward: just map the values into CSV lines.

The trouble starts when you want that data in one file. FileUtil.copyMerge is the key for that. It takes all the files in a directory, like those output by saveAsTextFile, and merges them into one file.

Great, now we just need a header line. My first attempt was to union an RDD with the header and the output RDD. This works sometimes, if you get lucky: since union just smashes everything together, more often than not the CSV ends up with the header row somewhere in the middle of the results.

No problem! I’ll just prepend the header after the copyMerge. Nope: Hadoop files are effectively write-once. You can get append to work, but it’s still not a great option.

The solution was to write the header as a file BEFORE the copyMerge, using a name that sorts first in the resulting CSV! Here’s what we ended up using:

(ns roximity.spark.output
  (:require [sparkling.conf :as conf]
            [sparkling.core :as spark]
            [sparkling.destructuring :as de]
            [ :as csv]
            [ :as io])
  (:import [org.apache.hadoop.fs FileUtil FileSystem Path]
           [ URI]))

(defn- csv-row [values]
  (let [writer (]
    (csv/write-csv writer [values])
    (clojure.string/trimr (.toString writer))))

(defn save-csv
  "Convert to CSV and save at url.csv. url should be a directory.
   headers should be a vector of keywords that match the map in a tuple value,
   in the order you want the data written out."
  [url headers sc rdd]
  (let [header (str (csv-row (map name headers)) "\n")
        file   url
        dest   (str file ".csv")
        conf   (org.apache.hadoop.conf.Configuration.)
        srcFs  (FileSystem/get (URI. file) conf)]
    (FileUtil/fullyDelete (io/as-file file))
    (FileUtil/fullyDelete (io/as-file dest))
    (->> rdd
         (spark/map (de/value-fn (fn [value]
                                   (let [values (map value headers)]
                                     (csv-row values)))))
         (spark/coalesce 1 true)
         (#(.saveAsTextFile % file)))
    ;; Write the header as its own file; "_header" sorts before "part-00000",
    ;; so copyMerge puts it first.
    (with-open [out-file (io/writer (.create srcFs (Path. (str file "/_header"))))]
      (.write out-file header))
    (FileUtil/copyMerge srcFs (Path. file) srcFs (Path. dest) true conf nil)
    (.close srcFs)))

This works for local files and S3, and it should work for HDFS. Since we’re using S3 and the results are not huge, we use (coalesce 1 true) so that only one part file is written to S3; without that we had issues with too many requests. We could probably use a higher number and find a happy medium, but we just use 1.
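The trick works because copyMerge copies the part files in sorted name order, and _header sorts before part-00000. A minimal shell sketch of the same idea, with local files standing in for the output directory (paths invented for the demo):

```shell
# Emulate the name-ordered merge: the _header file ends up first
rm -rf /tmp/merge-demo
mkdir -p /tmp/merge-demo && cd /tmp/merge-demo
printf 'id,name\n' > _header
printf '1,foo\n'   > part-00000
printf '2,bar\n'   > part-00001
cat $(ls | sort) > /tmp/merged.csv
cat /tmp/merged.csv
```

The merged file starts with the id,name header row, followed by the two part files in order.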

Brutalist Framework