A stupid simple Go-based DNS testing server

View project on GitHub

I’ve been working with DNS a lot recently (mainly on Boulder, the backend behind the free CA Let’s Encrypt), and in doing so have had the fun task of setting up and tearing down a whole range of DNS servers in the quest to find one that would actually make writing and testing DNS clients and resolvers even somewhat easy and pain-free.

The first two servers that I’m sure jump to everyone’s minds are BIND and Unbound. If you’ve ever had to do any kind of DNS administration you can probably get up and running with one of these relatively quickly. But using them programmatically for testing, where you may want to quickly change the contents of zones, add or delete zones, or repeatedly set up and tear down the whole thing on a range of different machines, becomes annoying to handle almost immediately.

Figure 1: Me after getting BIND set up for Travis tests

Instead of spending a couple of hours trying to set up some convoluted harness (don’t even get me started on nsupdate for BIND, or Unbound with TSIG + DDNS updates) to simplify various test procedures, I turned to github.com/miekg/dns, a Go DNS library that I’ve used a number of times before (Boulder heavily uses this library, including for possibly one of the most useless DNS servers ever written, outside of our specific test case of course!). miekg/dns is the basis for Cloudflare’s various DNS server/parser implementations and has proven to be quite a resilient parser and, bonus, it also comes with a great basic server implementation that follows Go’s general server/handler interface.

It was on top of this server implementation that I decided to build my testing server, dns-workbench. This server didn’t really have to be extremely complex or even entirely RFC-compliant; my only real requirements were

  • It should only pretend to be an authoritative server, nothing else fancy DNS-wise
  • It should support a simple way to define zones and records, and should support as many record types as possible (thanks to miekg/dns the second part is taken care of for us)
  • Quick and clean setup: all non-zone related configuration should be provided on the command line, and the only file required should be the zone definition file
  • A simple method to gracefully switch out the zones being served without having to restart the server each time

The combination of Go and miekg/dns makes these tasks pretty easy (taking just over 300 lines in version 0.0.1). Now that we’ve got that out of the way let’s get on to how dns-workbench actually works!

Zone definitions

The zone definition file is written in YAML (and read using gopkg.in/yaml.v2), in order to be somewhat easier on the eyes, and takes the following form

zones:
  bracewel.net:
    bracewel.net:
      a:
        - 1.1.1.1
        - 2.2.2.2
      aaaa:
        - FF01:0:0:0:0:0:0:FB
      caa:
        - 0 issue "letsencrypt.org"
    www.bracewel.net:
      a:
        - 1.1.1.1
        - 2.2.2.2
      aaaa:
        - FF01:0:0:0:0:0:0:FB
  other.com:
    other.com:
      mx:
        - 10 mail.other.com

A quick method takes these definitions and creates a map of all the records for each domain. Record values are all in their traditional DNS presentation format (like the format used in BIND-style zone files, but minus all the domain. TTL IN TYPE cruft, just the good stuff!) and will have a default TTL (3600) added; additionally, the parser will generate SOA and NS authority records for each zone pointing back to the server’s DNS name (localhost by default).
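
For example, the bracewel.net stanza above corresponds, roughly, to the following records in that traditional presentation (the 3600 TTLs being the default the parser tacks on):

bracewel.net.      3600 IN A    1.1.1.1
bracewel.net.      3600 IN A    2.2.2.2
bracewel.net.      3600 IN AAAA FF01:0:0:0:0:0:0:FB
bracewel.net.      3600 IN CAA  0 issue "letsencrypt.org"
www.bracewel.net.  3600 IN A    1.1.1.1
www.bracewel.net.  3600 IN A    2.2.2.2
www.bracewel.net.  3600 IN AAAA FF01:0:0:0:0:0:0:FB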

Reloading zones

dns-workbench allows one of these definition files to be loaded at runtime and can gracefully reload the definitions via a built-in HTTP API provided by net/http. This API exposes a single endpoint, api/reload, which accepts a JSON zones object by POST. If the zones object can be properly parsed the DNS server will wait until all current queries have been responded to before switching out the zones, blocking any new queries until this is done (thanks, mostly, to sync.RWMutex). This process can be done with the dns-workbench reload command or, since it’s just JSON, from pretty much anywhere.
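
For example, a reload from outside the bundled command might look something like this (the listen address/port and the exact JSON field layout here are assumptions, the endpoint and method are as described above):

# hypothetical: POST a new zones object straight at the API
# (the address and port are assumptions, as is the JSON layout)
$ curl -X POST --data '{"zones": {"example.com": {"example.com": {"a": ["3.3.3.3"]}}}}' \
    http://localhost:8080/api/reload

# or just let the bundled command do it
$ dns-workbench reload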

The server itself

The DNS server is run with the dns-workbench run command. All server configuration is handled by a small handful of command line parameters that set the address and port to listen on for the DNS and API servers, as well as various timeouts and a few other things; more information on these can be found by using dns-workbench help [command] (and on the repo page).

Now why don’t we take a quick look at how it actually runs?

Figure 2: Typical operation
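
In text form a session looks roughly like this (the flag for pointing run at a definition file is an assumption, dns-workbench help run has the real ones):

# start the server with a zone definition file (flag name is an assumption)
$ dns-workbench run -zones zones.yml

# then, from another shell, poke it with dig (assuming it's bound to localhost:53)
$ dig @localhost bracewel.net A +short
1.1.1.1
2.2.2.2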

Performance

Figure 3: My reaction

Yeah, not really sure to be honest. It’s extremely lightweight so in theory it should be pretty fast, but I’ve only been using it locally under relatively low load so I have no concrete data. I could run a load tester against it but I can’t really be bothered… if you do, let me know I guess?

Future

There are a bunch of cool features that I’d like to implement, namely much finer-grained control over the zone reloading code, but I don’t really have time at the moment, so if anyone else wants to take a crack at adding things feel free to send a pull request!

The Problems with Corporate Identity Platforms (or what marketers call OAuth)

These days, more often than not, when you attempt to sign up for a new web application or service you are asked to choose from a set of corporate logos, one of which you presumably already have an account with.

Fig a. The usual suspects

These are corporate identity platforms, which provide identity and information access management to you as a free service. They are, generally, based on the second version of the OAuth authorization standard, which allows a service to authenticate your possession of a third-party account (Google, Facebook, Twitter, etc.) as well as request certain information about the account that you authorize. This creates an ideal solution for developers who may not feel comfortable developing an authentication mechanism or securely storing user credentials on their own platform. They can instead use one of the open source OAuth libraries as a plug-and-play alternative.

Fig b. OAuth security model

The OAuth security model can be thought of as a centralized system, (a) in the figure above, where the central node is the third-party account and each connected leaf node is an account on a service using a unique token to authenticate itself with the third party and request data. The only credentials a service stores are the token and the associated account name for the third-party service. If these are compromised the attacker cannot gain control of the third-party account or of any of the other services with which the account is associated, but merely the information the token was initially authorized to access, as shown as (b) in the figure above.

While this model is relatively robust in many situations, providing a high level of service and information compartmentalization, it does create a single point of failure: the root node, the third-party account, has to be trusted by all associated services. This means that if an attacker can compromise the third-party account directly, instead of targeting the leaf services, they implicitly gain control over all the associated services, as shown as (c) in the figure above.

The web security mantra often goes, in part, “use a different password for every service”. This makes sense: if an account for one service is compromised we don’t want to kick off a domino-like effect where the attacker now has access to all our accounts. OAuth seems to swing both ways here. On one hand the token, essentially replacing a password, should never be the same across services and should change often, but on the other hand we are still using a single password to authenticate ourselves to a whole bunch of websites – it’s just happening via an intermediary. If this password can be brute forced, or is carelessly reused for another less security conscious service and subsequently compromised, the dominoes start falling.

However, the real problem with this model doesn’t solely lie with lone attackers or with user password sanity but with state actors. No matter how secure Google’s or Facebook’s OAuth or general authentication implementations are, they are still required to bow to court orders demanding access to your account and therefore all associated services. Or, less obviously, they might have their data centers covertly infiltrated by the more nefarious three letter government agencies, like the NSA, to siphon authentication tokens and credentials directly. Documents leaked by Edward Snowden have shown that not only do these agencies have the capability to massively collect data this way, but that they are actively doing so.

So what’s the best solution? Well… that’s a hard question. There are a number of off-line encryption-based password stores which allow users to abide by the previously mentioned mantra. But these fail the mantra by requiring a master password to decrypt the stored passwords. There has been work on some clients that segment passwords into groups and use different passwords to decrypt each grouping, but this still fails the mantra, just to a lesser extent.

It all eventually boils down to usability. If it were up to security experts everyone would just memorize (or use pen and paper encryption schemes to store) the myriad passwords required to sign in to the ever-increasing number of services we use every day, but this just won’t work. While a number of off-line encryption-based services are making leaps and bounds on secure cross-platform tools, large corporations like Google, Facebook, and Twitter who push the OAuth model have massive platform lock-in. This means that out of the big five or so providers there is a high probability that you already have an account with one, if not more. The lock-in allows these providers to create a high quality and extremely consistent user experience across various platforms which is near impossible for open source or independent developers to match.

Corporations pushing the OAuth model directly profit from the growth of their platforms by profiling user activity, and as such are less inclined to discuss these issues with their users. In the end the user has to weigh the convenience of using the model against the hazards it presents. Unfortunately that decision is often skewed by a lack of information and a lack of high quality alternatives.

Self-decrypting scripts using Bourne and gpg

Don’t trust that pimple-faced junior sysadmin with your holy passwords in shell scripts? Want to store sensitive data in public scripts? (ಠ_ಠ) Trying to hide your source from those pesky insert scripting language here hackers? Something something something question?

No problem, just use the gpg CLI tool, a Bourne shell script, and a weird Bourne/BASH trick to encrypt your shell script (or really any script that can be piped to its respective interpreter) with a passphrase using AES256 (or any of the other algorithms gpg supports) and then wrap it with a Bourne script that decrypts the ciphertext payload and passes it to the respective interpreter.

Ridiculous? Yes. Hilarious? Somewhat. Useful? Questionable. In this post I’ll go over a Bourne tool I wrote, raziel, that automates this process, and how it works. It’s extremely simple, currently taking two parameters, PAYLOAD_FILE and OUTPUT_FILE, and an optional argument -interpreter. The purpose of the final argument is to specify which interpreter (…mhm) the input script should be piped to once the wrapper has decrypted it; if it isn’t used the default is sh -.

View project on GitHub

The final Bourne file that is created contains an extremely basic shell wrapper and the input script ciphertext encoded using base64.
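
Basic usage looks something like this (assuming the script has been saved as raziel.sh; the file names are purely illustrative):

# wrap secret.sh in a self-decrypting loader, gpg will prompt for the
# passphrase to encrypt the payload with
$ ./raziel.sh secret.sh secret-wrapped.sh

# wrap a Python script instead, telling the loader to pipe the decrypted
# payload to python3 rather than the default "sh -"
$ ./raziel.sh report.py report-wrapped.sh -interpreter python3

# running the wrapper prompts for the passphrase, decrypts, and executes
$ ./secret-wrapped.sh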

So how does it work?

The trick that makes this all work is actually quite simple. Both the Bourne and BASH interpreters forgo all the fancy AST, recursive token-based parser nonsense and just evaluate scripts line by line.

Because of this we can append whatever random crap we want to the end of a shell script and, as long as exit is called before the interpreter begins to parse the nonsense, everything will be just fine. All raziel needs to do, then, is come up with a chain of commands to take a script and encrypt it, base64 encode the ciphertext, and append this to the end of a wrapper that can decrypt and run it. An example output script might look like this (note: the appended data here is encoded using base64 but it could be appended as binary data, or uuencoded if you want, base64 is just easier on the eyes)

#!/bin/sh
# raziel loader - v0.0.2
run_payload() {
	local match=$(grep --text --line-number '^PAYLOAD:$' $0 | cut -d ':' -f 1)
	local payload_start=$((match + 1))
	local gpg_agent=$GPG_AGENT_INFO
	GPG_AGENT_INFO=
	tail -n +$payload_start $0 | base64 --decode | gpg -q -d | sh -
	GPG_AGENT_INFO=$gpg_agent
}
run_payload
exit 0
PAYLOAD:
jA0EAwMCIOBW/Y+CR7lgyekGuyeQHVv32eG/ZowkeaNRXVs6lHhqjvFUWlfWKKJ3edcta9OK
XLBnjv4ybVIVnYOB9yG8i+aOEJM9FoU2dS9a9HGMi+0CXdC91mBwRMr9eQRl67WLWSMP8Hti
AxrvztSxfD4EfezyOZus1XFyG9Mp8Bk6wCjnyll1A6UD1C04Rx17KkPPovvCxXq4h0OLrppa
8p+b9l0q51YTCOv84bO2WLSpUAEFY518+9jCDqkVBeJ66XXXtn0T4p1VOBEWlzY5TB3xspKG
tGMdZtsbAqfMQSwAjG+Na//8jRTZfH6ghlu0r+9I+1bip8x9a9KCQzxZvwlMPzdGQwqC1qsV
xjlQXcKbUFx7rl+HS7MY9A6qwfCL6sSHTNFlTPq/JtWKckvNYxyyMtcgqL3cjTd077yEi3Al
5uXEaQFiFEpwcPtAX6+7yf9t5FY3fqYtRje7DCDlf5ckP9B+CyZ01BcfOGDIcuOAE3q1/15R
3ioqp5d4K7NS9jwAd8Zeecf0HlAbCMpcz/k/iO7JoxSEVY8+UidPqXxKuV6lkVFb4Ept/BuB
YOP8Z6Rpr5w9LxX0Rtd7I3+bixuID9f+KvsK66nGqQrXRuXgT96gD4v5ulHRVsbJB5cTOm1J
wjE5IllHdhfsntNrIRI0Tmr1Rm4H6vx6D222f9J94c9hnKEa2GdRkYZaUsAkUu5X+7YXm5Vr
EUEbU+W1Ddbd52d8aRBSGzMvc7m9pX5XrsU1KHYM5yP4IduzRgO40gG9LhdOgsYc80wOle2Q
4j79DNrgdMb/9oRL+7iQ/ot1Pw7qXLNY9eOJq3BovA4WGauUK5pWQxcftUUioQ/rHNKC1MKY
TFrZF7LIk12wfftUj6zzka3R4vFlClup1Lxs0GDA4cRTi2yYAUhk6yR225R4fBrizKGXcRPN
2CYklk+ds5Pw7WkmdnphWiE4erXdYn8YfbkMWNqY9ABnO5hvhJ+hYRqy2KqbTWUl2e7p1GPk
LgUc

The encryption and decryption chains are both extremely simple, taking advantage of lots of UNIX pipes, gpg, and base64

# encrypt
cat $input_script | gpg -q -c --cipher-algo AES256 | base64

# decrypt
match=\$(grep --text --line-number '^PAYLOAD:$' \$0 | cut -d ':' -f 1)
payload_start=\$((match + 1))
tail -n +$payload_start $0 | base64 --decode | gpg -q -d | $interpreter

Why Bourne?

BASH is super common, but Bourne is even more so and somewhat more POSIX-y, so since we want to create wrappers that are as portable as possible, why not Bourne? Plus they are both relatively compatible…

Since raziel weighs in at just under 60 SLOC, let’s just take a look at an annotated version of the source

#!/bin/sh
#
#                          d8b          888
#                          Y8P          888
#                                       888
# 888d888 8888b.  88888888 888  .d88b.  888
# 888P       `88b    d88P  888 d8P  Y8b 888
# 888    .d888888   d88P   888 88888888 888
# 888    888  888  d88P    888 Y8b.     888
# 888    `Y888888 88888888 888  `Y8888  888
#
# raziel - v0.0.3
#   self-decrypting shell scripts

## Main function
#    this builds the wrapper and payload and combines them
#    into a single Bourne shell script
raziel() {
  # Boring argument check stuff
  if [ "$#" -eq "0" ] || [ "$#" -eq "3" ] || [ "$#" -gt "4" ]; then
    echo "Usage: $0 PAYLOAD_FILE OUTPUT_FILE [-interpreter INTERPRETER]"
    echo "  default interpreter is \"sh -\", but can be set to anything (..?)"
    echo "  (e.g. \"python3\", \"ruby\", \"perl\", etc)"
    exit 1
  fi

  # Disable gpg_agent temporarily so we don't get the annoying
  # pop-up box...
  gpg_agent="$GPG_AGENT_INFO"
  GPG_AGENT_INFO=
  
  # Encrypt the content of the input script using gpg
  input_script=`cat $1 | gpg -q -c --cipher-algo AES256 | base64`
  
  # Restore gpg_agent information
  GPG_AGENT_INFO="$gpg_agent"
  
  # Get the filename of the output script
  output="$2"
  
  # Set the interpreter the input script should be piped to
  if [ "$#" -eq "4" ] && [ "$3" = "-interpreter" ]; then
    interpreter="$4"
  else
    interpreter="sh -"
  fi

  # Bourne shell wrapper that decrypts and executes the encrypted
  # payload
  run_template="#!/bin/sh
  # raziel loader - v0.0.2
  
  run_payload() {
    # Find the start of the 'PAYLOAD' section
    local match=\$(grep --text --line-number '^PAYLOAD:$' \$0 | cut -d ':' -f 1)
    
    # Set the start of the payload to the next line
    local payload_start=\$((match + 1))
    
    # Disable gpg_agent pop-up
    local gpg_agent="\$GPG_AGENT_INFO"
    GPG_AGENT_INFO=
    
    # Decrypt and execute payload with specified interpreter
    tail -n +\$payload_start \$0 | base64 --decode | gpg -q -d | $interpreter
    
    # Reset gpg_agent stuff
    GPG_AGENT_INFO="\$gpg_agent"
  }
  
  run_payload
  
  # Exit, this is very important as it will terminate the Bourne
  # interpreter before it begins to parse the payload
  exit 0"

  # Add wrapper to output file
  echo "$run_template" >>$output

# Add payload to output file
  echo "PAYLOAD:" >>$output
  echo "$input_script" >>$output

  # Make wrapper executable
  chmod +x $output
}

raziel "$@"

I mostly wrote raziel because it was the funnest way I could think of using this shell script trick, but if anyone comes up with an actually useful purpose please let me know! (I can be contacted via email)

Introducing theca

Over Christmas I decided it would be fun to learn Rust, a (somewhat) new systems-ish language championed by Mozilla (keep in mind the last time I used C, or really any systems language, was in 2006). Instead of jumping straight into a crazy custom stack project or something that would never get finished I decided to write a pretty simple tool I’ve been thinking about for quite a while.

No, it’s not a REST service, some GitHub integration, or an iOS application, and it has nothing to do with the cloud or data science…? Nope, the result of my efforts is a cross-platform (Linux/Darwin, and you can probably build it on Windows but I haven’t tried…) CLI note taking tool called theca (the name comes from a Greek compendium of myths and legends titled The Bibliotheca), how exciting! Now there is no real reason for theca to be written in Rust, in fact most of the codebase could probably be written in half the LOCs by using an OO-language, but where is the fun in that?! And hey, memo is written in C (plus just writing Python all the time can get pretty boring…). While some of the code is still rather dirty, and liable to change in the future, I haven’t changed the external functionality for quite a while and have no plans for any new features, so I decided now was as good a time as any to release it to the world.

View project on GitHub

This post will be a relatively quick roundup of some of the things you can do with theca, but I won’t cover everything. For a full (quite long) usage guide (with lots and lots of screenshots), as well as the TODO items etc., check out the README.md file in the GitHub repo.

So what does theca do? Well it’s pretty simple, it takes notes! Beyond that here is a cursory list of features

  • easy note profile management
  • plaintext or 256-bit AES encrypted profiles
  • easily synchronizable profiles
  • edit notes using CLI editor (set via $VISUAL or $EDITOR)
  • transfer notes between profiles simply
  • plaintext or JSON output modes
  • search notes by keyword or regex pattern
  • simple external integration

And what are these profiles you speak of? theca utilizes what I call profiles, which are, basically, just individual JSON files containing the notes for that profile and a flag indicating if the profile is encrypted. By default theca will look for profiles in either ~/.theca or the folder set in THECA_PROFILE_FOLDER, and will then read either the default profile file default.json, the profile file for the profile set in THECA_DEFAULT_PROFILE, or the profile file for the profile specified by -p PROFILE. Profile files follow the extremely simple naming scheme name.json.

Profiles are essentially used to segregate notes based on whatever reasoning you choose, e.g. work, dev, secrets, thoughts-about-squids, diary, etc., and can be either plaintext or encrypted.
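
So, assuming a couple of profiles exist, the profile folder and a few commands using them might look like this:

$ ls ~/.theca
default.json  work.json  thoughts-about-squids.json

# use the "work" profile for a single command (no command just lists its notes)
$ theca -p work

# or make it the default profile instead of default.json
$ export THECA_DEFAULT_PROFILE=work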

Getting theca

There are two (well three) ways to get theca

  1. Get the precompiled nightly binaries

    I currently build and package nightly binaries (if anything in the repo has changed, and both of my laptops are on the same network, and I’m awake…) for x86_64-unknown-linux-gnu, i686-unknown-linux-gnu, x86_64-apple-darwin, and i686-apple-darwin that you can download and install using the packaged install.sh script or by using the hosted get_theca.sh script. Nightly (and in the future stable) packages are currently hosted at https://static.bracewel.net/theca/dist/ and can be somewhat unstable (previous nightlies are stored in subdirectories by date if the current nightly doesn’t work for some reason, e.g. /theca/dist/theca-nightly-2015-02-13/ contains the very first packaged binaries I uploaded).

    a. Download and install manually

     # Choose the right architecture and platform and download from static.bracewel.net
     $ wget https://static.bracewel.net/theca/dist/theca-nightly-x86_64-unknown-linux-gnu.tar.gz
     ...
        
     $ tar vxzf theca-nightly-x86_64-unknown-linux-gnu.tar.gz
     ...
        
     $ cd theca-nightly-x86_64-unknown-linux-gnu
     $ ./install.sh
     ...
    

    b. Install using curl and get_theca.sh

     $ curl -s https://static.bracewel.net/theca/get_theca.sh | sh
    

    Currently get_theca.sh will download the most recent nightly package, but once I’ve released the first stable version of theca (1.0.0-stable) it will switch to automatically downloading and installing that package; at that point I’ll also add a --nightly flag which will allow you to revert get_theca.sh to the current behavior and download/install the most recent nightly.

    Some people may be (understandably) uncomfortable piping random scripts from curl into sh, so I’d suggest you check out the two-stage installation process get_theca.sh uses (in get_theca.sh and package-installer.sh, which is named install.sh in the packages) to calm your mind about what it is actually doing, or build directly from the source hosted on GitHub.

    You can also use the get_theca.sh script to uninstall theca if you wish it to no longer sully your system

     $ curl -s https://static.bracewel.net/theca/get_theca.sh | sh -s -- --uninstall
    
  2. Build the binary from source

    If you want to build from source you’ll probably want to get the latest nightlies of rustc and cargo; these can be easily acquired by running the following (or, again, by manually installing the binaries, building from source, using multirust, or whatever)

     $ curl -s https://static.rust-lang.org/rustup.sh | sh
    

    Building is then as easy as

     $ git clone https://github.com/rolandshoemaker/theca.git && cd theca
     ...
    
     $ ./build.sh build [--release]
     ...
    
     $ sudo bash tools/build.sh install [--release, --man, --bash-complete, --zsh-complete]
     ...
    

    You should refer to the README in the GitHub repo for what these flags do, but what most of them do should be fairly obvious.

Using theca

The available commands are relatively straightforward, but let’s quickly run through them

  • add - adds a note
  • edit - edits a note
  • del - delete single/multiple notes
  • search - searches notes by title/body using keyword/regex
  • <id> - displays note by id
  • new-profile - creates a new profile
  • list-profiles - lists all profiles in the current profile folder
  • encrypt-profile - encrypts the current profile
  • decrypt-profile - decrypts the current profile
  • transfer - transfers a note from the current profile to another
  • import - imports a note to the current profile from another

If no command is passed theca will by default list all the notes in the current profile. More information about required/optional arguments for each command can be found by running theca --help, man theca, or reading the README.md file in the GitHub repo.

A quick note on statuses and tags

Statuses are basically an extremely simple way of indicating the current status of a note (…yup pretty complicated) that takes the place of a more complex tagging system.

During initial development of theca I spent quite a bit of time trying to figure out which statuses I should include (or if I should allow completely custom statuses or none at all) and after playing with quite a few I ended up realising I only ever used three (well… two, if that).

  • No status at all (-n or --none, the default option for new notes)
  • Started (-s or --started)
  • Urgent (-u or --urgent)

These flags can be used when adding, editing, searching, and listing notes to either specify the status of the note or to filter lists by status. Tags can be replicated quite simply by prepending or appending a tag to a note, e.g. theca add "(THECA) something about theca", and then doing a keyword search for (THECA) to get all notes tagged as such.
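
A quick sketch of that workflow (treat the exact search invocation as an assumption and check theca --help):

# "tag" a note by sticking a keyword in the title...
$ theca add "(THECA) write a blog post about theca"

# ...then pull up everything carrying that tag later
$ theca search "(THECA)"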

Now back to how to use theca.

Let’s add a bunch of notes to our default profile! (I’ve omitted it in the screenshot but you can also use the --editor flag to drop to the editor set in either VISUAL or EDITOR to set the note body; there is a gif at the end of this section of it in action.)

Now let’s view some notes, maybe do a search? If you don’t specify any command theca will list all the notes in the current profile; this listing can be reversed (-r or --reverse), sorted by the last time a note was edited (-d or --datesort), limited to a certain number of notes (-l 5 or --limit 5), or output as JSON (-j or --json).

By the way, there are two formats theca will use when printing most data, expanded mode (the default) or condensed mode (the -c flag). Condensed mode is a much more efficient way of printing out notes but it’s quite stripped down and can take a bit to get used to. In either mode, if a note has an attached body the title will be prepended with (+).
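
Put together, the listing options above look something like this:

$ theca             # list every note in the current profile
$ theca -c -l 5     # condensed output, limited to 5 notes
$ theca -d -r       # sorted by last-edited date, reversed
$ theca -j          # the whole list as JSON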

Edit some notes?

And finally delete a bunch of notes.

Bonus --editor gif

Line formatter

theca contains a relatively simple line formatter that is used to render output in the console in a way that (hopefully) won’t wrap, meaning it can be used in small console windows even if you have very long notes.

Non-default and encrypted profiles

Profiles are basically used to segregate notes by content; it’s up to you to decide how to actually use them. By default theca looks for profiles in the directory ~/.theca and will use the default profile (which can be either plaintext or encrypted), but you can add additional profiles with the new-profile command and specify which profile you want to use with other commands using the -p PROFILE option. If you don’t want the default profile to be default you can change it by setting the environment variable THECA_DEFAULT_PROFILE to the name of the profile.

Notes can easily be moved between profiles using the import and transfer commands. You currently cannot transfer notes between two encrypted profiles or transfer notes from a plaintext profile to an encrypted profile; you can, however, import a note from a plaintext profile to an encrypted profile.

I’ve tended to (over the last few months) store general notes in the default profile and then have had separate profiles for projects e.g. theca-dev so notes that might need to stick around for a long time or are updated frequently don’t clutter up my general notes.

theca makes it relatively easy to create encrypted profiles and to encrypt/decrypt existing profiles. To create an encrypted profile you just need to use the -e flag, which will prompt you to enter your key (the key can also be specified using the -k KEY argument).

You can use the encrypt-profile and decrypt-profile commands to encrypt existing plaintext profiles and decrypt existing encrypted profiles, respectively.

-e and/or -k can be used with the rest of the commands (add, edit, del, etc) to interact with encrypted profiles (otherwise an error will be thrown).
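
As a rough sketch (the argument order here is an assumption):

# encrypt the current (default) profile, supplying the key on the command
# line instead of being prompted for it (argument order is an assumption)
$ theca encrypt-profile -k "squid facts"

# subsequent commands against an encrypted profile need -e (prompt) or -k KEY
$ theca -k "squid facts" add "a very secret note"
$ theca -k "squid facts"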

Syncing profiles

theca can take advantage of any existing syncing software you have set up (Dropbox, ownCloud, rsync, etc.) to share profiles between computers, since the profiles are just flat JSON files. The default profile folder from which theca reads profiles can be set via the environment variable THECA_PROFILE_FOLDER, so to sync profiles you just need to set this variable to the path of some folder being synced and store your profiles there. For instance I use Dropbox to sync my profiles, and have my THECA_PROFILE_FOLDER set like so

$ export THECA_PROFILE_FOLDER=/home/roland/Dropbox/.theca

Integration

Since profiles are basically just flat JSON text files (well, when not encrypted, but a simple Python key-derivation and decryption routine can be found in the README if you want to work with encrypted profiles) it’s relatively easy to write short scripts in your scripting language of choice (Python, Ruby, Perl, hell even BASH) to add, search, move, sort, transfer, and import notes, etc. etc., without having to add the feature to theca directly in Rust.

You can also output single notes, lists, and searches directly as JSON if you’d like to somehow tie that into your script by using the -j or --json flags.
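
For example, since plaintext profiles are just files on disk you can read them directly, or skip the file and take the JSON straight from theca:

# pretty-print the default plaintext profile with nothing but the standard library
$ python3 -m json.tool ~/.theca/default.json

# or capture theca's own JSON output for further mangling
$ theca -j > notes.json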

Here are some random ideas you might want to implement that have crossed my mind

  • Email note reminders based on syntax at end of note bodies (e.g. !remind [email protected] in 1 day/1 week/1 month or !remind [email protected] every week)
  • Email weekly profile summaries
  • Email reminders based on notes with Urgent or Started statuses that haven’t been updated in a while
  • Add notes from emails to a special account
  • Note archiver (transfer notes older than x to the profile archive)
  • Store reports from CRON jobs in notes

Contributing

Contribute to theca on GitHub

Please do! A lot of the code is still quite messy and, while it works, there are a number of places that could use a bit of TLC. Pull requests for clean-ups, new features, or pretty much anything are more than welcome. For now development happens in the master branch, but as soon as the stable Rust release comes out and I get around to releasing theca 1.0.0-stable, development will move to the dev branch. Nightlies will always be built from the dev branch, meaning they will be unstable-ish but will also contain the most recent features/bug fixes.