<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[httgp://]]></title><description><![CDATA[GP's own 127.0.0.1]]></description><link>https://httgp.com/</link><image><url>https://httgp.com/favicon.png</url><title>httgp://</title><link>https://httgp.com/</link></image><generator>Ghost 5.65</generator><lastBuildDate>Sat, 07 Mar 2026 08:54:37 GMT</lastBuildDate><atom:link href="https://httgp.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Signing commits from bot accounts and automation scripts in Github Actions]]></title><description><![CDATA[Learn how to sign and verify your Git commits in your Github Actions workflows.]]></description><link>https://httgp.com/signing-commits-in-github-actions/</link><guid isPermaLink="false">620fcd26dd5c147846c1629a</guid><category><![CDATA[GitHub]]></category><dc:creator><![CDATA[GP]]></dc:creator><pubDate>Fri, 18 Feb 2022 17:23:39 GMT</pubDate><content:encoded><![CDATA[<p><a href="https://withblue.ink/2020/05/17/how-and-why-to-sign-git-commits.html?ref=httgp.com">It is very important to sign your Git commits</a>.</p><p>Although it is quite easy to generate your own GPG key and use it to auto-sign all your Git commits, it is difficult to sign commits coming from automation scripts, bot accounts and CI steps. For example, many implicit <code>git</code> commits in Github Actions are made by the default <code>github-actions</code> bot account, and those commits are unsigned.</p><p>I recently ran into a compliance requirement that every commit on the main branch be signed and verified. 
We use <a href="https://semantic-release.gitbook.io/semantic-release/?ref=httgp.com">semantic-release</a> to make releases and commit the latest version back to our <code>package.json</code>. With the signing requirement, we had to remove semantic-release from our CI workflow, as it uses its own bot account for commits &#x2013; which are unsigned.</p><p>However, I was determined to bring this step back while remaining compliant, so I did just that.</p><!--kg-card-begin: markdown--><h3 id="step-1-%E2%80%93-set-up-your-gpg-keys">Step 1 &#x2013; Set up your GPG keys</h3>
<p>Follow <a href="https://docs.github.com/en/authentication/managing-commit-signature-verification/adding-a-new-gpg-key-to-your-github-account?ref=httgp.com">Github&apos;s guide on adding a new GPG key to your account</a> to first set up your keys.</p>
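<p>As a quick sketch (the key ID <code>ABC123DEF456</code> below is a placeholder for your own), generating and exporting the key from the command line looks like this:</p>
<pre><code class="language-bash"># Generate a new key pair interactively
gpg --full-generate-key

# Find the long key ID of the key you just generated
gpg --list-secret-keys --keyid-format=long

# Export the ASCII-armored private key; you&apos;ll store this as a secret in the next step
gpg --armor --export-secret-keys ABC123DEF456 &gt; bot-private.asc
</code></pre>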
<blockquote>
<p>Personally, I did not want to use my personal GPG keys to sign commits at work, so I <a href="https://github.com/join?ref=httgp.com">created a dedicated bot account</a> on Github. Once I did that, I <a href="https://docs.github.com/en/authentication/managing-commit-signature-verification/generating-a-new-gpg-key?ref=httgp.com">generated new GPG keys</a> for the bot account. I finally added the GPG keys to the bot&apos;s Github account as explained above. Make sure to securely note down the passphrase (you should use one and not leave it blank!) and the private &amp; public keys. You&apos;ll need these for the next step.</p>
</blockquote>
<h3 id="step-2-%E2%80%93-set-up-gpg-passphrase-private-key-as-secret">Step 2 &#x2013; Set up GPG passphrase &amp; private key as secret</h3>
<p>The GPG passphrase and the private key need to be set up as <a href="https://docs.github.com/en/actions/security-guides/encrypted-secrets?ref=httgp.com">encrypted secrets</a> in the Github repository of your choice. I named them <code>BOT_GPG_PASSPHRASE</code> and <code>BOT_GPG_PRIVATE_KEY</code>.</p>
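<p>If you use the <a href="https://cli.github.com/?ref=httgp.com">GitHub CLI</a>, setting these up might look like the sketch below (assuming the private key was exported to a local file called <code>bot-private.asc</code>); you can also paste the values into the repository settings UI instead:</p>
<pre><code class="language-bash"># Set the private key from the exported ASCII-armored file
gh secret set BOT_GPG_PRIVATE_KEY &lt; bot-private.asc

# You&apos;ll be prompted to paste the passphrase
gh secret set BOT_GPG_PASSPHRASE
</code></pre>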
<h3 id="step-3-%E2%80%93-configure-commits-to-be-signed-with-the-gpg-key">Step 3 &#x2013; Configure commits to be signed with the GPG key</h3>
<p>Adjust your Github Actions workflow to import the GPG key and use it to sign your commits.</p>
<pre><code class="language-yaml">name: Release

on:
  push:
    branches:
      - main

jobs:
  release:
    name: release
    runs-on: ubuntu-latest
    steps:
    - name: Checkout code
      uses: actions/checkout@v2
      with:
        persist-credentials: false # This is important if you have branch protection rules!
    - name: Import bot&apos;s GPG key for signing commits
      id: import-gpg
      uses: crazy-max/ghaction-import-gpg@v4
      with:
        gpg_private_key: ${{ secrets.BOT_GPG_PRIVATE_KEY }}
        passphrase: ${{ secrets.BOT_GPG_PASSPHRASE }}
        git_config_global: true
        git_user_signingkey: true
        git_commit_gpgsign: true
    - name: Change some files
      run: echo &apos;adding a new commit now&apos; &gt;&gt; README.md
    - name: Commit changes to README.md file
      run: git commit -m &quot;this is bot&quot; README.md
      env:
        GITHUB_TOKEN: ${{ secrets.OSLASH_BOT_GITHUB_TOKEN }}
        GIT_AUTHOR_NAME: ${{ steps.import-gpg.outputs.name }}
        GIT_AUTHOR_EMAIL: ${{ steps.import-gpg.outputs.email }}
        GIT_COMMITTER_NAME: ${{ steps.import-gpg.outputs.name }}
        GIT_COMMITTER_EMAIL: ${{ steps.import-gpg.outputs.email }}
</code></pre>
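<p>Once the workflow runs, you can sanity-check the result locally; the commit should show a good signature (and a &quot;Verified&quot; badge on Github):</p>
<pre><code class="language-bash"># Inspect the signature on the latest commit
git log --show-signature -1

# Or have git verify it directly
git verify-commit HEAD
</code></pre>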
<p>Here&apos;s a working example of such a workflow &#x2013; <a href="https://github.com/getoslash/eslint-plugin-tap/blob/d49550e6516c939a7791ba80c2e378dca17dc514/.github/workflows/release.yml?ref=httgp.com">getoslash/eslint-plugin-tap</a>, and a <a href="https://github.com/getoslash/eslint-plugin-tap/commit/6dae5d9aa4f7ff14636ea3d3b91f264d2946433f?ref=httgp.com">commit</a> that was made from an automated CI step.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Why I've had 7 jobs in 11 years]]></title><description><![CDATA[My personal experiences "job-hopping", and why it isn't always so bad. I also explore some ideas for employers to build a better workplace.]]></description><link>https://httgp.com/why-i-have-had-7-jobs-in-11-years/</link><guid isPermaLink="false">60a10d52e8fceb445b2b51f2</guid><category><![CDATA[Lessons]]></category><dc:creator><![CDATA[GP]]></dc:creator><pubDate>Tue, 18 May 2021 10:45:57 GMT</pubDate><media:content url="https://httgp.com/content/images/2021/05/pexels-henri-mathieusaintlaurent-5898311.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://httgp.com/content/images/2021/05/pexels-henri-mathieusaintlaurent-5898311.jpg" alt="Why I&apos;ve had 7 jobs in 11 years"><p>At a glance, that can look bad. Any potential employer is going to think <em>&quot;Oh this guy has just been chasing money and/or a promotion!&quot;</em>. Delve deeper, and you&apos;ll know my story, and I&apos;m sure countless others&apos;, is riddled with toxic workplaces and awful bosses.</p><h3 id="my-little-journey">My little journey</h3><p>I got my first taste of computers when I was around 10, and I started building simple quiz apps in Visual Basic using a friend&apos;s computer (we couldn&apos;t afford one). When I went to high school, I finally got my own computer and started building tiny apps with C#; I was also <a href="https://lifehacker.com/google-dns-helper-offers-no-commitment-google-dns-try-o-5431251?ref=httgp.com">featured on LifeHacker</a>. I built websites for college events using Flash and PHP, and for my final year project, I built a robot that could be driven via commands from a <a href="https://wikipedia.org/wiki/Serial_port?ref=httgp.com">serial port</a>. I finally landed a job straight out of college at one of the most popular software companies (at the time) in the country.</p><p>I spent 3 years at my first job. 
I was young, and I was learning some cool stuff &#x2014; I was part of a research group that worked closely with IBM to come up with a plan to port an incredibly popular core banking product&apos;s database from Oracle to DB/2. But at a large multinational corporation, it became apparent that I wouldn&apos;t be able to make an impact for many years, and I was being criminally underpaid. I had been teaching myself Python on the side (I&apos;d written a bunch of scripts to automate parsing of DB/2 logs, cutting down the time my team was spending on that effort by almost 90%), and I was ready to go look for Python jobs.</p><p>And I was a Python/Django/jQuery developer for a while. Node.js was starting to become a thing, and I became a Node.js developer and found a job as one. I was also starting to work on Angular. Soon I ended up at a fantastic place that would become my engineering foundation and my baptism-by-fire; I was working on Scala, Kafka, Mesos, Ansible, Terraform &amp; Packer at scale. I got to work on unbelievably cool stuff, like working around <a href="https://paambaati.github.io/rendering-at-scale/?ref=httgp.com#/3">bot mitigation techniques</a> that were sold by Distil and Akamai. This was also where I was finally in a room full of people smarter than me. The company, however, ran out of runway and was soon acquired by somebody else, and the transition was incredibly painful. I left the job really hurt, and hastily picked another one as a rebound.</p><p>Like most rebound relationships, it did not last. It was an incredibly toxic workplace &#x2014; one of the founders was an insufferable bully who publicly berated people, pay disparity was huge, personal boundaries were disrespected and delivery expectations were so stressful that at some point I started <em>literally</em> losing my hair. I quit that job in 7 months.</p><p>I found another job, and by this time, experience and wisdom informed what I specifically wanted. 
I wanted a job where &#x2014;</p><ol><li>Everybody built stuff in the open.</li><li>People would be treated with kindness and compassion.</li><li>I could build inclusive and diverse teams filled with smart people who were paid equitably.</li><li>I could <a href="https://httgp.com/how-to-interview-a-senior-engineer/">attempt to fix the fundamentally-broken hiring process</a>.</li><li>I could help solve interesting problems.</li></ol><p>Armed with this list of goals, I started work in earnest at my last job. While I spent the first year entirely as the sole contributor building the platform frontend, it was soon time to grow the team, and I&apos;d soon find out why I&apos;d leave this job too; management reneged on promises of equitable pay, information trickled down from the top, new product managers were solving problems that didn&apos;t exist, and I was frequently left out of hiring decisions and other important meetings.</p><h3 id="whos-fault-is-it-anyway">Whose Fault Is It Anyway?</h3><p>When trying to analyze what <em>I</em> could&apos;ve done to solve these problems, I&apos;m constantly reminded of what an old boss told me &#x2014;</p><blockquote>&quot;You&apos;re just running away from your problems. If something isn&apos;t going your way, it&apos;s your responsibility to fix it!&quot;</blockquote><p>There is <em>some</em> truth to this &#x2014; I was technically running away from those problems. But where I disagree is that it was somehow <em>my</em> problem. In that particular case, I decided to leave because I was brought in to modernize and turn a maintenance shop into a development shop. This meant I had to bring in new processes, mentor existing employees and try to shape them into full-fledged engineers, and get a hiring budget to attract good talent. 
I wasn&apos;t able to do any of this because the company&apos;s overseas branch didn&apos;t like us doing any new development, the team was very resistant to all the new changes and I couldn&apos;t hire anyone I liked. Throughout all of this, I wasn&apos;t getting any support from the people who hired me either, so I was just stuck; until, of course, I decided to leave.</p><blockquote>People don&apos;t leave bad jobs, they leave bad managers.</blockquote><p>Should one try to solve problems at their organization? Yes, of course. Is my team falling behind on schedule? I&apos;ll try to understand why we&apos;re late, identify bottlenecks and try to fix those. Is someone on my team not being productive? I&apos;ll talk to them, see if they need a break or therapy, and I&apos;ll help them get that. But I can do all of this only if I am supported, and know that the company cares about this too. Maybe it is my <a href="https://www.adhd.org.nz/adhd-and-an-unusual-sense-of-fairness.html?ref=httgp.com">ADHD brain&apos;s acute sense of fairness</a>, but I need my employer to also care about some of the things I care about &#x2014; diversity, inclusivity, fair &amp; considerate hiring processes, kindness and compassion not just for customers but for employees too, mental health wellness and equitable pay. If they&apos;re only focussed on turning a profit (like this one job I had where a core value, enshrined in all their decor &amp; company swag, was &quot;Results over reason&quot;), I&apos;d be incredibly miserable there; you would too.</p><h3 id="end-of-the-rope">End of the rope</h3><p>At every job, I&apos;ve fought very hard to fix those things. I&apos;ve been explicit about how I want certain things to be done. I&apos;ve pleaded, begged and persuaded bosses to treat and pay people better. 
I&apos;ve battled long and hard for better working hours for my teams, and I&apos;ve learnt new ways of talking to non-technical folks to help them understand and appreciate what software engineers do. I&apos;ve had honest and thoughtful responses for anyone hiring me who had concerns about me being a &quot;flight risk&quot;. All this said, I&apos;ve left most of those jobs when I could no longer pay the cost of trying to fix everything.</p><h3 id="stage-5-acceptance">Stage 5: Acceptance?</h3><p>My partner keeps telling me that the only way I could find happiness at a job is if I ran my own company and did everything the way I want it to be.</p><p>But I&apos;ve seen great employers that know how to treat their folks really well. I&apos;ve been a part of at least one team where I&apos;ve experienced a version of happiness that is close to my ideal. And I refuse to believe that I can&apos;t recreate that again.</p><p>That said, I do realize that every organization, at the end of the day, exists to generate wealth for its founders, management and shareholders. Will a company exit by selling to someone who pays the shareholders less while elevating the employees? Most probably not. I&apos;m trying to come to grips with this hard reality by detaching more from my company as an identity and by looking for happiness outside of my job. I am also advising all of my teammates to do the same &#x2014; I encourage them to pursue fulfilling hobbies, to exercise and eat right, to spend more time with their families, friends &amp; pets and to volunteer.</p><h3 id="how-to-not-repeat-my-mistakes-and-instead-make-your-own">How to not repeat my mistakes and instead make your own</h3><p>So how does one avoid the traps of a bad job in the first place? I&apos;ve learnt that there is no hard science to this, but I now do a few things &#x2014;</p><ol><li>When approaching a new job, I do a ton of research into the company and its founders. What is their story? 
What does their social media look like? Are there any obvious red flags? Do they engage in the things I care about?</li><li>I read their job listings. Do they talk about the realities of the job? (<a href="https://www.notion.so/Mighty-is-hiring-945d3168d3e34a37883ca4d823ed734f?ref=httgp.com">Mighty&apos;s hiring document</a> is a great example where they&apos;re refreshingly honest about the importance of raw hours). Do they have a near-future roadmap of the role?</li><li>When I start talking to the founders, I ask pointed questions about their mission and their long-term goals, their exit strategy, where they see me adding value immediately, in what ways the founders (if there&apos;s more than one) are similar and different, what their team could be doing better right now, their CSR plan, whether they&apos;ve had to fire someone (and if yes, why), how they measure employee happiness (and how they define it), how much support they provide for new parents and how they plan to do appraisals. I&apos;m always making jokes, so I see if they get my sense of humour.</li><li>When interviewing, I also ask to meet the teams I&apos;ll be working with. This gives me an opportunity to understand more about the people themselves and to start building a relationship.</li><li>I am very honest about all the reasons I&apos;ve quit past jobs, and I explicitly state the things I want and deeply care about. 
I&apos;ve had founders nod their heads along at this stage (because they want me on board, so they&apos;ll agree to anything I say), so I&apos;m extra careful here about probing deeper and making sure they&apos;re earnest and actually mean it.</li><li>If there are any red flags along the way, I try not to fall prey to the sunk cost fallacy (&quot;Oh I&apos;ve spent so much time in this process, I&apos;d hate to start over again!&quot;) and still withdraw from the process.</li><li>I do my own due diligence &#x2014; I talk to current and previous employees if possible, and ask specific questions about the company culture and the reasons for their departure.</li><li>I listen to my gut. If something feels off, I walk away.</li></ol><h3 id="a-note-on-being-able-to-afford-being-this-way">A note on being able to afford being this way</h3><p>I have to point out that this isn&apos;t financially viable in the long run, nor would it even be possible in the first place if I weren&apos;t already incredibly privileged. I would be remiss if I didn&apos;t acknowledge the things that have allowed me to take such principled decisions.</p><ol><li>My parents were first-generation college (or even school) graduates, and they worked hard to give me a middle-class upbringing. I never had to worry about food, clothing, a roof over my head, school fees, saving for a sibling&apos;s wedding or taking care of my parents. Considering all of this, I was incredibly privileged to have a stable household.</li><li>My partner is employed, and we don&apos;t have kids. 
Our <a href="https://www.investopedia.com/terms/d/dinks.asp?ref=httgp.com">DINK</a> lifestyle has de-risked this a lot.</li><li>I&apos;m confident in my employability skills, so I could quit and take breaks in between jobs without fretting over finding a new one.</li></ol>]]></content:encoded></item><item><title><![CDATA[Conditional AWS EC2 resources in Terraform]]></title><description><![CDATA[Learn how to set up AWS EC2 on-demand and spot resources conditionally the right way, down to your EC2 tags.]]></description><link>https://httgp.com/conditional-ec2-resources-terraform/</link><guid isPermaLink="false">5e70887abbfe303945cf7ffa</guid><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[Code]]></category><category><![CDATA[DevOps]]></category><dc:creator><![CDATA[GP]]></dc:creator><pubDate>Tue, 17 Mar 2020 09:53:38 GMT</pubDate><media:content url="https://httgp.com/content/images/2020/03/terraform-conditional-resources-featured-image.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: html--><div class="flex-centered">
	<img data-src="/content/images/2020/03/terraform-conditional-resources.svg" class="lazyload blur-up" alt="Conditional AWS EC2 resources in Terraform" width="400" height="400" src="data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgMCA0MDAgNDAwIj48ZmlsdGVyIGlkPSJiIj48ZmVHYXVzc2lhbkJsdXIgc3RkRGV2aWF0aW9uPSIxMiIgLz48L2ZpbHRlcj48cGF0aCBmaWxsPSIjYzNjM2NjIiBkPSJNMCAwaDQwMHY0MDBIMHoiLz48ZyBmaWx0ZXI9InVybCgjYikiIHRyYW5zZm9ybT0idHJhbnNsYXRlKC44IC44KSBzY2FsZSgxLjU2MjUpIiBmaWxsLW9wYWNpdHk9Ii41Ij48cGF0aCBmaWxsPSIjNjg2ODgwIiBkPSJNMjM3IDEyOEw4NSAxMSA1NCAyNDN6Ii8+PHBhdGggZD0iTTEzOCAxOTNoOTF2MjBoLTkxeiIvPjxlbGxpcHNlIGZpbGw9IiNmZmYiIHJ4PSIxIiByeT0iMSIgdHJhbnNmb3JtPSJyb3RhdGUoLS40IDc4NyAtMjAxMDkpIHNjYWxlKDIxMy4zODk0MyAyMy43MDg2NikiLz48ZWxsaXBzZSBmaWxsPSIjZmZmIiBjeD0iMTMzIiBjeT0iMjUxIiByeD0iMjU1IiByeT0iMjYiLz48cGF0aCBmaWxsPSIjMzEzMTVlIiBkPSJNMzYgNThoNTV2OTdIMzZ6Ii8+PHBhdGggZmlsbD0iIzBiMDk3NSIgZD0iTTIyOSA4Ny44bC01NC4yLTYyLjVMNjEgNDEuOGw5MS4zIDguM3oiLz48ZWxsaXBzZSBmaWxsPSIjZmZmIiByeD0iMSIgcnk9IjEiIHRyYW5zZm9ybT0ibWF0cml4KC0xNy44MTcxIC43MTU2IC02LjQ2MDU3IC0xNjAuODU0MDcgMjQxLjcgMTAyLjcpIi8+PGVsbGlwc2UgZmlsbD0iI2ZmZiIgcng9IjEiIHJ5PSIxIiB0cmFuc2Zvcm09Im1hdHJpeCgyLjgyOTE1IDEwMC43MjA4MyAtOC4wNDU2MSAuMjI2IDY2LjcgOTUpIi8+PC9nPjwvc3ZnPg==">
</div>
<img src="https://httgp.com/content/images/2020/03/terraform-conditional-resources-featured-image.png" alt="Conditional AWS EC2 resources in Terraform"><p></p><!--kg-card-end: html--><!--kg-card-begin: markdown--><p>One of the things that every Terraform developer will end up having to do at some point is conditionally bringing up <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-on-demand-instances.html?ref=httgp.com">on-demand</a> or <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html?ref=httgp.com">spot</a> instances in their AWS infrastructure, and I&apos;m going to show you how to do that.</p>
<h3 id="thechallenge">The challenge</h3>
<p>The main challenge in doing this is not the bring-up itself, but propagating your EC2 tags down to the instance. This is because to bring up a spot instance, you&apos;ll need to use <a href="https://www.terraform.io/docs/providers/aws/r/spot_instance_request.html?ref=httgp.com"><code>aws_spot_instance_request</code></a>, which takes tags as an argument but applies them to the spot request itself and not the instance.</p>
<h3 id="settingstagsonthespotinstance">Setting tags on the spot instance</h3>
<p>The solution involves using <a href="https://www.terraform.io/docs/configuration/resources.html?ref=httgp.com#count-multiple-resource-instances-by-count"><code>count</code></a> to conditionally bring up the instance, creating an IAM policy that allows the instance to create tags, and then using <a href="https://www.terraform.io/docs/provisioners/remote-exec.html?ref=httgp.com"><code>remote-exec</code></a> to SSH into the instance and have it create its own tags.</p>
<h4 id="prerequisites">Prerequisites</h4>
<ol>
<li>AWS AMI with <a href="https://aws.amazon.com/cli/?ref=httgp.com">AWS CLI</a> installed.</li>
<li>Terraform &gt;= 0.12.</li>
</ol>
<h5 id="variablestf"><code>variables.tf</code></h5>
<pre><code class="language-hcl">variable &quot;instance_type&quot; {
  type        = string
  description = &quot;Instance type for your AWS instance&quot;
}

variable &quot;ami_id&quot; {
  type        = string
  description = &quot;AMI ID for your AWS instance&quot;
}

variable &quot;ssh_key&quot; {
  type        = string
  description = &quot;EC2 SSH keypair for your AWS instance&quot;
}

variable &quot;instance_lifecycle&quot; {
  type        = string
  description = &quot;Lifecycle of your AWS instance. Example: ondemand, spot&quot;
  default     = &quot;spot&quot;
}

variable &quot;region&quot; {
  type        = string
  description = &quot;AWS region of your instance&quot;
}
</code></pre>
<h5 id="ec2tf"><code>ec2.tf</code></h5>
<pre><code class="language-hcl">resource &quot;aws_instance&quot; &quot;ec2_ondemand&quot; {
  count                       = var.instance_lifecycle == &quot;ondemand&quot; ? 1 : 0
  ami                         = var.ami_id
  instance_type               = var.instance_type
 
  tags = {
    Name        = &quot;my-instance&quot;
    Lifecycle   = var.instance_lifecycle
  }
}

resource &quot;aws_spot_instance_request&quot; &quot;ec2_spot&quot; {
  count                       = var.instance_lifecycle == &quot;spot&quot; ? 1 : 0
  wait_for_fulfillment        = true
  spot_type                   = &quot;one-time&quot;
  ami                         = var.ami_id
  instance_type               = var.instance_type

  tags = {
    Name        = &quot;my-instance&quot;
    Lifecycle   = var.instance_lifecycle
  }

  # Workaround to make sure the spot request tags are propagated down to the instance itself.
  provisioner &quot;remote-exec&quot; {
    connection {
      user        = &quot;ubuntu&quot; # This might change for your OS of choice.
      host        = self.private_ip # Use self.public_ip if Terraform runs outside your VPC.
      private_key = file(&quot;~/.ssh/${var.ssh_key}.pem&quot;)
    }

    inline = [
      join(&quot;&quot;, formatlist(&quot;aws ec2 create-tags --resources ${self.spot_instance_id} --tags Key=\&quot;%s\&quot;,Value=\&quot;%s\&quot; --region=${var.region}; &quot;, keys(self.tags), values(self.tags)))
    ]
  }
}

# If you&apos;re going to refer to the created instance anywhere else, you should now use `data.aws_instance.ec2_instance`
data &quot;aws_instance&quot; &quot;ec2_instance&quot; {
  depends_on = [aws_instance.ec2_ondemand, aws_spot_instance_request.ec2_spot]
  filter {
    name   = &quot;tag:Name&quot;
    values = [&quot;my-instance&quot;]
  }
  filter {
    name   = &quot;tag:Lifecycle&quot;
    values = [var.instance_lifecycle]
  }
}
</code></pre>
<h5 id="iamtf"><code>iam.tf</code></h5>
<pre><code class="language-hcl">resource &quot;aws_iam_role&quot; &quot;iam_role&quot; {
  name = &quot;my-iam-role&quot;

  lifecycle {
    create_before_destroy = true
  }

  force_detach_policies = true

  assume_role_policy = &lt;&lt;EOF
{
  &quot;Version&quot;: &quot;2012-10-17&quot;,
  &quot;Statement&quot;: [
    {
      &quot;Action&quot;: &quot;sts:AssumeRole&quot;,
      &quot;Principal&quot;: {
        &quot;Service&quot;: &quot;ec2.amazonaws.com&quot;
      },
      &quot;Effect&quot;: &quot;Allow&quot;,
      &quot;Sid&quot;: &quot;&quot;
    }
  ]
}
EOF

}

resource &quot;aws_iam_policy&quot; &quot;iam_role_policy_ec2tags&quot; {
  name        = &quot;my-instance-create-tags-policy&quot;
  description = &quot;Allow EC2 instances to create Tags for themselves&quot;

  # The policy is for &apos;*&apos; rather than the own instance because EC2 doesn&apos;t allow us to filter by self.
  policy = &lt;&lt;EOF
{
    &quot;Version&quot;: &quot;2012-10-17&quot;,
    &quot;Statement&quot;: [
        {
          &quot;Action&quot;: &quot;ec2:CreateTags&quot;,
          &quot;Effect&quot;: &quot;Allow&quot;,
          &quot;Sid&quot;: &quot;AllowEC2InstanceCreateTags&quot;,
          &quot;Resource&quot;: &quot;*&quot;
        }
    ]
}
EOF

}

resource &quot;aws_iam_policy_attachment&quot; &quot;iam_role_policy_ec2tags&quot; {
  name       = &quot;iam_role_policy_ec2tags&quot;
  roles      = [aws_iam_role.iam_role.name]
  policy_arn = aws_iam_policy.iam_role_policy_ec2tags.arn
  lifecycle {
    create_before_destroy = true
  }
}

# NOTE: The role above only takes effect on an instance through an instance
# profile; set `iam_instance_profile = aws_iam_instance_profile.instance_profile.name`
# on the instance resources so they&apos;re allowed to run `aws ec2 create-tags`.
resource &quot;aws_iam_instance_profile&quot; &quot;instance_profile&quot; {
  name = &quot;my-instance-profile&quot;
  role = aws_iam_role.iam_role.name
}

</code></pre>
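<p>With all of these files in place, switching between lifecycles is a single variable change; for example &#x2014;</p>
<pre><code class="language-bash"># Brings up a spot instance (the default)
terraform apply

# Brings up an on-demand instance instead
terraform apply -var=&quot;instance_lifecycle=ondemand&quot;
</code></pre>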
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Building a platform team at a fast-moving startup]]></title><description><![CDATA[My opinions on how to build a strong platform team for scale at a rapid-growth and fast-moving startup (or any organization).]]></description><link>https://httgp.com/building-a-platform-team/</link><guid isPermaLink="false">5daeb9ea4890c5723324062f</guid><category><![CDATA[Lessons]]></category><dc:creator><![CDATA[GP]]></dc:creator><pubDate>Tue, 22 Oct 2019 09:31:00 GMT</pubDate><media:content url="https://httgp.com/content/images/2019/10/platform.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: html--><div class="flex-centered">
    
    <img data-src="/content/images/2019/10/platform.svg" alt="Building a platform team at a fast-moving startup" class="lazyload blur-up" height="512px" width="512px" src="data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgMCA1MTIgNTEyIj48ZmlsdGVyIGlkPSJiIj48ZmVHYXVzc2lhbkJsdXIgc3RkRGV2aWF0aW9uPSIxMiIgLz48L2ZpbHRlcj48cGF0aCBmaWxsPSIjZDRkN2U5IiBkPSJNMCAwaDUxMnY1MTJIMHoiLz48ZyBmaWx0ZXI9InVybCgjYikiIHRyYW5zZm9ybT0ibWF0cml4KDIgMCAwIDIgMSAxKSIgZmlsbC1vcGFjaXR5PSIuNSI+PGVsbGlwc2UgZmlsbD0iIzA3MThiNiIgcng9IjEiIHJ5PSIxIiB0cmFuc2Zvcm09Im1hdHJpeCguOTcyOTUgLTcxLjY4Nzk0IDE5LjI5MDE1IC4yNjE4IDQ4LjcgMjE1LjUpIi8+PHBhdGggZmlsbD0iIzAwMTBiOCIgZD0iTTE1NiAxNTRoNDB2OTZoLTQweiIvPjxlbGxpcHNlIGZpbGw9IiMxNDNkY2UiIGN4PSIxMDUiIGN5PSIxNjIiIHJ4PSIxMTciIHJ5PSIxMiIvPjxlbGxpcHNlIGZpbGw9IiNmZmYiIHJ4PSIxIiByeT0iMSIgdHJhbnNmb3JtPSJyb3RhdGUoNDguMiA1NCAyMDcpIHNjYWxlKDczLjUxNTg3IDgyLjE3MDUpIi8+PGVsbGlwc2UgZmlsbD0iI2ZmZiIgcng9IjEiIHJ5PSIxIiB0cmFuc2Zvcm09Im1hdHJpeCgtMTguMTIwOTQgODYuODk0NzYgLTM1LjgyNzg2IC03LjQ3MTUgMjYuMSA2NS42KSIvPjxlbGxpcHNlIGZpbGw9IiNmZmYiIGN4PSIyNDYiIGN5PSIyMDQiIHJ4PSI1MSIgcnk9IjUxIi8+PGVsbGlwc2UgZmlsbD0iI2ZmZiIgcng9IjEiIHJ5PSIxIiB0cmFuc2Zvcm09InJvdGF0ZSgxNzYuOCA1Mi4yIDExMCkgc2NhbGUoNDEuNzc3MTYgMjcuMjAwNTIpIi8+PHBhdGggZmlsbD0iIzlmOTQ4ZSIgZD0iTTU5LjUgMzdoMzl2NjBoLTM5eiIvPjwvZz48L3N2Zz4=">
</div>
<br><!--kg-card-end: html--><!--kg-card-begin: markdown--><h3 id="whatisaplatformteam">What is a platform team?</h3>
<img src="https://httgp.com/content/images/2019/10/platform.png" alt="Building a platform team at a fast-moving startup"><p>Although most organizations tend to describe a platform team slightly differently, on a fundamental level, a platform team  is the team that has centralized expertise and ownership. At the same time, it should allow a healthy amount of control and customization by product teams. The platform team is also broadly responsible for improving the organization&apos;s flexibility and developer productivity.</p>
<h3 id="howtobuildaplatformteam">How to build a platform team</h3>
<h5 id="identifyrolesresponsibilities">Identify roles &amp; responsibilities</h5>
<p>Start by answering questions like: what are the team&apos;s roles (which pieces of the software puzzle will they build/own/maintain)? What are their responsibilities (what they&apos;re accountable for, what they can do to build working relationships with other teams, how often they communicate with other stakeholders, etc.)?</p>
<h5 id="writeamissionandorvisionstatement">Write a mission and/or vision statement</h5>
<p>The team&apos;s mission statement should help align the team on its core goals and build a sense of camaraderie and shared ethos.</p>
<h5 id="poachgrowtalentinternally">Poach &amp; grow talent internally</h5>
<p>During an organization&apos;s growth phase, there are often pockets of tribal knowledge and <abbr title="Subject Matter Experts">SMEs</abbr> that are invaluable in understanding a system. They are the flesh-and-blood documentation of that system, and can add value to a platform team much more quickly than a new hire.</p>
<h5 id="buildtrust">Build trust</h5>
<p>Many engineers are resistant to change and care deeply about the things they&apos;ve built. Oftentimes the platform team comes in and re-architects or changes things, leading to some friction. The platform team needs to build trust before it can get buy-in from the development team; after all, developers are the platform team&apos;s customers.</p>
<h5 id="laydownbestpractices">Lay down best practices</h5>
<p>Unlike a development team, a platform team counts very good documentation and detailed guides among its stated goals. This helps immensely in &#x2014;</p>
<ol>
<li>Boosting developer productivity.</li>
<li>Clearly communicating the team&apos;s vision.</li>
<li>Providing a solid framework to iterate upon.</li>
</ol>
<p>Documentation is the lifeblood of the team and it cannot be seen as a separate task in a sprint; it has to be given equal importance as (sometimes even greater than) code.</p>
<h5 id="takeownershipandbuildaccountability">Take ownership and build accountability</h5>
<p>As the platform team starts to take shape, taking voluntary ownership of either legacy systems or new systems that they&apos;ve built is important. This frees up developers to focus on external customers&apos; deliverables. Becoming accountable shows that the team is growing and maturing, and that they&apos;re a key part of the chain of trust.</p>
<h5 id="establishacommunicationapi">Establish a communication API</h5>
<p>Crisp &amp; inclusive communication of what the team is working on, what it plans to achieve, the risks it perceives and helpful tips to developers (even as <abbr title="Just In Time">JIT</abbr>) at a predictable cadence is very important to build more trust and effectively participate in larger efforts inside the organization. This can also double as free PR for the team!</p>
<h5 id="createafeedbackloop">Create a feedback loop</h5>
<p>Every member of the team can appreciate &#x2014;</p>
<ol>
<li>An external frame of reference.</li>
<li>Validation for their ideas.</li>
<li>Gratitude from seeing their work positively impact a developer or a product.</li>
<li>Inputs that drive them to build cooler stuff.</li>
</ol>
<p>A feedback loop keeps the platform team engaged with their customers (developers and product teams). It offers valuable insight into their core demographic, and helps keep them from being siloed away from the rest of the organization.</p>
<h5 id="publishdatametricsandinternalabbrtitleservicelevelagreementsslasabbr">Publish data, metrics and internal <abbr title="Service Level Agreements">SLAs</abbr></h5>
<p>The team should regularly publish data on &#x2014;</p>
<ol>
<li>Development progress.</li>
<li>Technical backlog.</li>
<li>Issues fixed.</li>
<li>Performance improvements made.</li>
<li>Security threats mitigated.</li>
</ol>
<p>A dashboard or a status page can help communicate the team&apos;s effectiveness to everyone at the organization. Additionally, at a certain level of maturity and stakeholder confidence, it is worthwhile setting up an internal SLA that holds the team to a higher standard.</p>
<h5 id="enableaselfserveplatform">Enable a self-serve platform</h5>
<p>As the tools and the platform itself matures, they should evolve in the direction of automation. This becomes a force-multiplier for the team itself, making room for even faster iterations, bigger goals and a more stress-free environment for the platform engineer.</p>
<h3 id="challengespossiblesolutions">Challenges &amp; possible solutions</h3>
<p>While there are a lot of compelling arguments for building a platform team, there are also a few challenges it can face.</p>
<h5 id="disillusionmentdisenfranchisement">Disillusionment &amp; disenfranchisement</h5>
<p>More often than not, the platform team is a disruptor. It has the wisdom of hindsight and knowledge of past mistakes to learn from, and it aims to fix a lot of things. This can understandably lead to a sense of disillusionment (and perhaps a sense of betrayal) in some engineers that wrote legacy code.</p>
<p>This can be handled by bringing on those same engineers to the platform team either full-time or as loans, giving them an opportunity to fix their old mistakes (because a good engineer will always do that happily).</p>
<h5 id="fragmentationandmoresilos">Fragmentation and more silos</h5>
<p>A classic growing pain in any startup is fragmentation of expertise and ever-so-slightly increasing over-specialisation. As more teams are formed with a more specific focus, knowledge increasingly tends to become hidden in pockets.</p>
<p>To be honest, I personally don&apos;t have a good solution for this problem. Perhaps it is wiser to accept this as a part of growth and evolution. What can make it easier to gain wider acceptance is clearly setting expectations upfront. Cross-functional knowledge sharing can be built with more documentation (perhaps tailored to team-specific personas) and more forced collaboration (internal hackathons, regular presentations, etc.).</p>
<h5 id="lossindevelopmentvelocity">Loss in development velocity</h5>
<p>High development velocity is highly favoured in the world of agile startups. Developers tend to view any additional process as bureaucracy and a waste of time. The platform team has its own deadlines, but also has to efficiently manage the time that the development teams&apos; output spends in its quality gates (compliance and regulatory requirements, specific Dev(Sec)Ops tool checks, etc.).</p>
<p>Striking a good balance between keeping the development teams moving forward and making sure all best-practice checks have passed is a skill that can only be developed in-house, and it is very finely tuned to the organization&apos;s culture and its people.</p>
<h5 id="growingtechnicaldebt">Growing technical debt</h5>
<p>Technical debt is the bane of developers&apos; existence, and we all wish there was a magic wand we could wave to pay it all off. However, in the real world, the onus of clearing it often falls on the platform team. This can be a double-edged sword, as it can be a draw for some engineers, but a deterrent for others.</p>
<p>This can be solved via regular internal hackathons based on debt. Perhaps additional incentives (not necessarily in monetary form) can be given to independent contributors and teams that clear their backlog. This can also be gamified &#x2014; for example, a team can gift anybody <code>X</code> <em>&quot;brownie points&quot;</em> for clearing off some debt item; <em>&quot;brownie points&quot;</em> is a placeholder for an incentive of some kind.</p>
<h5 id="impedancemismatchesinprioritization">Impedance mismatches in prioritization</h5>
<p>Following best practices will not always be the top priority for a lot of development teams, while it is the other way around for a platform team. There will almost always be some impedance mismatch between the two teams because of this, and every member has to be sensitive to it.</p>
<p>Project managers can help strike a balance here. The platform team can also groom champions or ambassadors that can rally the development teams around compliance with the best-practices checklist.</p>
<h5 id="trustbuilding">Trust-building</h5>
<p>Building trust is of prime importance, doubly so for a new team. Compounding this issue is that the platform team has a tangentially different focus from the other teams, requiring far more effort in building trust and gaining equal footing at the organization.</p>
<p>This will probably be a slow process (especially if there are new hires on the team), but can be achieved with heavy focus on transparency, relying on data as truth, seeking buy-in from other teams and involving them in platform decisions and finally some good old dogged persistence.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Setting up log rotation for Mesos agents]]></title><description><![CDATA[Learn how to correctly set up logrotate for Mesos agent container sandboxes.]]></description><link>https://httgp.com/setting-up-log-rotation-for-mesos-agents/</link><guid isPermaLink="false">5d6778a57b668a10faf17bb3</guid><category><![CDATA[Mesos]]></category><category><![CDATA[DevOps]]></category><dc:creator><![CDATA[GP]]></dc:creator><pubDate>Mon, 02 Sep 2019 04:45:34 GMT</pubDate><media:content url="https://httgp.com/content/images/2019/08/mesos-agent-logrotate.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><div class="flex-centered">
    <img data-src="/content/images/2019/08/mesos-agent-logrotate.svg" alt="Setting up log rotation for Mesos agents" class="lazyload blur-up" height="180px" width="320px" src="data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgMCA2NDAgMzYwIj48ZmlsdGVyIGlkPSJiIj48ZmVHYXVzc2lhbkJsdXIgc3RkRGV2aWF0aW9uPSIxMiIgLz48L2ZpbHRlcj48cGF0aCBmaWxsPSIjZTNlMWUxIiBkPSJNMCAwaDY0MHYzNjBIMHoiLz48ZyBmaWx0ZXI9InVybCgjYikiIHRyYW5zZm9ybT0ibWF0cml4KDIuNSAwIDAgMi41IDEuMyAxLjMpIiBmaWxsLW9wYWNpdHk9Ii41Ij48ZWxsaXBzZSBmaWxsPSIjMjcxMjAwIiByeD0iMSIgcnk9IjEiIHRyYW5zZm9ybT0ibWF0cml4KDI4LjcyNTg1IDM3LjczNjk3IC0yOS42NTE3MyAyMi41NzEyNiAxMjQgOTYuMikiLz48cGF0aCBmaWxsPSIjMDAwMDQ2IiBkPSJNMTM2LjIgNjEuMmwzNC43LTM2LjVMMTYxLjEgOGwtNTYuOSA1NS43eiIvPjxwYXRoIGZpbGw9IiMzZjEyMDAiIGQ9Ik03MCAxMjlsMTIzLTItODEtMzN6Ii8+PHBhdGggZmlsbD0iI2ZmZiIgZD0iTTAgMGg5MHYxNDRIMHoiLz48ZWxsaXBzZSBmaWxsPSIjZmZmIiBjeD0iMTM2IiBjeT0iMTM3IiByeD0iMjU1IiByeT0iOSIvPjxwYXRoIGZpbGw9IiNmZmYiIGQ9Ik0yNjEtMTZMMTk3LjQgNC4zIDEyOSA2Ni43IDI1NSAxNTl6Ii8+PHBhdGggZD0iTTcwIDEyOC42aDI1LjlsLTQuNy00MS43IDEyLjMgMTUuOHoiLz48cGF0aCBmaWxsPSIjZmZmIiBkPSJNMTU5LTRMLTktMTYgMSAxNTl6Ii8+PC9nPjwvc3ZnPg==">
</div>
<img src="https://httgp.com/content/images/2019/08/mesos-agent-logrotate.png" alt="Setting up log rotation for Mesos agents"><p>Continuing with my <a href="https://httgp.com/tag/mesos/">series of Mesos posts</a>, I wanted to show how to correctly configure log rotation for your Mesos agent sandboxes. The <a href="http://mesos.apache.org/documentation/latest/logging/?ref=httgp.com">official documentation on Mesos logging</a> leaves much to be desired in terms of real-world examples, so here&apos;s how I did it successfully.</p>
<h3 id="stepstosetuplogrotation">Steps to set up log rotation</h3>
<ol>
<li>Set the <code>--container-logger</code> Agent flag to <code>org_apache_mesos_LogrotateContainerLogger</code>.</li>
<li>Set the <code>--modules</code> Agent flag to <code>file:///path/your-custom-config.json</code></li>
<li>Save your <code>logrotate</code> configuration to <code>/path/your-custom-config.json</code>.</li>
</ol>
<h4 id="example">Example</h4>
<p>In my production setup, I prefer using environment variables as they&apos;re easier to override and separate from run scripts. For any Mesos flag, you can always use an environment variable of the form <code>MESOS_SOME_FLAG</code> in lieu of a flag of the form <code>--some-flag</code>.</p>
<pre><code class="language-bash"># Default environment variables for Mesos.
# See http://mesos.apache.org/documentation/latest/configuration/master-and-agent/
cat &lt;&lt;EOF | sudo tee /etc/default/mesos
MESOS_CONTAINER_LOGGER=org_apache_mesos_LogrotateContainerLogger
MESOS_MODULES=file:///etc/mesos-agent-settings.json

EOF
</code></pre>
<p>Once log rotation is enabled, write your <a href="https://linux.die.net/man/8/logrotate?ref=httgp.com"><code>logrotate</code> configuration</a> for both <code>stdout</code> and <code>stderr</code> streams in the modules JSON file. Note that the logrotate directives need to be separated by a <code>\n</code>, as JSON strings do not support literal newlines.</p>
<h6 id="etcmesosagentsettingsjson"><code>/etc/mesos-agent-settings.json</code></h6>
<p>The configuration below keeps the last 4 logs of up to 25 MB each, compresses rotated logs and doesn&apos;t email old logs to any address.</p>
<pre><code class="language-bash"># Default log rotation for all agent machines.
cat &lt;&lt;EOF | sudo tee /etc/mesos-agent-settings.json
{
  &quot;libraries&quot;: [{
    &quot;file&quot;: &quot;/usr/lib/liblogrotate_container_logger.so&quot;,
    &quot;modules&quot;: [{
      &quot;name&quot;: &quot;org_apache_mesos_LogrotateContainerLogger&quot;,
      &quot;parameters&quot;: [{
        &quot;key&quot;: &quot;logrotate_stdout_options&quot;,
        &quot;value&quot;: &quot;rotate 4\nsize 25M\nmissingok\nnotifempty\ncompress\ndelaycompress\nnomail\n&quot;
      }, {
        &quot;key&quot;: &quot;logrotate_stderr_options&quot;,
        &quot;value&quot;: &quot;rotate 4\nsize 25M\nmissingok\nnotifempty\ncompress\ndelaycompress\nnomail\n&quot;
      }]
    }]
  }]
}

EOF
</code></pre>
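<p>For reference, the <code>\n</code>-separated options string above is equivalent to the body of a regular <code>logrotate</code> stanza, shown here unescaped for readability:</p>
<pre><code>rotate 4
size 25M
missingok
notifempty
compress
delaycompress
nomail
</code></pre>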
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Configuring Mesos Fetcher & Hadoop for AWS S3]]></title><description><![CDATA[Learn how to configure Hadoop for Mesos Fetcher so you can fetch AWS s3, s3a & s3n URIs.]]></description><link>https://httgp.com/configuring-mesos-fetcher-hadoop-for-aws-s3/</link><guid isPermaLink="false">5d6676867b668a10faf17a53</guid><category><![CDATA[Mesos]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Code]]></category><category><![CDATA[DevOps]]></category><dc:creator><![CDATA[GP]]></dc:creator><pubDate>Thu, 29 Aug 2019 06:36:12 GMT</pubDate><media:content url="https://httgp.com/content/images/2019/08/mesos-hadoop-s3-fetcher.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><div class="flex-centered">
    <img data-src="/content/images/2019/08/mesos-hadoop-s3-fetcher.svg" alt="Configuring Mesos Fetcher &amp; Hadoop for AWS S3" class="lazyload blur-up" height="180px" width="320px" src="data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgMCA2NDAgMzYwIj48ZmlsdGVyIGlkPSJiIj48ZmVHYXVzc2lhbkJsdXIgc3RkRGV2aWF0aW9uPSIxMiIgLz48L2ZpbHRlcj48cGF0aCBmaWxsPSIjZTBkZGUyIiBkPSJNMCAwaDY0MHYzNjBIMHoiLz48ZyBmaWx0ZXI9InVybCgjYikiIHRyYW5zZm9ybT0ibWF0cml4KDIuNSAwIDAgMi41IDEuMyAxLjMpIiBmaWxsLW9wYWNpdHk9Ii41Ij48ZWxsaXBzZSBmaWxsPSIjOWMwMDAwIiByeD0iMSIgcnk9IjEiIHRyYW5zZm9ybT0ibWF0cml4KDMzLjk5MTk4IC0uMTIyNzkgLjEyOTY4IDM1LjkwMDAzIDIwOCA3Mi4zKSIvPjxlbGxpcHNlIGZpbGw9IiMwMDY4OWQiIGN4PSI0NyIgY3k9IjcwIiByeD0iMzYiIHJ5PSIzNiIvPjxlbGxpcHNlIGZpbGw9IiNmZmYiIHJ4PSIxIiByeT0iMSIgdHJhbnNmb3JtPSJtYXRyaXgoODAuMDg3NjYgLS45MzYyOCAuNTQ5MDQgNDYuOTY0MDMgMTMwLjMgMTI4LjcpIi8+PGVsbGlwc2UgZmlsbD0iI2ZmZiIgY3g9IjEyNiIgcng9IjY5IiByeT0iNjkiLz48ZWxsaXBzZSBmaWxsPSIjYjIyNjA3IiByeD0iMSIgcnk9IjEiIHRyYW5zZm9ybT0ibWF0cml4KC44ODc4NyAtMjcuNjc1NDcgMzIuODg1NzQgMS4wNTUwMyAyMDguNyA2OS45KSIvPjxwYXRoIGZpbGw9IiMxODUxNjAiIGQ9Ik0zMCA2N2gzM3Y0MUgzMHoiLz48ZWxsaXBzZSBmaWxsPSIjZmZmIiByeD0iMSIgcnk9IjEiIHRyYW5zZm9ybT0ibWF0cml4KC0xMC45MzAxNSAxOS43OTk5MyAtNDcuNTUxMjUgLTI2LjI0OTcgMTUuMiAxMTcuOSkiLz48ZWxsaXBzZSBmaWxsPSIjZmZmIiBjeD0iMTY3IiBjeT0iMTYiIHJ4PSIyNTUiIHJ5PSIyMCIvPjwvZz48L3N2Zz4=">
</div>
<img src="https://httgp.com/content/images/2019/08/mesos-hadoop-s3-fetcher.png" alt="Configuring Mesos Fetcher &amp; Hadoop for AWS S3"><p><a href="http://mesos.apache.org/?ref=httgp.com">Apache Mesos</a> is a &quot;distributed systems kernel&quot; that runs at a higher level of abstraction. It is essentially a cluster manager that provides resource isolation and sharing across distributed applications or frameworks.</p>
<p>My team uses Mesos heavily (yes I know, we aren&apos;t on Kubernetes <em>yet</em> because Mesos solves most of my team&apos;s problems, is pretty mature and we have a ton of experience running it at scale), and when we decided to upgrade our clusters and rewrite the infrastructure code, one of the features I was excited about using was <a href="http://mesos.apache.org/documentation/latest/fetcher/?ref=httgp.com">Fetcher</a>.</p>
<h3 id="deploymentworkflow">Deployment workflow</h3>
<p>We use <a href="https://mesosphere.github.io/marathon/?ref=httgp.com">Marathon</a> as our main orchestration platform. It lets us run bundled apps and Docker images on our Mesos cluster and makes it very easy to scale them up or down.</p>
<p>Our usual pattern of deploying apps is &#x2014;</p>
<ol>
<li>CI builds a Docker image of the app or bundles it up as an archive and pushes it to AWS S3.</li>
<li>CD pushes a configuration JSON to Marathon with the <code>--force-deploy</code> flag.</li>
</ol>
<p>For Docker images, the Marathon configuration is straightforward. For bundled archives, the <code>cmd</code> directive on Marathon is auto-generated by the CD pipeline to look like this  &#x2014;</p>
<pre><code>s3cmd get s3://app-name/app.tgz &amp;&amp; tar -xzf *.tgz &amp;&amp; ./bin/start.sh
</code></pre>
<p>This meant the overall <code>cmd</code> directive was just awkward to look at. Also, <code>s3cmd</code> had to be installed on all of our Mesos agents; <code>s3cmd</code> has its own set of weird issues and it is very difficult to upgrade packages on all of the Mesos agent infrastructure.</p>
<h3 id="entermesosfetcher">Enter Mesos Fetcher</h3>
<p>Mesos Fetcher natively supports fetching resources from HTTP, HTTPS, FTP &amp; FTPS URIs. Additionally, it supports caching (huge wins in deploy time and S3 transfer costs if you deploy a lot) and auto-extraction (see <a href="http://mesos.apache.org/documentation/latest/fetcher/?ref=httgp.com#archive-extraction">supported formats</a>) of resources. If a local Hadoop client is installed, it can also fetch resources from HDFS &amp; S3; this last bit is what we&apos;re interested in.</p>
<h4 id="installinghadoop">Installing Hadoop</h4>
<p>To support fetching S3 URIs, let&apos;s first install Hadoop on our Mesos agent and set it up so it is accessible inside the container sandbox. Ideally, this should be baked into your AMI for Mesos agent machines.</p>
<p>The code is heavily documented inline, explaining why we do each of these steps.</p>
<pre><code class="language-bash">#!/bin/bash
# Installation &amp; configuration of Hadoop &amp; Hadoop AWS Tools for Mesos agent.
# Battle-tested on Ubuntu 18.04.

set -e

export HADOOP_VERSION=&quot;3.2.0&quot; # See https://hadoop.apache.org/releases.html for latest version
export BASE_DIR=&quot;/apps&quot;
export HADOOP_APP_DIR=&quot;${BASE_DIR}/hadoop-${HADOOP_VERSION}&quot;
export HADOOP_LOG_DIR=&quot;/logs/hadoop&quot;
export MESOS_FETCHER_CACHE_DIR=&quot;/data/cache/mesos&quot;

sudo mkdir -p ${HADOOP_APP_DIR} ${HADOOP_LOG_DIR} ${MESOS_FETCHER_CACHE_DIR}

# Download &amp; extract Hadoop binary release.
cd ${HADOOP_APP_DIR}
    # Mirrors are at https://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-3.2.0/hadoop-3.2.0.tar.gz
    sudo wget -O hadoop.tar.gz http://mirrors.estointernet.in/apache/hadoop/common/hadoop-${HADOOP_VERSION}/hadoop-${HADOOP_VERSION}.tar.gz
    sudo tar -xzf hadoop.tar.gz --strip-components=1 -C .
    sudo rm hadoop.tar.gz
    # Clean up unnecessary items to keep the Hadoop installation lean.
    sudo rm -rf *.txt
    sudo rm -rf share/doc
cd -

# Set up $JAVA_HOME for Hadoop.
# This locates the JAVA_HOME and updates it in Hadoop&apos;s environment file.
export JAVA_HOME=$(readlink -f /usr/bin/java | sed &quot;s:bin/java::&quot;)
sudo sed -i &quot;s@# export JAVA_HOME=.*@export JAVA_HOME=${JAVA_HOME}@g&quot; ${HADOOP_APP_DIR}/etc/hadoop/hadoop-env.sh

# Turn on Hadoop-AWS optional tools.
# We need this to be able to fetch s3://, s3a:// and s3n:// URIs.
sudo sed -i &quot;s@# export HADOOP_OPTIONAL_TOOLS=.*@export HADOOP_OPTIONAL_TOOLS=\&quot;hadoop-aws\&quot;@g&quot; ${HADOOP_APP_DIR}/etc/hadoop/hadoop-env.sh

# Ask Hadoop to load the AWS SDK.
# This assumes &apos;/root&apos; is $HOME.
# If the default user in your Marathon is not &apos;root&apos; (as it should be), change this to the home directory of that user.
cat &lt;&lt;EOF | sudo tee /root/.hadooprc
hadoop_add_to_classpath_tools hadoop-aws

EOF

# Add Hadoop to $PATH and customize a few parameters.
# $PATH is available inside each container sandbox, making fetcher work.
# HADOOP_HOME will be picked up by Mesos in place of the --hadoop-home agent flag.
cat &lt;&lt;EOF | sudo tee /etc/profile.d/A00-add-hadoop.sh
export PATH=&quot;$PATH:${HADOOP_APP_DIR}/bin&quot;
export HADOOP_HOME=&quot;${HADOOP_APP_DIR}&quot;
export HADOOP_LOG_DIR=&quot;${HADOOP_LOG_DIR}&quot;
export HADOOP_ROOT_LOGGER=WARN,console

EOF

# Set up Hadoop executables to be discoverable for all users.
# This is so that just running &apos;hadoop&apos; will work for everyone.
sudo sed -i -e &quot;/ENV_SUPATH/ s[=.*[&amp;:${HADOOP_APP_DIR}/bin[&quot; /etc/login.defs
sudo sed -i -e &quot;/ENV_PATH/ s[=.*[&amp;:${HADOOP_APP_DIR}/bin[&quot; /etc/login.defs

</code></pre>
<h4 id="additionalagentconfiguration">Additional Agent configuration</h4>
<p>Once Hadoop is set up, make sure to set sensible values for these 2 <a href="http://mesos.apache.org/documentation/latest/configuration/agent/?ref=httgp.com#optional-flags">Agent configuration flags</a> &#x2014;</p>
<ol>
<li><code>--fetcher-cache-size</code> flag or <code>MESOS_FETCHER_CACHE_SIZE</code> environment variable.</li>
<li><code>--fetcher-cache-dir</code> flag or <code>MESOS_FETCHER_CACHE_DIR</code> environment variable.</li>
</ol>
<p>For example &#x2014;</p>
<pre><code class="language-properties">MESOS_FETCHER_CACHE_SIZE=1GB
MESOS_FETCHER_CACHE_DIR=/data/cache/mesos
</code></pre>
<h3 id="marathonconfiguration">Marathon configuration</h3>
<p>Once the agents are deployed with the new Fetcher &amp; Hadoop setup, you can change your Marathon deployment configuration from &#x2014;</p>
<pre><code class="language-json">{
  &quot;cmd&quot;: &quot;s3cmd get s3://app-name/app.tgz &amp;&amp; tar -xzf *.tgz &amp;&amp; ./bin/start.sh&quot;
}
</code></pre>
<p>to &#x2014;</p>
<pre><code class="language-json">{
  &quot;fetch&quot;: [{
    &quot;uri&quot;: &quot;s3a://app-name/app.tgz&quot;,
    &quot;extract&quot;: true,
    &quot;cache&quot;: true
  }]
}
</code></pre>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Serializing & deserializing Thrift data in Node.js]]></title><description><![CDATA[Learn how to properly serialize & deserialize (serde) Thrift data in Node.js, generate TypeScript definitions and handle unsupported data types like Int64.]]></description><link>https://httgp.com/deserializing-thrift-data-in-node-js/</link><guid isPermaLink="false">5d64d5997c1f273dc3912384</guid><category><![CDATA[Node.js]]></category><category><![CDATA[Code]]></category><dc:creator><![CDATA[GP]]></dc:creator><pubDate>Tue, 27 Aug 2019 11:39:05 GMT</pubDate><media:content url="https://httgp.com/content/images/2019/08/thrift-serde-nodejs.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><div class="flex-centered">
    <img data-src="/content/images/2019/08/thrift-serde-nodejs-1.svg" alt="Serializing &amp; deserializing Thrift data in Node.js" class="lazyload blur-up" height="180px" width="320px" src="data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgMCAzMjAgMTgwIj48ZmlsdGVyIGlkPSJiIj48ZmVHYXVzc2lhbkJsdXIgc3RkRGV2aWF0aW9uPSIxMiIgLz48L2ZpbHRlcj48cGF0aCBmaWxsPSIjZWZmMWViIiBkPSJNMCAwaDMyMHYxODBIMHoiLz48ZyBmaWx0ZXI9InVybCgjYikiIHRyYW5zZm9ybT0ibWF0cml4KDEuMjUgMCAwIDEuMjUgLjYgLjYpIiBmaWxsLW9wYWNpdHk9Ii41Ij48cGF0aCBmaWxsPSIjMDA5NjAwIiBkPSJNNjcgMTAwaDMzdjMzSDY3eiIvPjxlbGxpcHNlIGZpbGw9InJlZCIgY3g9IjE3OSIgY3k9IjExNiIgcng9IjE3IiByeT0iMTciLz48ZWxsaXBzZSBmaWxsPSIjMDA1YzAwIiBjeD0iMTI5IiBjeT0iNzkiIHJ4PSIxNyIgcnk9IjE2Ii8+PHBhdGggZmlsbD0iIzAwYzQwMCIgZD0iTTY2IDk5LjVoMzJ2MzNINjZ6Ii8+PHBhdGggZmlsbD0iI2ZmYmEwMCIgZD0iTTEzMS4zIDEwTDExMCAzN2wxNC41IDYuMiAyMy40LTUuOHoiLz48ZWxsaXBzZSBmaWxsPSIjZmZmIiByeD0iMSIgcnk9IjEiIHRyYW5zZm9ybT0ibWF0cml4KDQ3LjA3OTQgMTMuMjM2MjUgLTQyLjUxMiAxNTEuMjA4OTQgMzEuNCA1OSkiLz48cGF0aCBmaWxsPSIjMzYyY2U2IiBkPSJNMTc0LjEgNDlsLTI3LjUtMjYuNUwxODYgNjguOGwtNi4yIDIyLjZ6Ii8+PGVsbGlwc2UgZmlsbD0iI2Y4MDAxOCIgY3g9IjE3OSIgY3k9IjExNyIgcng9IjE2IiByeT0iMTYiLz48L2c+PC9zdmc+">
</div>
<img src="https://httgp.com/content/images/2019/08/thrift-serde-nodejs.png" alt="Serializing &amp; deserializing Thrift data in Node.js"><p><a href="https://thrift.apache.org/?ref=httgp.com">Apache Thrift</a> allows writing an interoperable type-safe software stack. It comes with a code generation system that has its own definition language that can be converted to code across many programming languages.</p>
<p>As an example, you can write a <code>User</code> data structure that looks like this in Thrift and it can be used to auto-generate code in any language of your choice.</p>
<pre><code class="language-thrift">#User.thrift
namespace java com.httgp.models.thrift

struct User {
   1: required i16 id
   2: required string name
   3: optional string nickname
}
</code></pre>
<p>The auto-generated TypeScript definition for this Thrift model looks like this &#x2014;</p>
<pre><code class="language-typescript">// User.ts
export interface IUserArgs {
    id: number;
    name: string;
    nickname?: string;
}

export class User {
    public id: number;
    public name: string;
    public nickname?: string;
    constructor(args: IUserArgs) {
        if (args != null &amp;&amp; args.id != null) {
            this.id = args.id;
        } else {
            throw new thrift.Thrift.TProtocolException(thrift.Thrift.TProtocolExceptionType.UNKNOWN, &quot;Required field[id] is unset!&quot;);
        }
        if (args != null &amp;&amp; args.name != null) {
            this.name = args.name;
        } else {
            throw new thrift.Thrift.TProtocolException(thrift.Thrift.TProtocolExceptionType.UNKNOWN, &quot;Required field[name] is unset!&quot;);
        }
        if (args != null &amp;&amp; args.nickname != null) {
            this.nickname = args.nickname;
        }
    }
    // ...
}
</code></pre>
<p>The real benefit of using Thrift becomes obvious when you try to pass this data on to another service that is written in a different language. In this post, I will talk about how to do Thrift <abbr title="Serialization/Deserialization">serde</abbr> in Node.js and some common patterns.</p>
<h3 id="dependencies">Dependencies</h3>
<ol>
<li>
<p><a href="https://www.npmjs.com/package/thrift?ref=httgp.com">thrift</a> for serialization &amp; deserialization.</p>
</li>
<li>
<p><a href="https://www.npmjs.com/package/node-int64?ref=httgp.com">node-int64</a> to handle 64-bit <code>Int</code>s.</p>
</li>
<li>
<p>Optionally, <a href="https://www.npmjs.com/package/@creditkarma/thrift-typescript?ref=httgp.com">@creditkarma/thrift-typescript</a> for generating TypeScript definitions from your <code>.thrift</code> files. You can simply run &#x2014;</p>
</li>
</ol>
<pre><code class="language-bash">thrift-typescript --outDir definitions User.thrift
</code></pre>
<h3 id="readingwritingthriftdatainnodejs">Reading &amp; writing Thrift data in Node.js</h3>
<p>The Thrift documentation is quite sparse when it comes to their language-specific implementations, and after a lot of trial and error, here&apos;s how I managed to deserialize Thrift data in Node.js.</p>
<p>If you&apos;re consuming Thrift-serialized data (say, from a Kafka topic), the data is probably available in Node.js as a <code>Buffer</code>. The <code>deserializeThrift</code> method shows how to deserialize it &#x2014;</p>
<pre><code class="language-typescript">import { TFramedTransport, TBinaryProtocol } from &apos;thrift&apos;;
import { User } from &apos;./definitions/User&apos;; // Generated using @creditkarma/thrift-typescript

/**
 * Serializes native data of given model into Thrift.
 * @param data Data to serialize.
 * @param thriftModel Thrift model.
 */
function serializeThrift(data: object, thriftModel: any): Buffer {
    let serializedData: Buffer = Buffer.alloc(0);
    // The transport invokes this callback with the framed bytes on flush().
    const tTransport = new TFramedTransport(undefined, (buffer: Buffer) =&gt; {
        serializedData = buffer;
    });
    const tProtocol = new TBinaryProtocol(tTransport);
    // Wrap the plain object in the Thrift model so it knows how to write itself.
    new thriftModel(data).write(tProtocol);
    tTransport.flush();
    return serializedData;
}

/**
 * Deserializes Thrift data with given model.
 * @param data Thrift data.
 * @param thriftModel Thrift model.
 */
function deserializeThrift(data: Buffer, thriftModel: any): any {
    const tTransport = new TFramedTransport(data);
    const tProtocol = new TBinaryProtocol(tTransport);
    const deserializedData = thriftModel.read(tProtocol);
    return deserializedData;
}
</code></pre>
<h6 id="deserialization">Deserialization</h6>
<pre><code class="language-typescript">// rawData = getFromExternalDataSource(...);
const userObject = &lt;User&gt;deserializeThrift(rawData, User);
console.log(userObject);
// { id: 1, name: &apos;Ganesh&apos;, nickname: &apos;GP&apos; }
</code></pre>
<h6 id="serialization">Serialization</h6>
<pre><code class="language-typescript">const userData: User = { id: 1, name: &apos;Ganesh&apos;, nickname: &apos;GP&apos; };
const userDataAsThrift = serializeThrift(userData, User);
// Now you can write userDataAsThrift to your output sink (like a Kafka topic).
</code></pre>
<h3 id="handlingint64valuesinjson">Handling <code>Int64</code> values in JSON</h3>
<p>While other languages have 64-bit <code>Int</code>s, JavaScript&apos;s <code>Number</code> supports only <a href="http://steve.hollasch.net/cgindex/coding/ieeefloat.html?ref=httgp.com">IEEE 754 double-precision floats</a>, which are limited to 53 bits. The <code>node-int64</code> package helps in handling them seamlessly by returning a custom <code>Int64</code> object. However, if you wish to convert the Thrift-deserialized JSON into anything else, you&apos;ll need to manually handle <code>Int64</code>.</p>
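<p>To see this limit in action, note that past <code>Number.MAX_SAFE_INTEGER</code> (2^53 - 1), distinct integers collapse into the same double:</p>
<pre><code class="language-typescript">// The largest integer a double can represent exactly.
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991

// Past that, adjacent integers become indistinguishable.
console.log(9007199254740992 === 9007199254740993); // true
</code></pre>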
<p>Fortunately, <code>JSON.stringify()</code> takes a <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify?ref=httgp.com#The_replacer_parameter">&quot;replacer&quot; parameter</a> that you can use to modify its default behaviour.</p>
<pre><code class="language-typescript">import Int64 from &apos;node-int64&apos;;

/**
 * Custom JSON stringify replacer.
 *
 * Converts `Int64` to `Number`. Returns same value if it isn&apos;t `Int64`.
 * NOTE: Won&apos;t be precise for VERY large numbers.
 */
function customStringifier(key: string, value: any): number | any {
    if (value instanceof Int64) {
        return value.toNumber();
    } else {
        return value;
    }
}

// Convert deserialized object to a String &#x2014;
JSON.stringify(deserializedObject, customStringifier);
</code></pre>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Building single binaries for your Node.js projects]]></title><description><![CDATA[Learn how to ship your Node.js app as a single executable binary.]]></description><link>https://httgp.com/building-single-binaries-for-your-node-js-apps/</link><guid isPermaLink="false">5d5a84c9544b6f05a9134d42</guid><category><![CDATA[Node.js]]></category><category><![CDATA[Javascript]]></category><dc:creator><![CDATA[GP]]></dc:creator><pubDate>Fri, 23 Aug 2019 19:01:04 GMT</pubDate><media:content url="https://httgp.com/content/images/2019/08/nodejs-binary-preview-image-1.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><div class="flex-centered">
    <img data-src="/content/images/2019/08/nodejs-binary.svg" alt="Building single binaries for your Node.js projects" class="lazyload blur-up" height="180px" width="320px" src="data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgMCAzMjAgMTgwIj48ZmlsdGVyIGlkPSJiIj48ZmVHYXVzc2lhbkJsdXIgc3RkRGV2aWF0aW9uPSIxMiIgLz48L2ZpbHRlcj48cGF0aCBmaWxsPSIjZGFkY2YzIiBkPSJNMCAwaDMyMHYxODBIMHoiLz48ZyBmaWx0ZXI9InVybCgjYikiIHRyYW5zZm9ybT0ibWF0cml4KDEuMjUgMCAwIDEuMjUgLjYgLjYpIiBmaWxsLW9wYWNpdHk9Ii41Ij48ZWxsaXBzZSBmaWxsPSIjNTM1NGQzIiBjeD0iMTM4IiBjeT0iNzEiIHJ4PSI2MiIgcnk9IjU0Ii8+PGVsbGlwc2UgZmlsbD0iIzAwNWEyYyIgcng9IjEiIHJ5PSIxIiB0cmFuc2Zvcm09Im1hdHJpeCgtNy4wNjQwNSAtMjEuNjA3MDEgMTkuNDI4MzMgLTYuMzUxNzcgODIuMyAxMDYuNikiLz48ZWxsaXBzZSBmaWxsPSIjZmZmIiByeD0iMSIgcnk9IjEiIHRyYW5zZm9ybT0ibWF0cml4KC00Mi4yMDQwOSAtNC4yMTI1MyAxMy4wNzQyMiAtMTMwLjk4NjgyIDM2LjQgMzkuNCkiLz48ZWxsaXBzZSBmaWxsPSIjZmZmIiBjeD0iMjI5IiBjeT0iNzAiIHJ4PSIzNiIgcnk9IjI1NSIvPjxlbGxpcHNlIGZpbGw9IiNmZmYiIHJ4PSIxIiByeT0iMSIgdHJhbnNmb3JtPSJtYXRyaXgoLTc5LjI5MzM2IDMxLjQxMzY1IC03Ljg0ODU0IC0xOS44MTEwNCAxNzIuOSAxMzMuNSkiLz48ZWxsaXBzZSBmaWxsPSIjZmZmIiByeD0iMSIgcnk9IjEiIHRyYW5zZm9ybT0icm90YXRlKC0yNS45IDE0OS4xIC0yODUuMykgc2NhbGUoNDYuMTY1MDMgNC44OTA0NCkiLz48ZWxsaXBzZSBmaWxsPSIjZmZmIiBjeD0iMTI4IiBjeT0iNiIgcng9IjI1NSIgcnk9IjEyIi8+PGVsbGlwc2UgZmlsbD0iIzJjYjMwZSIgY3g9Ijg3IiBjeT0iMTA1IiByeD0iMTYiIHJ5PSIxNiIvPjwvZz48L3N2Zz4=">
</div>
<img src="https://httgp.com/content/images/2019/08/nodejs-binary-preview-image-1.png" alt="Building single binaries for your Node.js projects"><p>A lot of programming languages and runtimes (like Go, Rust &amp; C/C++) offer self-contained distributable binaries, and as someone that spends most of their time on Node.js, I&apos;ve been quite jealous of them... until now.</p>
<p><a href="https://github.com/criblio/js2bin?ref=httgp.com#how-it-works">js2bin</a> aims to build single binaries for Node.js projects on Linux and macOS. Although similar projects like <code>nexe</code> and <code>pkg</code> exist, <code>js2bin</code> is slightly different (IMHO, better) in how it works; in the <a href="https://blog.cribl.io/2019/07/08/going-native/?ref=httgp.com">words of the author</a> &#x2014;</p>
<blockquote>
<p>They <em>(nexe and pkg)</em> both embed users application and other content by appending to the executable &#x2013; while this method works, it can also trip malware scanners as the executable contains extraneous content.</p>
</blockquote>
<h3 id="howtousejs2bin">How to use <code>js2bin</code></h3>
<p>To be able to use <code>js2bin</code>, you need to first bundle your Node.js application into a single JS file. You can try using webpack or rollup, but I found a much simpler solution in <a href="https://zeit.co/blog/ncc?ref=httgp.com">ncc</a>.</p>
<p><code>ncc</code> supports TypeScript out of the box too, so all you have to do is run &#x2014;</p>
<pre><code>ncc build src/index.ts -o bundle
</code></pre>
<p><code>src/index.ts</code> is your main entrypoint, and <code>bundle</code> is the output directory.</p>
<p>Once you have a single JS file, you can then run &#x2014;</p>
<pre><code>js2bin --cache --build --platform=darwin --platform=linux --platform=windows --app=bundle/index.js
</code></pre>
<p>This creates binaries for macOS, Linux and Windows using the bundled JS you created in the previous step.</p>
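The two steps chain naturally in <code>npm</code> scripts; a sketch (the script names are my own, and I&apos;ve kept only the Linux platform flag for brevity) &#x2014;

```json
{
    "scripts": {
        "bundle": "ncc build src/index.ts -o bundle",
        "binary": "npm run bundle && js2bin --cache --build --platform=linux --app=bundle/index.js"
    }
}
```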
<h3 id="caveats">&#x1F6A9; Caveats</h3>
<p><code>js2bin</code> does not work with projects that have native modules. To quickly check if your project uses native modules, run &#x2014;</p>
<pre><code>find node_modules -type f -name &quot;*.node&quot;
</code></pre>
<h3 id="realworldexample">Real-world example</h3>
<p>I used <code>js2bin</code> to generate executables and a CD workflow to automatically attach them to releases on a hobby project of mine; here are the key pieces of interest &#x2014;</p>
<ol>
<li><a href="https://github.com/paambaati/websight/blob/e8b1aaef569812cd76425b335dbea39fca628f87/package.json?ref=httgp.com#L25-L26"><code>npm</code> scripts</a></li>
<li><a href="https://github.com/paambaati/websight/blob/e8b1aaef569812cd76425b335dbea39fca628f87/.github/workflows/cd.yml?ref=httgp.com#L15-L35">GitHub Actions CD workflow</a></li>
<li><a href="https://github.com/paambaati/websight/releases?ref=httgp.com">Releases page</a> with binaries as release attachments.</li>
</ol>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Converting GitHub Actions from Docker to JavaScript]]></title><description><![CDATA[Learn how to convert a Docker-based GitHub Action to a JavaScript/TypeScript-based Action.]]></description><link>https://httgp.com/converting-github-actions-from-docker-to-javascript/</link><guid isPermaLink="false">5d57dfc8544b6f05a9134d2f</guid><category><![CDATA[GitHub]]></category><category><![CDATA[TypeScript]]></category><category><![CDATA[Code]]></category><dc:creator><![CDATA[GP]]></dc:creator><pubDate>Sun, 18 Aug 2019 11:55:45 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>
    <div class="toast toast-success">
        <i class="icon icon-message"></i>
        The full source for this Action is available at <a href="https://github.com/paambaati/codeclimate-action?ref=httgp.com">paambaati/codeclimate-action</a>.
    </div>
</p>
<p>GitHub Actions are the latest big thing in the world of CI/CD. As GitHub is slowly inching towards an &#x201C;eat the whole world&#x201D; monopoly (see <a href="https://github.com/features/package-registry?ref=httgp.com">package registry</a>), they&#x2019;ve also launched a very compelling CI/CD automation feature called <a href="https://github.blog/2019-08-08-github-actions-now-supports-ci-cd/?ref=httgp.com">Actions</a> that lets you define your own custom workflows for your GitHub repositories. They&#x2019;re currently in public beta and will be generally available by November 13.</p>
<p>The <a href="https://help.github.com/en/articles/about-github-actions?ref=httgp.com">Actions documentation</a> is pretty great, and I&#x2019;d recommend reading it to understand how to use them.</p>
<p>There are 2 types of Actions - Docker-based and JavaScript-based. While each has its pros and cons (see <a href="https://help.github.com/en/articles/about-actions?ref=httgp.com#types-of-actions">&quot;Types of actions&quot;</a>), the summary of it is &#x2014; JavaScript Actions run on <strong>all platforms</strong> (Linux, macOS &amp; Windows) and they&#x2019;re <strong>faster</strong> than Docker-based Actions.</p>
<p>I&#x2019;d recommend writing JavaScript Actions if your workflow doesn&#x2019;t need specific versions of tools, dependencies or platforms. So without much further ado, here&#x2019;s how I rewrote a Docker-based action to JavaScript/TypeScript.</p>
<h2 id="fromdockertojavascript">From Docker to JavaScript</h2>
<p>Recently, I published an action that uploads your code coverage results to Code Climate. The first version was based on Docker, and here&apos;s how it looked &#x2014;</p>
<h5 id="dockerfile"><code>Dockerfile</code></h5>
<pre><code class="language-docker">FROM node:lts-alpine

LABEL version=&quot;1.0.0&quot;
LABEL repository=&quot;http://github.com/paambaati/codeclimate-action&quot;
LABEL homepage=&quot;http://github.com/paambaati/codeclimate-action&quot;
LABEL maintainer=&quot;GP &lt;me@httgp.com&gt;&quot;

LABEL com.github.actions.name=&quot;Code Climate Action&quot;
LABEL com.github.actions.description=&quot;Publish code coverage to Code Climate&quot;
LABEL com.github.actions.icon=&quot;code&quot;
LABEL com.github.actions.color=&quot;gray-dark&quot;

RUN apk add --no-cache python make g++ curl

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

ENTRYPOINT [ &quot;/entrypoint.sh&quot; ]
CMD [ &quot;yarn coverage&quot; ]
</code></pre>
<h5 id="entrypointsh"><code>entrypoint.sh</code></h5>
<pre><code class="language-bash">#!/bin/bash

set -eu

curl -L https://codeclimate.com/downloads/test-reporter/test-reporter-latest-linux-amd64 &gt; ./cc-test-reporter
chmod +x ./cc-test-reporter
./cc-test-reporter before-build

bash -c &quot;$*&quot;

./cc-test-reporter after-build --exit-code $?
</code></pre>
<h3 id="howthisworks">How this works</h3>
<p>The Dockerfile includes all the metadata for GitHub Actions with the <code>LABEL</code> directives, and includes an <code>ENTRYPOINT</code> script. The entrypoint script downloads the Code Climate reporter, runs its <code>before-build</code> step, executes the actual coverage command, and then reports results with <code>after-build</code>.</p>
<h3 id="motivationforrewritinginjavascript">Motivation for rewriting in JavaScript</h3>
<p>I realized that the Docker-based Action could only be run on Linux. If I wanted my Action to be used on all platforms, I had to write this as a JavaScript action. As a bonus, running times would also reduce (<abbr title="Your mileage may vary">YMMV</abbr>, but after the rewrite, running time for a coverage task fell from 26 seconds to a whopping 9 seconds!)</p>
<p>To rewrite it in JavaScript, I mostly followed the <a href="https://github.com/actions/toolkit/blob/master/docs/javascript-action.md?ref=httgp.com">official documentation</a>. It also includes a handy template repository if you want to get started with a JS/TS action right away.</p>
<p>Every Action repository needs an <code>action.yml</code> file. You can read more about <a href="https://help.github.com/en/articles/metadata-syntax-for-github-actions?ref=httgp.com">all the available metadata syntax</a>.</p>
<h5 id="actionyml"><code>action.yml</code></h5>
<pre><code class="language-yaml">name: &apos;Code Climate Action&apos;
description: &apos;Publish code coverage to Code Climate&apos;
author: &apos;GP &lt;me@httgp.com&gt;&apos;
branding:
  icon: &apos;code&apos;
  color: &apos;gray-dark&apos;
inputs:
  coverageCommand:
    description: &apos;Coverage command to execute&apos;
    default: &apos;yarn coverage&apos;
runs:
  using: &apos;node12&apos;
  main: &apos;lib/main.js&apos;
</code></pre>
<p><code>lib/main.js</code> includes all the Action&apos;s logic. GitHub already maintains npm packages for most basic tasks like <a href="https://www.npmjs.com/package/@actions/core?ref=httgp.com">logging</a>, <a href="https://www.npmjs.com/package/@actions/exec?ref=httgp.com">executing commands</a>, <a href="https://www.npmjs.com/package/@actions/io?ref=httgp.com">filesystem services</a> &amp; <a href="https://www.npmjs.com/package/@actions/github?ref=httgp.com">using the GitHub API</a>.</p>
<p>Using these, here&apos;s my main code &#x2014;</p>
<h5 id="maints"><code>main.ts</code></h5>
<pre><code class="language-typescript">import { platform } from &apos;os&apos;;
import { createWriteStream } from &apos;fs&apos;;
import fetch from &apos;node-fetch&apos;;
import { debug, getInput } from &apos;@actions/core&apos;;
import { exec } from &apos;@actions/exec&apos;;

const DOWNLOAD_URL = `https://codeclimate.com/downloads/test-reporter/test-reporter-latest-${platform()}-amd64`;
const EXECUTABLE = &apos;./cc-reporter&apos;;
const DEFAULT_COVERAGE_COMMAND = &apos;yarn coverage&apos;;

export function downloadToFile(url: string, file: string, mode: number = 0o755): Promise&lt;void&gt; {
    return new Promise(async (resolve, reject) =&gt; {
        try {
            const response = await fetch(url, { timeout: 2 * 60 * 1000 }); // Timeout in 2 minutes.
            const writer = createWriteStream(file, { mode });
            response.body.pipe(writer);
            writer.on(&apos;close&apos;, () =&gt; {
                return resolve();
            });
        } catch (err) {
            return reject(err);
        }
    });
}

export async function run(downloadUrl = DOWNLOAD_URL, executable = EXECUTABLE, coverageCommand = DEFAULT_COVERAGE_COMMAND): Promise&lt;void&gt; {
    await downloadToFile(downloadUrl, executable);
    await exec(executable, [&apos;before-build&apos;]);
    // `exec` resolves with the command&apos;s exit code.
    const lastExitCode = await exec(coverageCommand);
    await exec(executable, [&apos;after-build&apos;, &apos;--exit-code&apos;, lastExitCode.toString()]);
    debug(&apos;Coverage uploaded!&apos;);
}

const coverageCommand = getInput(&apos;coverageCommand&apos;, { required: false });
run(DOWNLOAD_URL, EXECUTABLE, coverageCommand);
</code></pre>
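One caveat with the entrypoint above: if any step rejects, Node 12 only logs an unhandled-rejection warning and can still exit 0, so the workflow step would pass. A small guard fixes that; a sketch (my published Action uses <code>setFailed</code> from <code>@actions/core</code>, but plain <code>process.exitCode</code> works without extra dependencies) &#x2014;

```typescript
// Run an async entrypoint and make sure a rejection fails the workflow step
// by setting a non-zero exit code; resolves with the error message, if any.
export async function runAndReport(main: () => Promise<void>): Promise<string | undefined> {
    try {
        await main();
        return undefined;
    } catch (err) {
        process.exitCode = 1; // GitHub marks the step as failed on non-zero exit
        return (err as Error).message;
    }
}

// Usage: runAndReport(() => run(DOWNLOAD_URL, EXECUTABLE, coverageCommand));
```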
<h3 id="publishingtotheactionsmarketplace">Publishing to the Actions Marketplace</h3>
<p>To <a href="https://github.com/actions/toolkit/blob/master/docs/javascript-action.md?ref=httgp.com#publish-a-v1-release-action">publish an Action</a>, there are a few manual steps &#x2014;</p>
<ol>
<li>Check in your built files (if any).</li>
<li>Check in your <code>node_modules</code> (&#x1F6A9;if there are native modules in your dependency tree, you&#x2019;d be better off writing a Docker-based action).</li>
<li>Remove development dependencies.</li>
<li>Version them via release branches or better yet, tags.</li>
</ol>
<p>To make these steps easier, I&#x2019;ve written a simple bash script &#x2014;</p>
<h5 id="releasesh"><code>release.sh</code></h5>
<pre><code class="language-bash">#!/bin/bash

set -e

# Check if we&apos;re on master first.
git_branch=$(git rev-parse --abbrev-ref HEAD)
if [ &quot;$git_branch&quot; == &quot;master&quot; ]; then
    echo &quot;Cannot release from &apos;master&apos; branch. Please checkout to a release branch!&quot;
    echo &quot;Example: git checkout -b v1-release&quot;
    exit 1
fi

# Install dependencies and build &amp; test.
npm install
npm test
npm run build

# Build &amp; tests successful. Now keep only production deps.
npm prune --production

# Force add built files and deps.
git add --force lib/ node_modules/
git commit -a -m &quot;Publishing $git_branch&quot;
git push -u origin $git_branch

# Set up release tag.
read -p &quot;Enter tag (example: v1.0.0) &quot; git_tag
git push origin &quot;:refs/tags/$git_tag&quot;
git tag -fa &quot;$git_tag&quot; -m &quot;Release $git_tag&quot;
git push -u origin $git_tag
git push --tags

echo &quot;Done!&quot;
git_repo=&quot;$(git config --get remote.origin.url | cut -d &apos;:&apos; -f2 | sed &quot;s/.git//&quot;)&quot;
echo &quot;You can now use this action with $git_repo@$git_tag&quot;
</code></pre>
<p>It automates all of these steps with only a prompt for the release tag version. Once run, you will see that there&apos;s a new release in the repository under the Releases tab. When you edit it, you&apos;re presented with the option to publish your action to the <a href="https://github.com/marketplace?type=actions&amp;ref=httgp.com">Actions Marketplace</a> &#x2014;</p>
<div class="flex-centered">
    <figure class="figure">
        <img src="https://httgp.com/content/images/2019/08/Screenshot-2019-08-18-at-5.19.39-PM.png" alt="Publish release to GitHub Actions Marketplace" width="781" height="667">
    </figure>
</div>
<h3 id="usingtheaction">Using the Action</h3>
<p>After the action is published, users can start using it like this &#x2014;</p>
<h5 id="yourownworkflowyml"><code>your-own-workflow.yml</code></h5>
<pre><code class="language-yaml">steps:
- name: Test &amp; publish code coverage
  uses: paambaati/codeclimate-action@v2.1.0
  env:
    CC_TEST_REPORTER_ID: &lt;code_climate_reporter_id&gt;
  with:
    coverageCommand: npm run coverage
</code></pre>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Configuring nyc, tape and ts-node]]></title><description><![CDATA[How to correctly configure nyc, tape and ts-node for fast and pain-free testing of your TypeScript codebase.]]></description><link>https://httgp.com/configuring-nyc-tape-and-typescript/</link><guid isPermaLink="false">5d4bedc2544b6f05a9134d23</guid><category><![CDATA[TypeScript]]></category><category><![CDATA[Code]]></category><dc:creator><![CDATA[GP]]></dc:creator><pubDate>Thu, 08 Aug 2019 10:04:21 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>I&apos;ve been writing a lot of TypeScript in the last few years, and have come to love it for writing my Node.js projects.</p>
<h3 id="myworkflow">My workflow</h3>
<p>For tests, I religiously use <a href="https://github.com/substack/tape?ref=httgp.com"><code>tape</code></a> because unlike <a href="https://mochajs.org/?ref=httgp.com"><code>mocha</code></a>, it does not need a test-runner &#x2014; <code>tape</code> tests can be run as regular Node modules. <code>tape</code> also doesn&apos;t pollute <code>global</code> with &quot;magic&quot; functions, which <code>mocha</code> does with abandon.</p>
<p><a href="https://github.com/istanbuljs/nyc?ref=httgp.com"><code>nyc</code></a> is <em>pretty much</em> the only choice for code coverage in Node-land. It is a CLI wrapper around <code>istanbuljs</code> and is regularly maintained.</p>
<p>I prefer to use <a href="https://github.com/TypeStrong/ts-node?ref=httgp.com"><code>ts-node</code></a> for tests as it lets me rapidly test without having to run <code>tsc</code> every time to transpile tests to JS.</p>
<h3 id="theproblem">The problem</h3>
<p>One of the biggest pain points in using this combination is correctly configuring them.</p>
<p>Most folks who use these same tools run into weird coverage issues (inconsistent results across runs, 0% coverage, runtime errors, etc.).</p>
<p>A Google search lands you in <a href="https://github.com/istanbuljs/nyc/issues/497?ref=httgp.com">istanbuljs/nyc issue #497</a>, which has conflicting information. Not to mention the officially recommended config preset <code>nyc-config-typescript</code> had an <a href="https://github.com/istanbuljs/nyc/issues/1148?ref=httgp.com">issue that was only <em>just</em> fixed</a>.</p>
<h3 id="thesolution">The solution</h3>
<p>After a lot of trial and error, here&apos;s what works best.</p>
<ol>
<li>
<p><code>nyc</code> config &#x2014; I&apos;d recommend saving this in <code>.nycrc.json</code> in your project&apos;s root directory. Saving it as a <code>.json</code> file gives you better syntax highlighting in the editor of your choice.</p>
<pre><code class="language-json">{
    &quot;extension&quot;: [
        &quot;.ts&quot;
    ],
    &quot;require&quot;: [
        &quot;ts-node/register/transpile-only&quot;
    ],
    &quot;exclude&quot;: [
        &quot;**/*.d.ts&quot;,
        &quot;coverage/&quot;,
        &quot;test/&quot;
    ],
    &quot;reporter&quot;: [
        &quot;text&quot;,
        &quot;lcov&quot;
    ],
    &quot;cache&quot;: false
}

</code></pre>
</li>
<li>
<p><code>npm</code> scripts &#x2014;</p>
<pre><code class="language-json">{
    &quot;scripts&quot;: {
        &quot;test&quot;: &quot;tape -r ts-node/register/transpile-only test/**/*.test.ts test/*.test.ts&quot;,
        &quot;coverage&quot;: &quot;nyc tape -r ts-node/register/transpile-only test/**/*.test.ts test/*.test.ts&quot;
    }
}
</code></pre>
</li>
</ol>
<p>You can replace <code>ts-node/register/transpile-only</code> with <code>ts-node/register</code> if you run into unexpected issues, but in most cases it works and is ~50% faster in my experience.</p>
<p>Happy testing! &#x1F973;</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[How to interview a Senior Engineer]]></title><description><![CDATA[How to best interview senior engineers for your team or organization — built from personal experiences, this article talks about how to interview talent, and how not to.]]></description><link>https://httgp.com/how-to-interview-a-senior-engineer/</link><guid isPermaLink="false">5d3ff61f544b6f05a9134d18</guid><category><![CDATA[Lessons]]></category><dc:creator><![CDATA[GP]]></dc:creator><pubDate>Tue, 30 Jul 2019 08:06:17 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>For the past few years, I&apos;ve been interviewing a lot of engineers and helping improve and streamline the hiring process. I&apos;ve also had multiple job interviews myself across the world, and I wanted to just write about what, to me, is a good way to hire great talent.</p>
<h3 id="writeathoughtfuljoblisting">Write a thoughtful job listing</h3>
<p>First impressions are important, so here&apos;s how you nail a great job listing.</p>
<h4 id="includerolesresponsibilities">Include roles &amp; responsibilities</h4>
<p>The first thing most candidates look for is the role and what it offers; it helps to emphasize the keywords (for example, &quot;senior frontend lead&quot; or &quot;principal software engineer&quot;). Also helpful is spelling out the role&apos;s responsibilities: working with product teams, interfacing with the CTO, contributing to the organization&apos;s OSS projects, etc.</p>
<h4 id="includebenefitsperks">Include benefits &amp; perks</h4>
<p>This is a no-brainer. Include team activities &amp; outings (pictures too if you can), food/catering options, yoga/gym memberships, conference tickets and discounts that employees can get.</p>
<p>Equally important are the work-from-home policy, maternity &amp; paternity leave policy and vacation policy. Keep in mind that <a href="https://triplebyte.com/blog/want-hire-best-programmers-offer-growth?ref=httgp.com">senior engineers tend to prefer these more</a> than the usual perks like your swanky office space or your office pet.</p>
<h4 id="includethesalaryrange">&#x1F4B0; Include the salary range</h4>
<p>You&apos;re the one offering the job, and you have a budget for it, so be upfront about the salary range. I&apos;ve run into a few interviews where the recruiter would insist I quote a number first &#x2014; I politely declined, because it shows that they&apos;re trying to lowball me.</p>
<h4 id="highlightvisasponsorshipandrelocationassistance">Highlight visa sponsorship and relocation assistance</h4>
<p>If you&apos;re building a diverse team, you&apos;re bound to have candidates apply from different parts of the world. It is immensely helpful when a job listing includes searchable keywords like <code>visa</code>, <code>sponsor</code> and <code>relocation</code>.</p>
<h5 id="goodexamples">Good examples</h5>
<ol>
<li>A lot of <a href="https://www.thetileapp.com/en-us/tile-careers?ref=httgp.com">Tile</a> listings say what the candidate will be doing in 1, 3 &amp; 6 months on the job.</li>
<li><a href="https://monzo.com/careers/?ref=httgp.com">Monzo</a> explains all the steps in its interview process.</li>
</ol>
<h3 id="beniceandsetexpectations">Be nice and set expectations</h3>
<p>Everyone involved in the interview process, from HR (or the People department, because it is 2019) to the engineers, has to be friendly &amp; empathetic to candidates.</p>
<h4 id="bewarmandwelcomingateverystep">Be warm and welcoming at every step</h4>
<p>Once the candidate&apos;s resume has been screened and you&apos;re initiating the interview process, keep a friendly tone in all communications &#x2014; check that it is a good time before every call, let them know they can ask anyone questions at any time (and that you&apos;d respond), and tell them that it is okay to reschedule whenever something comes up.</p>
<p>When I interviewed for <a href="https://indix.com/?ref=httgp.com">Indix</a>, they sent me a take-home assignment right on the night of my birthday. I was just getting off a flight to go see <a href="https://www.instagram.com/p/BNAKOPZgpLE/?ref=httgp.com">Coldplay live</a> that night, so I called HR and told them that I couldn&apos;t work on it for the next 2 days. They were very understanding and let me take additional time, and I went on to join the Indix family.</p>
<h4 id="letthecandidateknowwhenyoullwritethemnext">Let the candidate know when you&apos;ll write them next</h4>
<p>Make sure the interviewer knows when the hiring coordinator can get back to the candidate, because there&apos;s going to be a &quot;do you have any questions for me?&quot; at the end of most rounds and the candidate can ask &quot;when do I hear back about the next step?&quot;. Make sure you have an answer for that.</p>
<p>Explaining the next steps along with timelines can help reduce the candidate&apos;s anxiety a lot. They&apos;re probably at a regular day-job, interviewing at other places too and really hoping they get this job. <strong>If they get rejected, still write back and let them know so they get closure and can move on</strong>.</p>
<h4 id="givefeedback">Give feedback</h4>
<p>They&apos;ve spent a good amount of time for you (as have you), so it is only fair that you explain why they got rejected. Any good engineer would love to hear constructive feedback to learn from their mistakes and fix their flaws, so you&apos;d be doing them a great service by providing feedback from their interviewers. <strong>If you don&apos;t have good feedback to give, you&apos;re doing it wrong.</strong></p>
<p>This also creates goodwill and they can still recommend other engineers to apply for jobs at your organization.</p>
<h4 id="setexpectationsforeachround">Set expectations for each round</h4>
<p>Let the candidate know what they&apos;ll need for each round (for example, a computer for screen-sharing). Share the agenda for each meeting &#x2014; what will be discussed, who will be in the meeting, how long the meeting will last, etc.</p>
<h3 id="maketheinterviewchallengingyetfun">Make the interview challenging yet fun</h3>
<p>The technical interview is the most important of the interview rounds, and also the most difficult to get right.</p>
<h4 id="avoidhackerrank">Avoid HackerRank</h4>
<p>Try to avoid a standardized test tool like HackerRank (at least for senior roles) &#x2014; time-bound tests are stressful, and real-world problem-solving is wildly different; on the job, engineers have time, the experience of their teammates and Google/StackOverflow to draw from.</p>
<p>Prefer submissions via GitHub instead &#x2014; there are more signals to glean: their <code>git</code> history &amp; commit discipline, documentation, CI/CD integration, etc. It is also easier for interviewers to quickly clone it and try it out themselves.</p>
<h4 id="givethemawelldesignedassignment">Give them a well-designed assignment</h4>
<p>Design a take-home test that is mapped to a real-world problem or something the candidate will actually be solving at the job. For example, you can build out an app with missing pieces and ask them to fill them in.</p>
<ol>
<li>Set expectations upfront about what you expect (for example, unit tests).</li>
<li>Make sure engineers at multiple levels (at least 1 senior and 1 junior) review them. This is to reduce bias and make sure the reviewing is balanced.</li>
<li>Walk through the code with the candidate, and create opportunities for refactoring or adding small features - this can help interviewers understand the candidate&apos;s thought process (and as a bonus, eliminate candidates that have plagiarized code).</li>
</ol>
<h4 id="faceoff">Face/off</h4>
<p>When the candidate comes in for a face-to-face interview (or perhaps the post-assignment rounds are conducted via teleconference if the candidate is remote), make sure you set them at ease. A small amount of small-talk (about the weather or local news, perhaps) can help make the candidate feel comfortable.</p>
<ol>
<li><strong>Do not white-board!</strong> The take-home assignment&apos;s solution has already proven their problem-solving abilities. Do not spring an algorithm or any other random problem at the candidate. Instead, pose a systems design or an architecture problem and ask them to solve it, perhaps from the domain/field that the candidate will work in. Set expectations upfront and gently nudge them towards the solution. If they can&apos;t get to it, explain the solution anyway and move on to the next problem.</li>
<li>Play to their strengths; dive deep into their strengths and assess their level of technical proficiency and self-awareness.</li>
<li>Culture fit - look for flexibility, their ability to introspect and their overall character traits.</li>
<li>Remember that the goal of an interview is to understand the candidate&apos;s problem-solving abilities, and not to paint them into a corner by asking really hard questions.</li>
</ol>
<h3 id="treatthemlikeanemployee">Treat them like an employee</h3>
<ol>
<li><strong>Please do not ghost them</strong> - they&apos;ve put in time and effort in the interview process, as have you. Make sure you pass on feedback if the candidate asks for it.</li>
<li>If they get rejected, let them know when they can apply next or offer similar positions in other teams.</li>
</ol>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[No, I need that headphone jack!]]></title><description><![CDATA[<!--kg-card-begin: markdown--><div class="flex-centered">
    <img data-src="/content/images/2017/11/headphone-jack-love.svg" alt="headphone-jack-love" class="lazyload blur-up" height="180px" width="320px" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAA4AAAAICAYAAADJEc7MAAAACXBIWXMAAAsSAAALEgHS3X78AAABhUlEQVQYlW2RPUhbYRSG33PuzU1jUkULLYW2UwtCBRVRSGLBWGsJwdogKojgH+ggiDrp6ODm5uTUpYiD1MWCFBSn0kEFCxaq8YdA8FpaEn/CTXLzfadTxMFne97pgRfyq5/+bnQ+wj3I8hTJ6Duj5KmOOkuG3hAAMBzl8wWsJQCQowGWxRECALs3VHW+9mPlJpsrnveEZg7fv/6Ud4v5hJ3+nPpQ7zWPL16wQU45IPxxtuOZOPIczeM/WfYixNRz7SqtlZonZioopXwm9xVEFsz131N+y0J4ZPig5dTOBzx4EH/qT636SZy0JjCBAMhtvgBaUGDH0cikFbJZDRBpBRReGuphWYWzCcJauccgAAqA9jAZLmi5yjIT5r90MpfJ2LvKDSe2vn5LAliXsXampX1tdwenr131xMscygEoCrbLLHOi8stOHi3RmDcYeTsHAIMTk1RK+tMbZgC4Gmh9nIw3fj+O1W4cRF5VAsBJtIbQHu/iYGtb9X13lLjsbgqcdTZYd7f/AlCZgRD6emcAAAAASUVORK5CYII=">
</div>
<p>I&apos;ve <a href="https://www.facebook.com/ganesh.prasannah/posts/10155059500339632">ranted about this before</a>, but I wanted to take another long and hard look at this whole situation &#x2014; phone manufacturers removing the headphone jack.</p>
<p>I shall now proceed to talk about how fucking stupid this is!</p>
<h4 id="_apple">&#xCA0;_&#xCA0; Apple</h4>
<p>Granted, Apple isn&apos;t the first</p>]]></description><link>https://httgp.com/no-i-need-that-headphone-jack/</link><guid isPermaLink="false">5a0eccd29bc5e77223472d98</guid><category><![CDATA[Technology]]></category><category><![CDATA[Apple]]></category><category><![CDATA[Android]]></category><dc:creator><![CDATA[GP]]></dc:creator><pubDate>Sat, 18 Nov 2017 07:18:53 GMT</pubDate><media:content url="https://httgp.com/content/images/2019/08/headphone-jack-love.svg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><div class="flex-centered">
    <img data-src="/content/images/2017/11/headphone-jack-love.svg" alt="No, I need that headphone jack!" class="lazyload blur-up" height="180px" width="320px" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAA4AAAAICAYAAADJEc7MAAAACXBIWXMAAAsSAAALEgHS3X78AAABhUlEQVQYlW2RPUhbYRSG33PuzU1jUkULLYW2UwtCBRVRSGLBWGsJwdogKojgH+ggiDrp6ODm5uTUpYiD1MWCFBSn0kEFCxaq8YdA8FpaEn/CTXLzfadTxMFne97pgRfyq5/+bnQ+wj3I8hTJ6Duj5KmOOkuG3hAAMBzl8wWsJQCQowGWxRECALs3VHW+9mPlJpsrnveEZg7fv/6Ud4v5hJ3+nPpQ7zWPL16wQU45IPxxtuOZOPIczeM/WfYixNRz7SqtlZonZioopXwm9xVEFsz131N+y0J4ZPig5dTOBzx4EH/qT636SZy0JjCBAMhtvgBaUGDH0cikFbJZDRBpBRReGuphWYWzCcJauccgAAqA9jAZLmi5yjIT5r90MpfJ2LvKDSe2vn5LAliXsXampX1tdwenr131xMscygEoCrbLLHOi8stOHi3RmDcYeTsHAIMTk1RK+tMbZgC4Gmh9nIw3fj+O1W4cRF5VAsBJtIbQHu/iYGtb9X13lLjsbgqcdTZYd7f/AlCZgRD6emcAAAAASUVORK5CYII=">
</div>
<img src="https://httgp.com/content/images/2019/08/headphone-jack-love.svg" alt="No, I need that headphone jack!"><p>I&apos;ve <a href="https://www.facebook.com/ganesh.prasannah/posts/10155059500339632">ranted about this before</a>, but I wanted to take another long and hard look at this whole situation &#x2014; phone manufacturers removing the headphone jack.</p>
<p>I shall now proceed to talk about how fucking stupid this is!</p>
<h4 id="_apple">&#xCA0;_&#xCA0; Apple</h4>
<p>Granted, Apple isn&apos;t the first major <abbr title="Original Equipment Manufacturer">OEM</abbr> to remove the headphone jack &#x2014; that dubious distinction actually belongs to Oppo. Back in 2012, Oppo launched the <a href="http://t.old.oppo.com/index.php?q=mobile%2Fproduct%2Fnewtpl&amp;name=finder&amp;tpl=index&amp;ref=httgp.com">Finder</a> without a headphone jack and bundled micro-USB earphones with the phone; they discontinued it after a year on the market, probably realizing what a colossal mistake that was. However, Chinese OEMs did not give up on the idea and continued experimenting with removing it; most notably Oppo (again, <em>smh</em>), Vivo and LeEco. They were all just that - experiments, until Apple decided to <a href="https://www.gsmarena.com/apple_iphone_7-review-1497.php?ref=httgp.com">ditch the jack in the iPhone 7</a>. And they <a href="https://techcrunch.com/2016/09/07/courage/?ref=httgp.com">had the balls to call it <em>&quot;courage&quot;</em></a>.</p>
<p>The problem with Apple making any big decision like this is that they have humongous influence and little <em>real</em> competition. If an Android OEM removed the headphone jack, consumers could pick another. Consumers that want the Apple experience have no choice but to accept whatever Apple gives them.</p>
<h4 id="androidseeandroiddo">Android see, Android do</h4>
<p>At the original Pixel&apos;s launch event, <a href="https://www.youtube.com/watch?v=Rykmwn0SMWU&amp;feature=youtu.be&amp;t=43s&amp;ref=httgp.com">Google took a not-so-subtle jab at Apple</a> for ditching the headphone jack. This was actually on their marketing material &#x2014;</p>
<blockquote>
<p>3.5mm headphone jack satisfyingly not new</p>
</blockquote>
<p>Only a year later, Google pulled an Apple and released the Pixel 2 without a headphone jack. Why? For the glory of Satan, of course. Or probably because everyone follows in Apple&apos;s footsteps, for good or bad.</p>
<h4 id="whytheyredoingthistous">Why they&apos;re doing this to us</h4>
<p>With increasing unrest amongst the masses, various OEMs have explained why they removed the headphone jack.</p>
<ol>
<li>Google product chief Mario Queiroz <a href="https://techcrunch.com/2017/10/04/google-dropped-the-pixels-headphone-jack-to-lay-the-groundwork-for-a-bezel-free-phone/?ref=httgp.com">told TechCrunch</a> &#x2014;</li>
</ol>
<blockquote>
<p><em>&#x201C;The primary reason [for dropping the jack] is establishing a mechanical design path for the future&#x201D;</em>.</p>
</blockquote>
<ol start="2">
<li>Apple Senior VP of hardware Engineering Dan Riccio <a href="https://www.buzzfeed.com/johnpaczkowski/inside-iphone-7-why-apple-killed-the-headphone-jack?ref=httgp.com">told BuzzFeed</a> &#x2014;</li>
</ol>
<blockquote>
<p><em>&#x201C;It was holding us back from a number of things we wanted to put into the iPhone... It was fighting for space with camera technologies and processors and battery life. And frankly, when there&#x2019;s a better, modern solution available, it&#x2019;s crazy to keep it around&#x201D;</em>.</p>
</blockquote>
<ol start="3">
<li>During the same interview, Apple VP of iOS, iPhone &amp; iPad marketing Greg Joswiak said &#x2014;</li>
</ol>
<blockquote>
<p><em>&#x201C;The audio connector is more than 100 years old... It had its last big innovation about 50 years ago. You know what that was? They made it smaller. It hasn&#x2019;t been touched since then. It&#x2019;s a dinosaur. It&#x2019;s time to move on&#x201D;</em>.</p>
</blockquote>
<h4 id="whythisisanticonsumer">Why this is anti-consumer</h4>
<ol>
<li>On wanting to make thinner phones.</li>
</ol>
<p>Unless they are <a href="https://www.essential.com/?ref=httgp.com#materials">making a phone with titanium</a> or polycarbonate, OEMs must stop making thinner phones. And don&apos;t even get me started on devices with glass backs. How often do you run into a person who says <em>&quot;Man, I wish my phone was thinner&quot;</em>? If you are one of those mythical creatures, fight me 1v1. Also, look at the Galaxy S8 - it packed in more technology <em>AND</em> a headphone jack at almost the same thickness as the iPhone 7.</p>
<ol start="2">
<li>On calling the technology &quot;old&quot;.</li>
</ol>
<p>It has lasted 100 years probably because it works. And it&apos;s ubiquitous for that very reason - you&apos;ll find it on computers, laptops, 99% of current mobile phones, car audio systems, ATMs, <a href="https://squareup.com/reader?ref=httgp.com">credit card readers</a> and a <a href="https://techcrunch.com/2016/09/07/applejack/?ref=httgp.com">whole range of accessories</a>!</p>
<ol start="3">
<li>On Apple calling it &quot;courage&quot;.</li>
</ol>
<p>Apple has historically been at the forefront of ditching obsolete technology - removing floppy drives, CD/DVD drives, etc. It made sense to remove those because better alternatives existed. The headphone jack, on the other hand, <strong>does not</strong> have a better alternative! So no, Apple, this isn&apos;t courage; your opinion is bad and you should feel bad.</p>
<ol start="4">
<li>On wireless being the future.</li>
</ol>
<p>Bluetooth headphones come with a whole bunch of problems - worse audio quality due to compression, stuttering/skipping, pairing woes (not to mention proprietary solutions like Google&apos;s <a href="https://android-developers.googleblog.com/2017/10/announcing-fast-pair-effortless.html?ref=httgp.com">Fast Pair</a> only worsening vendor lock-in) and limited battery life. Remember when headphones didn&apos;t need charging, or even a battery?</p>
<ol start="5">
<li>On living with the new USB-C to AUX dongle.</li>
</ol>
<p>I&apos;d have to carry a dongle in my pocket now? The dongle has a DAC too, because <em>fuck analog</em>, right? It is just one additional point of failure. What happens when you lose the dongle? What happens when you want to charge your phone and listen to music at the same time?</p>
<h4 id="lightattheendofthetunnel">Light at the end of the tunnel</h4>
<p>A lot of people have talked about the problems with this transition, but there might be <em>some</em> positives to come out of this.</p>
<ol>
<li>Bluetooth 5.0 and <a href="https://www.aptx.com/aptx-hd?ref=httgp.com">aptX HD</a> will hopefully improve the shitty audio quality that Bluetooth offers today.</li>
<li>Better waterproofing - the iPhone 7+ and Pixel 2 are both <a href="http://www.resourcesupplyllc.com/PDFs/WhatDoesIP67Mean.pdf?ref=httgp.com">IP67</a> rated, admittedly because they got rid of the headphone jack. But then the Galaxy S8 also has an IP67 rating <em>and</em> a headphone jack, so &#xAF;\_(&#x30C4;)_/&#xAF;.</li>
<li>More room for a bigger battery!</li>
</ol>
<p>While I&apos;m cautiously optimistic about this change, I will hold on dearly to my headphone jack-equipped phone and all of my headphones. Apple and Google can pry them away from my cold dead hands.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[LQIP Battle Royale]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>
    <div class="toast toast-success">
        <h6><i class="icon icon-time"></i> Update #1 - Nov 21, 2017</h6>
        The post has been updated with changes to <a href="#primitive-issue">primitive&apos;s usage notes</a>.
    </div>
</p>
<p><abbr title="Low Quality Image Placeholders">LQIPs</abbr> are an interesting technique to lower page load times, and are effectively used in websites like Google Images, Medium, Pinterest, Facebook &amp; Quartz. In this post, I will discuss my</p>]]></description><link>https://httgp.com/lqip-battle-royale/</link><guid isPermaLink="false">5a0d5eb90a3b75487f5e596e</guid><category><![CDATA[Web]]></category><category><![CDATA[Technology]]></category><dc:creator><![CDATA[GP]]></dc:creator><pubDate>Fri, 17 Nov 2017 11:48:56 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>
    <div class="toast toast-success">
        <h6><i class="icon icon-time"></i> Update #1 - Nov 21, 2017</h6>
        The post has been updated with changes to <a href="#primitive-issue">primitive&apos;s usage notes</a>.
    </div>
</p>
<p><abbr title="Low Quality Image Placeholders">LQIPs</abbr> are an interesting technique to lower page load times, and are effectively used in websites like Google Images, Medium, Pinterest, Facebook &amp; Quartz. In this post, I will discuss my experiences with some of the techniques, what worked best for me and how I implemented them on this website.</p>
<h4 id="lqip101">LQIP 101</h4>
<p>Low Quality Image Placeholders are just that &#x2014; pre-generated low-quality images (read: <em>much</em> smaller size) that can be displayed as placeholders while the page loads. Some JavaScript then lazily loads the original full-quality image and replaces the placeholder with it.</p>
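<p>The swap itself needs very little code. Here is a minimal vanilla-JS sketch (the <code>upgrade</code> helper is an illustrative name, not from any library) &#x2014;</p>

```javascript
// Minimal LQIP swap sketch: each img element ships with the tiny placeholder
// in `src` and the full-quality URL in `data-src`.
function upgrade(img) {
  var full = new Image();
  full.onload = function () {
    // Swap only once the real image has finished downloading.
    img.src = img.dataset.src;
  };
  full.src = img.dataset.src;
}

// Browser-only: upgrade every placeholder on the page.
if (typeof document !== 'undefined') {
  document.querySelectorAll('img[data-src]').forEach(upgrade);
}
```

<p>Libraries like lazysizes (covered below) add the missing piece: upgrading images only as they scroll into view.</p>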
<p>LQIPs were <a href="https://www.guypo.com/introducing-lqip-low-quality-image-placeholders/?ref=httgp.com">introduced in 2013</a>, and have since been heavily adopted by many websites and in many interesting ways. <a href="https://medium.freecodecamp.org/using-svg-as-placeholders-more-image-loading-techniques-bed1b810ab2c?ref=httgp.com">&quot;How to use SVG as a Placeholder, and Other Image Loading Techniques&quot;</a> by Spotify&apos;s Jos&#xE9; P&#xE9;rez talks about the various techniques out there.</p>
<h4 id="lqipshootout">LQIP Shootout</h4>
<p>If you&apos;re a blogger on Medium, you already have LQIP out of the box. However, if you&apos;re running anything else, you&apos;re <em>probably</em> gonna have to implement your own solution.</p>
<p>I primarily tested 2 libraries &#x2014;</p>
<ol>
<li><a href="https://github.com/fogleman/primitive?ref=httgp.com">primitive</a></li>
<li><a href="https://github.com/zouhir/lqip?ref=httgp.com">lqip</a></li>
</ol>
<p>All benchmarks were run on a mid-2014 13&quot; MacBook with a 2.6 GHz Intel Core i5 processor and 8 GB of DDR3 RAM.</p>
<p>Drag the slider (the little <kbd>&#x205E;</kbd> sign) on the left corner to swipe &amp; compare images.</p>
<h5 id="primitive">primitive</h5>
<div class="comparison-slider">
  <figure class="comparison-before">
    <img src="https://httgp.com/content/images/2017/11/me.jpg">
    <div class="comparison-label">Original (254 KB)</div>
  </figure>
  <figure class="comparison-after">
    <img src="https://httgp.com/content/images/2017/11/me-primitive-100.svg">
    <div class="comparison-label">primitive, 100 shapes (16 KB)</div>
    <textarea class="comparison-resizer" readonly></textarea>
  </figure>
</div>
<div class="comparison-slider">
  <figure class="comparison-before">
    <img src="https://httgp.com/content/images/2017/11/me.jpg">
    <div class="comparison-label">Original (254 KB)</div>
  </figure>
  <figure class="comparison-after">
    <img src="https://httgp.com/content/images/2017/11/me-primitive-25.svg">
    <div class="comparison-label">primitive, 25 shapes (4 KB)</div>
    <textarea class="comparison-resizer" readonly></textarea>
  </figure>
</div>
<br>
<p>primitive took around 9 seconds to generate 25 shapes, and a whopping <strong>26</strong> seconds to generate 100 shapes.</p>
<p>A few things to keep in mind when using primitive are &#x2014;</p>
<ol>
<li>primitive can give better results for geometric images than photos and photorealistic images, as it has to write fewer SVG paths.</li>
<li><span id="primitive-issue"></span><s><em>Currently</em>, primitive has trouble generating SVGs when the source image has transparencies &#x2014; see <a href="https://github.com/fogleman/primitive/issues/54?ref=httgp.com">#54</a>. This might not be a deal-breaker if you&apos;re not using transparent images, but <a href="https://httgp.com/adding-search-to-ghost/#buildingawebworkerforfusejs">I love making flowcharts</a> which tend to have large transparent areas to keep PNG sizes small.</s> I should&apos;ve RTFM! I was using the command-line wrong, as pointed out by another user <a href="https://github.com/fogleman/primitive/issues/54?ref=httgp.com#issuecomment-346067124">here</a>.</li>
</ol>
<h5 id="lqip">lqip</h5>
<div class="comparison-slider">
  <figure class="comparison-before">
    <img id="image-original" src="https://httgp.com/content/images/2017/11/me.jpg">
    <div class="comparison-label">Original (254 KB)</div>
  </figure>
  <figure class="comparison-after">
    <img id="image-lqip" src="data:image/jpeg;base64,/9j/2wBDAAYEBQYFBAYGBQYHBwYIChAKCgkJChQODwwQFxQYGBcUFhYaHSUfGhsjHBYWICwgIyYnKSopGR8tMC0oMCUoKSj/2wBDAQcHBwoIChMKChMoGhYaKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCj/wAARCAAIAA4DASIAAhEBAxEB/8QAFgABAQEAAAAAAAAAAAAAAAAAAAQG/8QAIRAAAQQCAgIDAAAAAAAAAAAAAQIDBBEABQYxEhMhQZH/xAAVAQEBAAAAAAAAAAAAAAAAAAADBf/EABoRAAICAwAAAAAAAAAAAAAAAAERAAMCBBX/2gAMAwEAAhEDEQA/AK9fI43EnJTEktz2iy4tfoRZsVVE0K7++8y215hoy8QzpZCwD8+bgTX4DjGU+jeScnDGlUE5/9k=" width="1080" height="810">
    <div class="comparison-label">lqip (475 B)</div>
    <textarea class="comparison-resizer" readonly></textarea>
  </figure>
</div>
<br>
<p>lqip took only <strong>0.4</strong> seconds to generate the output! It is incredibly fast as all it does is generate a <code>14x14 px</code> image. You might&apos;ve noticed that the output is pretty close to Medium&apos;s implementation.</p>
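<p>For completeness, here is how such a placeholder can be generated with the lqip npm package. The <code>lqip.base64</code> API below is taken from the project&apos;s README &#x2014; treat it as an assumption and check the current docs &#x2014;</p>

```javascript
// Sketch: generate a base64 LQIP data URI with the lqip package (assumed API).
// The require is guarded so the snippet degrades gracefully if the package
// isn't installed.
var lqip = null;
try {
  lqip = require('lqip');
} catch (e) {
  // Package not installed; nothing to do.
}

if (lqip) {
  lqip.base64('content/images/2017/11/me.jpg').then(function (dataURI) {
    // dataURI is a tiny base64-encoded JPEG, ready for an img src attribute.
    console.log(dataURI);
  });
}
```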
<h4 id="javascriptmagic">Javascript magic</h4>
<p>Whatever LQIP implementation you choose to use, you&apos;ll need some JS on the frontend that will let you load LQIPs and swap them out with the original image when the user scrolls to the image&apos;s location.</p>
<p><a href="https://github.com/aFarkas/lazysizes?ref=httgp.com">lazysizes</a> is a lightweight JS library that lets you lazy-load images, and demos a <a href="https://github.com/aFarkas/lazysizes?ref=httgp.com#lqipblurry-image-placeholderblur-up-image-technique">specific example for LQIPs</a>. With the script injected in your page, you just have to &#x2014;</p>
<ol>
<li>Add <code>class=&quot;lazyload&quot;</code> to your <code>&lt;img&gt;</code> tag.</li>
<li>Rename the <code>src</code> attribute to <code>data-src</code>.</li>
<li>Set the <code>src</code> attribute to your LQIP URL or data URI.</li>
</ol>
<p><strong>Example</strong></p>
<pre><code class="language-html">&lt;img data-src=&quot;/path/to/your/image.jpg&quot; class=&quot;lazyload&quot; src=&quot;data:image/jpeg;base64,/9j/2wBDAAYEBQYFBAYGBQYHBwYIChAKCgkJChQODwwQFxQYGBcUFhYaHSUfGhsjHBYWICwgIyYnKSopGR8tMC0oMCUoKSj/2wBDAQcHBwoIChMKChMoGhYaKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCj/wAARCAALAA4DASIAAhEBAxEB/8QAFgABAQEAAAAAAAAAAAAAAAAABgQF/8QAJhAAAQMDAgUFAAAAAAAAAAAAAgEDBAAFERMhBhIiMTJBQoGhsf/EABUBAQEAAAAAAAAAAAAAAAAAAAIF/8QAGxEAAgEFAAAAAAAAAAAAAAAAAQIAAwURFSH/2gAMAwEAAhEDEQA/AK4VlslrnozKkxTEmnDJWz1CFRxtyjlfXPxRq7Xfg8XcCst3C+xlE/VSh1uecjN3RWDJtWmzEML4pt2rClSHdZev6qiLzVYlhDrlUjJ7P//Z&quot;/&gt;
</code></pre>
<h4 id="summary">Summary</h4>
<p>While SVG is a compelling format for image placeholders, a few minor hurdles stop me from fully embracing them - <s><a href="#primitive-issue">primitive&apos;s issue</a> with transparent PNG backgrounds,</s> longer processing times and the overall <em>jagged-ness</em> (which, to me, looks more jarring than artsy/cool). For now, I&apos;m sticking with lqip&apos;s super-small downsized images, as the output size is manyfold smaller while still being comparable to existing solutions like Medium&apos;s.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Adding search capabilities to Ghost]]></title><description><![CDATA[Learn how to implement a lightweight and fast fuzzy text search library, Fuse.js, in Ghost blogs.]]></description><link>https://httgp.com/adding-search-to-ghost/</link><guid isPermaLink="false">5a09692d6b34e9128d9c2f3d</guid><category><![CDATA[Ghost]]></category><category><![CDATA[Javascript]]></category><category><![CDATA[Code]]></category><dc:creator><![CDATA[GP]]></dc:creator><pubDate>Wed, 15 Nov 2017 07:36:44 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p><a href="https://ghost.org/?ref=httgp.com">Ghost</a> is a relatively new blogging platform that I&apos;ve been hacking away at lately, and what powers this blog. It started as a Kickstarter campaign by John O&apos;Nolan (former deputy lead for the WordPress UI team) and got funded in 11 hours. It was publicly released in 2013, and has slowly been adding more features and polish.</p>
<p>I primarily chose Ghost because it is built on top of Node.js (my current favorite language), even though the platform is still lacking quite a number of features, such as search.</p>
<p>In this post, I&apos;ll show you how I added really fast browser-side Full-Text Search (FTS) capabilities to my Ghost installation.</p>
<h4 id="researchingexistingsolutions">Researching existing solutions</h4>
<p>While the Ghost team is <a href="https://github.com/TryGhost/Ghost/issues/5321?ref=httgp.com">debating how to build this feature</a>, I couldn&apos;t wait and decided to hack something together.</p>
<p>A quick Google search turns up a few solutions &#x2014;</p>
<ol>
<li><a href="https://cse.google.com/cse/?ref=httgp.com">Google Custom Search</a></li>
<li><a href="https://github.com/jamalneufeld/ghostHunter?ref=httgp.com">ghostHunter</a></li>
</ol>
<p>I didn&apos;t like Google Custom Search because I wanted the search experience to be seamless (and, well, not look like &#x1F4A9;).</p>
<p>ghostHunter, while seeming like a compelling solution, depends on jQuery and <a href="https://lunrjs.com/?ref=httgp.com">Lunr.js</a>, which together add roughly <code>120 KB</code> to the page. In keeping with the whole philosophy behind my blog&apos;s UI, I wanted something really lightweight.</p>
<h4 id="sayhellotofusejs">Say hello to Fuse.js</h4>
<p><a href="http://fusejs.io/?ref=httgp.com">Fuse.js</a> is a really small (<code>10 KB</code>) <em>fuzzy</em> search library with zero dependencies, and is a perfect fit for a small site like mine.</p>
<p>Using Fuse.js is pretty straightforward &#x2014;</p>
<ol>
<li>Import the library.</li>
<li>Fetch all your search-able data.</li>
<li>Build the search index.</li>
<li>Query the index.</li>
</ol>
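<p>Steps 2&#x2013;4 look roughly like this &#x2014; a sketch that assumes <code>fuse.min.js</code> has already been loaded (exposing the global <code>Fuse</code> constructor), with inlined data standing in for the API fetch &#x2014;</p>

```javascript
// Sketch of the Fuse.js steps above. Assumes fuse.min.js is already loaded,
// exposing a global `Fuse` constructor. The data is inlined here; on a real
// blog it would come from Ghost's posts API.
var posts = [
  { title: 'Adding search capabilities to Ghost', plaintext: 'Fuse.js, Web Workers...' },
  { title: 'LQIP Battle Royale', plaintext: 'primitive, lqip, lazysizes...' }
];

// Build the index, weighting title matches above body matches.
var searchOptions = {
  keys: [
    { name: 'title', weight: 0.7 },
    { name: 'plaintext', weight: 0.3 }
  ]
};

if (typeof Fuse !== 'undefined') {
  var fuse = new Fuse(posts, searchOptions);
  // Query the index; returns an array of matching posts.
  var results = fuse.search('ghost');
}
```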
<p>When implementing it on this blog, I wanted to use <a href="https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Using_web_workers?ref=httgp.com">Web Workers</a> to make sure the heavy JavaScript stuff <a href="https://www.html5rocks.com/en/tutorials/workers/basics/?ref=httgp.com">doesn&apos;t block the UI thread</a> &#x2014;  I&apos;d really recommend reading the linked articles to learn more about Web Workers.</p>
<h4 id="buildingawebworkerforfusejs">Building a Web Worker for Fuse.js</h4>
<p>On page load, a worker is initialized, which fetches all posts data from Ghost&apos;s API and uses Fuse.js to build an index. When the user submits a search, a message is posted to the worker to start a search. The worker receives this, searches on the now-built index, and posts back the results; this message is received by the main page, which finally displays the results.</p>
<div class="flex-centered">
    <figure class="figure">
        <noscript><img src="https://httgp.com/content/images/2017/11/using-web-worker-for-search-in-ghost.png" width="600" height="1000"></noscript>
        <img data-src="/content/images/2017/11/using-web-worker-for-search-in-ghost.png" class="lazyload blur-up" width="600" height="1000" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAA4AAAAXCAYAAAA7kX6CAAAACXBIWXMAAAsSAAALEgHS3X78AAADMUlEQVQ4jX2ST2hcVRTGv3PfnXnvzcybMYmDTQxNayroprbYtQuRrFy30LoSF0LXgogL1+LGiAQKopU4G0VwoYLQuEpUUkIrVSs0mTZp2sxL5jmZeW/u+3PvPa6CMRn6wW91+O45535HMrNob9y7pC2TdCQAgIgAANZaALg7Ozu7hiOSKysr7nPNoDXwPdT9OoqiQKPRAACkaYYkjucBHDfGcYzq2bOoWgYIcF0XzAwioFIps+sG+qgJAKSUEr7nAQQQCMwMADCGkWWgfl8HI43GGJNl2fdHCzsdU/npx/jFft+cefQoD6amygMAePUZuEsdZHJubi4H8PpR4x83b709NeO/NCOEHQ4fXm2320NmxjeLiyc/vXjxawAAM9Nhtre3p1kPeD/vcZoOWSnFB7q/GfHW1sN5CQB3YvinOn3klXV0eh4AWCbn52S/L8ihQgqRPTU+Bt/3cXI6oDT1fpWrq6ulYOv2R1tuGaVhE9Wqheu6li3fqNVr0LpYmpmZ+eXY7ywv/1lRUYej5B/uJTlnWcaHtbOz8/lu+LjMzO4hHDBfFsycMJuhylVaFEWaZVm6vj7I9/ayXKn0Y2YuH+z/3RefCADA++/dez7sMHcjZrb/dWpvpNztFlwUen5vlyvHcux2xbOt1uMfms3S5uUrEzfZMoy14tRptwEYxLG6kxX66faDB4O438fCB2823/mwtSmVcta+vL731tqtv3pX3rikDl68f/u3zyLfRw3eeTCPCcfJ6/U6zr12dTzNzcKoa0IYhtP8BHU64YL8avFv8fIrzRc8lcALCpvmAru7u16u7bukFGTJiRxZTRrjQUJSsOM4NDExdleeO69LlcHGt33fR67G4JcNJicnrTHcNQ5hmKbXT09NtEbkuOxrlbDKFSd5zlrr/40VRdG1qFs0mNk5BIlarcaOVzEOSqZEZIiE0dqYpRs9o5Q19Xp94PtSE5E5BEvHcQBAMGvoogCVy5CyjAsXAriugLWm0tvX4uikIgxDAkBx7FDULVEYgno9TUEdJAQI4OLECRkfOwBmttba34OAUK1KEAFEgDEAkYW1douIeGRuzOw9ATnK8y/cFznoBpKkkAAAAABJRU5ErkJggg==">
    </figure>
</div>
<h4 id="thedirtydetails">The dirty details</h4>
<p>In the Ghost theme&apos;s <code>default.hbs</code> template, this snippet is included inside the body&apos;s <code>&lt;script&gt;</code> block &#x2014;</p>
<pre><code class="language-javascript">// Set up search options for the /posts API.
// You can learn more about the API here - https://api.ghost.org/docs/posts
var searchFilter = {
  limit: &apos;all&apos;,
  include: &apos;tags&apos;,
  formats:[&apos;plaintext&apos;],
  fields: &apos;id,url,title,plaintext,description,tag,featured,published_at&apos;
};
var searchEndpoint = ghost.url.api(&apos;posts&apos;,searchFilter); // `ghost` is globally available.

// Initialize Web Worker.
var indexer = new Worker(&apos;/assets/js/indexer.js&apos;);
// Set up message listener.
indexer.onmessage = function(event) {
    var searchResults = event.data;
    // Implement logic to display search results.
    // ...
}

// Post message to begin fetching posts data and build search index.
indexer.postMessage({searchEndpoint: searchEndpoint});

// On click of the search button, post message to search.
var search = function(searchString) {
    indexer.postMessage({searchString: searchString});
};
</code></pre>
<p>Next up is the Web Worker - <code>indexer.js</code>.</p>
<pre><code class="language-javascript">// Import the Fuse.js library.
// Note that `importScripts` is a native API available only in Web Workers. 
importScripts(&apos;/assets/js/fuse.min.js&apos;);
var fuse;

// Message listener.
self.onmessage = function(event) {
    var searchEndpoint = event.data.searchEndpoint;
    if (searchEndpoint) {
        // Initial request for fetching posts data &amp; building search index.
        var request = new XMLHttpRequest();
        request.open(&apos;GET&apos;, searchEndpoint, true);
        request.onload = function() {
            if (request.status &gt;= 200 &amp;&amp; request.status &lt; 400) {
                var postData = JSON.parse(request.responseText);
                var searchOptions = {
                    // This distribution is entirely customizable.
                    keys: [
                        {name: &apos;title&apos;, weight: 0.3},
                        {name: &apos;plaintext&apos;, weight: 0.2},
                        {name: &apos;description&apos;, weight: 0.2},
                        {name: &apos;link&apos;, weight: 0.1},
                        {name: &apos;tag&apos;, weight: 0.15},
                        {name: &apos;id&apos;, weight: 0.05},
                    ]
                };
                fuse = new Fuse(postData.posts, searchOptions);
            }
        };
        request.send();
    } else if (fuse) {
        // Search request; ignored until the index has been built.
        var searchString = event.data.searchString;
        var searchResults = fuse.search(searchString);
        self.postMessage(searchResults);
    }
}
</code></pre>
<h6 id="featuredetection">Feature detection</h6>
<p>Although <a href="https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Using_web_workers?ref=httgp.com#Browser_compatibility">most modern browsers support Web Workers</a>, it is probably a good idea to detect support for them, so all of the code is wrapped in &#x2014;</p>
<pre><code class="language-javascript">if (window.Worker) {
    // ...
}
</code></pre>
<h4 id="possibleoptimizations">Possible optimizations</h4>
<p>If you&apos;ve noticed, the entire Fuse.js index stays in memory &#x2014; this doesn&apos;t scale well as the blog grows bigger. To better cope with growing content, the index can be cached in <a href="https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API?ref=httgp.com">IndexedDB</a> with a way to bust it (perhaps based on the most recent post&apos;s published timestamp).</p>
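<p>A sketch of what that could look like &#x2014; the store names and the staleness check here are hypothetical, not something this blog currently ships &#x2014;</p>

```javascript
// Hypothetical cache-busting check: the cached index is stale when there is
// no cache yet, or when a post newer than the cached snapshot exists.
function indexIsStale(cachedAt, latestPublishedAt) {
  return !cachedAt || new Date(latestPublishedAt) > new Date(cachedAt);
}

// Browser-only sketch: persist the fetched posts payload in IndexedDB,
// alongside the newest post's published_at timestamp.
function cachePosts(postData, latestPublishedAt) {
  if (typeof indexedDB === 'undefined') return; // no IndexedDB support
  var open = indexedDB.open('search-cache', 1);
  open.onupgradeneeded = function () {
    open.result.createObjectStore('posts');
  };
  open.onsuccess = function () {
    var tx = open.result.transaction('posts', 'readwrite');
    tx.objectStore('posts').put(postData, 'all-posts');
    tx.objectStore('posts').put(latestPublishedAt, 'cached-at');
  };
}
```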
<h4 id="wrapup">Wrap up</h4>
<p>The search feature is actually live on this website right now! Go on and <a href="#search">take it for a spin</a>! As the site grows in size, I&apos;ll be writing more about solving this problem at scale.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>