I was pleasantly surprised earlier this week to discover how easy it is to lazy load images in Lightning.
Recently I watched a TED talk by Tim Ferriss detailing his process for defining your fears. The video (embedded below) is well worth watching, but what struck me about it is how similar it is, albeit at a macro scale, to a project planning process to which I'd recently been introduced: the pre-mortem.
I'm not sure to what I should attribute my never having encountered a project pre-mortem before, as the technique has been discussed since at least 2007. The exercise, though, is brilliant in its simplicity. Early in the project (and the earlier the better), ask the team to imagine the project is done. The site has launched...and it's been a total failure. Then each person presents what they believe killed it*. In the particular project where I saw the technique used, we came up with a gamut of causes: technological and architectural failures, organizational politics, failure to get consensus from the wide array of stakeholders, even funding being yanked due to shifting priorities. For each person on the team, the identified cause was indicative of their biggest concern about the project.
Once that list is generated, the next step is to come back to the present and, similar to what Ferriss describes in his talk, think through what can be done to prevent or mitigate each of the identified project killers. From that point, we're back in the realm of more common project risk management.
* In theory, you could get the same list from the team by asking them what the biggest risks to the project are. That approach, though, keeps the whole exercise in the intellectual sphere. My experience actually doing the pre-mortem exercise was that imagining the future and projecting ourselves into it made the whole thing more visceral and, I think, led us to risks we wouldn't have come up with in the more traditional "what are the risks" approach.
More info on pre-mortems
Yesterday, Acquia open sourced Reservoir, a new distribution designed for building headless Drupal instances. The Reservoir team provided a composer project command for setting up a Reservoir instance easily, but it doesn't bundle a VM. Fortunately, making BLT work with Reservoir isn't difficult. There are, though, a few steps to be aware of.
To get started, run the composer project to build a new BLT instance.
composer create-project --no-interaction acquia/blt-project MY_PROJECT
Once that completes, you need to add Reservoir and (optionally) remove the Lightning distro:
composer require acquia/reservoir
composer remove acquia/lightning
Next, update the blt/project.yml file. The key changes you'll want to make here (beyond setting a new project prefix, etc.) are a) changing the distro from lightning to reservoir and b) removing views_ui from the modules:enable list for local environments.* An excerpt of my git diff for this file looks like...
- name: lightning
+ name: reservoir
- enable: [dblog, devel, seckit, views_ui]
+ enable: [dblog, devel, seckit]
Once that's done, continue with the BLT setup process from Step 4 (assuming you want to use Drupal VM; from Step 5 otherwise).
One other thing you'll have to do is create a public/private key pair and update the settings at /admin/config/people/simple_oauth. These keys are apparently created during a UI-based installation, but the BLT process described here bypasses that.
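The key pair itself can be generated with OpenSSL. A minimal sketch (the keys/ directory and file names are my choice, not anything BLT or simple_oauth mandates, and in practice the private key should live outside the docroot):

```shell
# Generate a 2048-bit RSA key pair for simple_oauth.
# Paths are illustrative; keep the private key out of the docroot.
mkdir -p keys
openssl genrsa -out keys/private.key 2048
openssl rsa -in keys/private.key -pubout -out keys/public.key
chmod 600 keys/private.key
```

Then point the two file fields on the simple_oauth settings form at those paths.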
* If you don't remove views_ui, the world won't explode or anything, but when you run blt setup you'll get errors reported like the ones below:
blt > setup:toggle-modules:
[drush] dblog is already enabled. [ok]
[drush] The following extensions will be enabled: devel, seckit, views_ui, views
[drush] Do you really want to continue? (y/n): y
[drush] Argument 1 passed to [error]
[drush] must implement interface
[drush] Drupal\Component\Plugin\PluginInspectionInterface, null given, called
[drush] in /var/www/mrpink/docroot/core/modules/views/src/Entity/View.php on
[drush] line 281 and defined PluginDependencyTrait.php:29
[drush] E_RECOVERABLE_ERROR encountered; aborting. To ignore recoverable [error]
[drush] errors, run again with --no-halt-on-error
[drush] Drush command terminated abnormally due to an unrecoverable error. [error]
[phingcall] /Users/barrett.smith/Desktop/mrpink/./vendor/acquia/blt/phing/tasks/setup.xml:370:8: /Users/barrett.smith/Desktop/mrpink/./vendor/acquia/blt/phing/tasks/setup.xml:374:12: /Users/barrett.smith/Desktop/mrpink/./vendor/acquia/blt/phing/tasks/setup.xml:377:69: Drush exited with code 255
[phingcall] /Users/barrett.smith/Desktop/mrpink/./vendor/acquia/blt/phing/tasks/setup.xml:350:45: Execution of the target buildfile failed. Aborting.
BUILD FAILED
/Users/barrett.smith/Desktop/mrpink/./vendor/acquia/blt/phing/tasks/local-sync.xml:12:30: Execution of the target buildfile failed. Aborting.
Total time: 2 minutes 37.24 seconds
Working on setting up commenting, which is highly suggested for sites whose content appears on Drupal Planet, I came across a bit of a confusing situation in regard to URLs in content. When using the "Limit allowed HTML tags and correct faulty HTML" filter, one of the options is to add rel="nofollow" attributes to anchor tags. However, in the default Plain Text format, the "Convert URLs into links" filter does not provide that option. So if a user types in an HTML anchor, nofollow gets added. But if they type in a plain URL, it gets converted to an HTML anchor without the nofollow.
To illustrate: if I allow anchor tags to be entered as HTML with the option set to add rel="nofollow", and I also enable the filter that converts URLs to links, and a user enters:
www.nytimes.com Another NY Times link
The output HTML in the comment is:
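Given those filters, the output would look something like this (a reconstruction based on the behavior described above; the hrefs are placeholders, not captured output):

```html
<!-- bare URL converted by "Convert URLs into links": no nofollow -->
<a href="http://www.nytimes.com">www.nytimes.com</a>
<!-- hand-typed anchor caught by the HTML filter: nofollow added -->
<a href="http://www.nytimes.com" rel="nofollow">Another NY Times link</a>
```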
For commenting, I really want to tighten permissions down as far as I can to avoid potential security risks, so the Plain Text format with the "Display any HTML as plain text" filter is the best choice[1]. However, for usability I do want URLs converted to links. But I also want those links set to nofollow for link fraud prevention[2].
By playing with format filter configurations and ordering I was able to put together a solution that works (albeit a little jankily), but it sure feels like this is an area where a core patch could improve the situation. If I have time one day maybe I'll work on that[3].
The solution I came up with is to set the following filters on the input format (the order is significant):
- Display any HTML as plain text
- Convert URLs into links
- Convert line breaks into HTML (i.e. `<br>` and `<p>`)
- Limit allowed HTML tags and correct faulty HTML
Then for the allowed HTML tags, I allowed <a href hreflang> <p> <br> and checked the `Add rel="nofollow" to all links` option.
The result is that user-entered HTML is rendered as plain text, then URLs and line breaks get converted to HTML, and finally the Limit allowed HTML filter double-checks the markup and adds `rel="nofollow"` to anchor tags. So given a user-input comment like in the screenshot below, the resulting HTML is:
<h2>This should not be displayed as an h2 element.</h2>
<a href="www.example.com">If this is a link to example.com and not www.nytimes.com, you've failed.</a>
Now, this solution is not perfect. Mostly, it's hinky to set up and I hate that I have to allow any HTML, even if user input is first stripped to plain text. Secondly, though, there's also a user experience problem. As you can see in the picture above, the help text says both that no HTML is allowed and that the anchor, break, and paragraph tags are allowed.
1. Using the core commenting facility, at least. Add-on tools like Disqus obviate the issue, but I don't want to go that route. I also don't want to require (or even allow) users to register before commenting. And yes, I do require approval of comments before they are visible, but I don't want to have to remember to add rel=nofollow to links.
2. Yes. I want to eat my cake and have it too.
3. I was put on this earth to achieve certain things. At this point I'm so far behind I'll never die.
I've been a regular user of the task-tracker Toodledo and their iOS app for several years now and one limitation that I run into frequently is that the app does not provide a way to create multiple tasks in a single entry.
By combining the Drafts and Workflow apps, however, it's possible to work around that. You can see the workflow I set up (and import it to your own Workflow instance) at: https://workflow.is/workflows/4e8d4123dc3444b8831b0bd036764527
First, in Toodledo’s web application, enable email importing for your account (see the help page at https://www.toodledo.com/info/help_email.php). Then, in the Workflow app, create a workflow which splits input on the newline character and, for each input line, sends an email to your secret Toodledo import address using the input line as the subject line of the email. Then import that workflow into the Drafts app using the Add to Drafts option in Workflow’s settings.
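The split-and-email step is roughly equivalent to the following shell sketch (the import address is a made-up placeholder, and the mail commands are echoed rather than actually sent):

```shell
# One email per non-empty line of the draft, with the line as the subject.
drafts_text='Test task creation
Test task creation with stars *
Test task creation with priority !!!'

printf '%s\n' "$drafts_text" | while IFS= read -r line; do
  [ -n "$line" ] || continue
  # Placeholder address; substitute your secret Toodledo import address.
  echo "mail -s '$line' SECRET-ADDRESS@toodledo.example"
done
```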
Worklow app settings page showing Add to Drafts option.
Once this is all set up, when you want to create a set of tasks in Toodledo from your iOS device, simply create a new draft with each task on a separate line, using the standard email import notation in Toodledo to set the priority, the folder, star the task, etc.
The end result is that a Draft with content of
Test task creation
Test task creation with stars *
Test task creation with priority !!!
Will result in tasks in Toodledo like in this screenshot:
List of created tasks in Toodledo iOS app
A couple of team repos I work on have, over time, accumulated feature and integration branches which are no longer needed. Best practice is to clear these branches out once the code they contain is merged, but "the best laid schemes of mice and men..."
So I found myself facing a git repo with several dozen unneeded branches and no patience to clear them one at a time. The solution to the problem is the command below.
Warning : This command will destroy all but the branches named "master" and "develop" in your remote repo. Use with EXTREME care. If this were a Drupal module, it would have a dependency on Bad Judgement. If you use it and destroy the wrong branches, may $deity have mercy on your soul (because the other developers in the repo you nuke will not).
git branch -r | grep -v 'master\|develop' | cut -d"/" -f 2- | sed 's/^/git push origin :/' | bash
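Before piping anything to bash, it's worth dry-running the text-processing stages against canned `git branch -r` output (the branch names here are made up). Note the `-f 2-` on cut, so a nested branch name like feature/foo keeps its full path instead of being truncated to "feature":

```shell
# Dry run: print the push commands instead of executing them.
printf '  origin/master\n  origin/develop\n  origin/feature/foo\n  origin/hotfix-1\n' \
  | grep -v 'master\|develop' \
  | cut -d"/" -f 2- \
  | sed 's/^/git push origin :/'
# → git push origin :feature/foo
# → git push origin :hotfix-1
```

Only once the printed commands look right should you append `| bash` (replacing the `printf` with the real `git branch -r`).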
I've been reading several books on the process of software development and management of late (see reading list below) and have begun trying to determine the nature of my own management philosophy. This is an ongoing process in my head, but below are some of my initial thoughts (summarized predominantly in platitude-ish sayings, because even though my inner snarker hates them, that's how my mind works).
Hopefully the team I work with won't read this and feel that my behavior is not consistent with these. I think this is how I actually do things, but I'm as fallible as anyone.
I fail. The team wins.
As a team lead, at the end of the day, I'm the one ultimately responsible for making sure what we produce meets specification in terms of quality, delivery time, and budget, but I'm not (usually) the person with hands on the keyboard building out the product. Because I'm not the person actually building things, I cannot fairly claim the credit when the product development effort is successful. But at the same time, as the one with final responsibility, the onus is on me to take the blame if the effort fails.
So when the client is happy, it's my duty to pass the glory to the team members who actually did the work. But when the client is pissed, I should be the one who takes the heat. (This is not to say, though, that I will not subsequently give heat to those more directly responsible. The difference is that the heat the developers get should come from me, not from the client.)
This is not to say that I should get no credit for success. If those I report to are worthy of the title of manager, they should be able to see my efforts in the success of my team. (So maybe another way to summarize this is that the team succeeds at doing things and I succeed at helping the team succeed?)
My job is to work my way out of a job
If the team can't get along without me, I'm not doing my job. The team's processes should be sustainable and self-directed to the point that I do not have to guide them on a day-to-day basis. If that's not the case, I have failed either to establish meaningful processes or to train the team on how to carry out those processes.
In the same fashion, the team should not need to come to me frequently to answer "how-to" questions. The mentoring part of my role should be helping guide the members of the team to being able to solve problems on their own. Socrates did not simply tell his students what the answers were, he asked questions to help them draw out the answers on their own. When devs come to me with questions, I should be helping guide them to the answers in the same way so that they not only find the answer to the immediate question but learn the process for thinking through problems so they can resolve the next question on their own. (I recognize this is probably at times annoying as heck to the team. Sorry guys, but I'm probably going to keep answering questions with questions.)
Fear is adaptive
This one is maybe less management-y than general process, but there are times when you, as a developer, absolutely should be afraid. Fear's entire purpose is to make you cautious. When you're about to launch a new site or make changes directly in the production database, you should definitely be afraid. The trick is to think through the fear and let it make you cautious and circumspect in your actions rather than letting it paralyze you or lead you into errors. In these situations, slow down, think, and act deliberately.
Continuous integration and automated testing can significantly reduce the odds of regressions, but eventually every project will find itself facing a feature that used to work and no longer does. When that time comes for you, I recommend git bisect.
As the name suggests, git bisect cuts a commit range in half over and over until the commit in which the regression occurred is identified. Just start it up, give it the SHA of a bad commit and the SHA of a known good commit, and for each candidate commit it identifies, tell bisect whether it's good or bad.
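When the good/bad check can itself be scripted, as with the grep-based check in the story below, `git bisect run` will drive the whole loop for you. Here's a self-contained toy demo (the repo, file name, and commit messages are all invented for illustration) where a marker string disappears in some commit and bisect pinpoints it automatically:

```shell
set -e
# Build a throwaway repo in which a marker line vanishes mid-history.
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email demo@example.com && git config user.name demo

echo 'recaptcha_nocookies' > settings.inc
git add settings.inc && git commit -qm 'add option'
git commit -q --allow-empty -m 'unrelated work'
echo 'rewritten' > settings.inc && git commit -aqm 'refactor'  # option removed here
git commit -q --allow-empty -m 'more work'

# bad = HEAD, good = the root commit.
first_good=$(git rev-list --reverse HEAD | head -n1)
git bisect start HEAD "$first_good"
# Exit 0 = good (marker present), non-zero = bad (marker gone).
git bisect run sh -c 'grep -q recaptcha_nocookies settings.inc' | tee bisect_run.log
git bisect reset
```

The run output ends by naming the "refactor" commit as the first bad commit, with no manual good/bad marking needed.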
To illustrate, a client with strict user privacy policies was interested in the reCAPTCHA module but concerned about the data that is sent back to Google. Digging through the issue queues, they found an old issue marked as "Fixed" that added code to stop the setting of 3rd-party cookies, but that functionality isn't in the module any more. Nothing in the log messages explicitly mentioned removing the code but I knew the SHA of a "good" commit and what to look for in each commit to see if the code was still there, so it was bisect time.
$ git bisect start
$ git bisect bad
$ git bisect good 0177a3
Bisecting: 59 revisions left to test after this (roughly 6 steps)
[24487c097fcbf7b686574c168b1c5a815bf96475] Change wording
$ grep -in recaptcha_nocookies recaptcha.admin.inc
$ git bisect bad
Bisecting: 29 revisions left to test after this (roughly 5 steps)
[410cc62fcbf0870fb83a2573baf90c98987d3bc2] Issue #1959274: Recaptcha misspelled in recaptcha.js.
$ grep -in recaptcha_nocookies recaptcha.admin.inc
41:  $form['recaptcha_nocookies'] = array(
44:    '#default_value' => variable_get('recaptcha_nocookies', FALSE),
$ git bisect good
Bisecting: 14 revisions left to test after this (roughly 4 steps)
[f3876faf1fa26f48049e3b0fa20a74da2d6112ad] Issue #2407929 by drupalexio: Declare api.js from google.com as external.
As illustrated in the command-line output above, you start off bisect using the appropriately named `git bisect start`, tell it the current commit is bad, and give it the SHA of the commit in which the patch was added as a good commit. Bisect then halved the intervening commits and picked the middle one. I tested that commit by grepping for the presence of the option field in recaptcha.admin.inc, told bisect the commit was bad, and the process repeated until finally I tested the last "middle point" commit and bisect came back with my answer. Apparently, in 2014 the module went through a major restructuring, and with it went the nocookie option.
$ git bisect good
e57ea9bf1f8be27bab4f76333e3ea37923f68ca8 is the first bad commit
commit e57ea9bf1f8be27bab4f76333e3ea37923f68ca8
Author: diolan
Date:   Tue Dec 9 17:37:24 2014 -0500

    Issue #2386815 by Liam Morland, diolan: Copy google_captcha module and rename to recaptcha.

:100644 100644 56b7d18783d1c5c90540f9f87442fd65ea74a2fb 82f1b08071522032e72d3970e1ddf50225d42a71 M  README.txt
:000000 040000 0000000000000000000000000000000000000000 702db26b7720bd69a4389a2399867487643667bd A  ReCAPTCHA
:040000 000000 ff906537502e2a72c116337843f044866717f231 0000000000000000000000000000000000000000 D  recaptcha-php-1.11
:100644 100644 5f13e38c9d89e33891314142035dea7fdbb97fb6 b6226020ce75ad36a5c707a11aeb0d48491c8f59 M  recaptcha.admin.inc
:100644 100644 1300d7fbb6539b6dad7885dc3c200c28a62b12b9 a0232a5f4c88f4555e041e0ec66f2b72d60dd445 M  recaptcha.info
:100644 100644 4b7927f644ab99da4976a8ff18e10a9c27598429 ae9b3eed2fd62c9f024831cf34263e34f8718583 M  recaptcha.install
:100644 000000 a0adb80d30402163b98497efae5ab5c279db0ce1 0000000000000000000000000000000000000000 D  recaptcha.js
:100644 100644 642681921045429e4de1da8aef0b648cca96dbe3 6903e91f6a47c3b5895d27329657b77bf98f5dcd M  recaptcha.module
:100644 000000 9fc4d0f25661964521d5b85560079a2c8550e740 0000000000000000000000000000000000000000 D  recaptcha_mailhide.info
:100644 000000 9fc5e9c958571ceae1cba1354102c2b244978aca 0000000000000000000000000000000000000000 D  recaptcha_mailhide.module
[Fixed] Get valid RSS by adding CDATA wrappers to RSS description elements
A couple weeks ago, I posted about a problem I was having getting RSS to validate because the description element contained HTML markup.
For ages I've thought that I should get this site included in the Drupal Planet feed, and with the new build in D8 and a currently renewed momentum to actually write here, this seemed like a good time to actually do it. Should be simple, right? Tag my content, allow commenting, submit the RSS feed URL for inclusion, profit. Not so much.
The hang-up is that the RSS generated by the core taxonomy term feed view doesn't validate, according to the W3C validator; it looks like it's gacking on unencoded HTML tags inside the description element, which aren't wrapped in CDATA sections.
Yes, this is a thing I can fix in a few different ways, but doesn't it seem like using an RSS feed view should just automagically handle all that dreck?