This is, perhaps obviously, a mixture of both the opinions of someone who has sat through too many online sermons (even before the Age of COVID) and the tradecraft of someone who has given too many online conference talks and presentations.
That said, as it pertains to preaching online...
Is this an online-first presentation, or are you using streaming as a tool to connect to your parish? While some recommendations (e.g. worry about audio before video) are universal, many of the recommendations you'll read or hear or see online make assumptions about which of these styles of preaching is your primary goal. I'll do my best to call out those assumptions here, but nothing else matters until you know this answer.
For some folks this is obvious. For others it's a serious consideration. For others still, you'll need both at different times, and so you'll need to build two separate online preaching practices.
Presentations are, by and large, the simpler of the two. Speak to the camera, and use software to record or stream the output. There are dozens of simple tools like mmhmm and Zoom that are optimized for this style of communication, and there are seventy times seven takes online for how to improve them. I'll include a few of the less-obvious highlights and improvements regardless:
Connecting your parish is, in my opinion, significantly harder. You don't want to detract from the experience of those in the pews, but you don't want your online congregants to feel like second-class citizens, either.
The tech you choose (covered below) can certainly help, but your primary job—much like the presenters, above—is to reduce the impact of distractions on those joining remotely, and to keep your connection to those online from becoming a distraction from the message. Keep the format simple to follow. The length of the sermon is, IMO, less of an issue when folks can join in person, but the length of each "chunk" becomes more important instead.
No matter the format—presentation or parish—work with your tech folks. Thank them. Listen to them. If your role includes the autonomy to do so, help them understand their budget. Their work might not be literal magic, but they will need your help to balance the hundreds of minor details it takes for the magic trick of making the tech disappear.
It's a touch counterintuitive, but when you're talking about streaming video, audio matters more. If you're going to spend a little money on gear, spend it on a nicer microphone before you spend it on a camera.
A little can go a long way. Often the main limitation with video is not gear but placement, focus, and lighting.
Slides might be the hardest part to implement. Your goal is to make them as simple as possible to run.
The most important decision is whether to cut between camera feed(s) and slides, or to superimpose the slides over the video feed. Either requires tech to pull off, but both the slide design and the tech itself need to optimize for one or the other. (If you already know your software is solving this problem for you, skip this section.)
If you're cutting, you've turned a tech setup problem into a communication problem. Make sure the tech team knows when to cut to slides, and when to show the preacher. Some folks will find this intuitive, and others will find it completely baffling without a lot of practice. Pairing them together can help immensely, but doubles the load on your volunteers. Short version: watch the feeds, and cut to the slides when they change (especially if the preacher likes to read from them). Count slowly to 3 when they've finished talking about or referencing what's on the slide, and cut back.
If you're superimposing, you're going to sacrifice legibility for folks in the room. Update slide styling and templates to use the lower quarter-to-third of the screen, with a completely black background. Your software should be able to overlay this on the camera feed, or you can use a device like an ATEM Mini to manage the math. The result should be the text on the bottom, and the preacher behind and above. This setup can lower volunteer burden significantly, as the preacher can now control the overlay. If they're finished being bathed in text, they can transition to an all-black slide.
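Assuming text-on-black slides, the overlay itself is just a luma key: wherever the slide pixel is near-black, show the camera; wherever it's bright, show the slide text. Hardware like the ATEM Mini or your streaming software handles this for you, but as a toy sketch of the idea (using NumPy; frame shapes and the threshold are illustrative assumptions):

```python
import numpy as np

def luma_key(camera: np.ndarray, slide: np.ndarray, threshold: int = 16) -> np.ndarray:
    """Composite a text-on-black slide over a camera frame.

    Both frames are H x W x 3 uint8 arrays. Slide pixels darker than
    `threshold` fall through to the camera feed.
    """
    luma = slide.mean(axis=2)       # per-pixel brightness of the slide
    mask = luma >= threshold        # True where the slide content should win
    out = camera.copy()
    out[mask] = slide[mask]
    return out

# An all-black slide leaves the camera feed untouched -- which is exactly
# why an all-black slide works as the "no overlay" state.
camera = np.full((2, 2, 3), 100, dtype=np.uint8)
black_slide = np.zeros((2, 2, 3), dtype=np.uint8)
assert np.array_equal(luma_key(camera, black_slide), camera)
```

This is also why the slide background must be *completely* black: any dark-gray wash would key in as a murky overlay.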
If you're considering a recording, the closer you can get to the "sensor" (the microphone, the camera) the better the quality will be. Recording from Zoom might be the easiest, but will also be the lowest-quality option.
A good middle ground is to record all audio you're sending (e.g. from the mixer) and the video feed, and combine them in software afterward.
If you are live-streaming your sermon and the software has a chat feature, make it someone's responsibility to watch it. If you're connecting a parish, make sure they're in the room. If you're presenting, assign a leader or elder to moderate from their own connection.
It is my personal opinion that the connectivity provided by the Internet is here to stay, and that God's global Church has been given the gift of gathering across our silly societal boundaries. I also know that these technologies are nascent and changing, challenging us to share solutions and give grace to one another while we figure out how to preach to God's people well in this present age.
Take these points with a grain of salt. Change them to fit your needs. Share them, if they helped.
God bless you in the incredible work you've been invited into.
At some point in every software developer’s career (and likely so for other knowledge work), they are subjected to a tomato-based hazing ritual known as "the Pomodoro Technique".
The process is always the same:
The Initiate overhears the Senior Engineer Cabal during one of their cloaked rituals—the Morning Standup—as they describe the number of Pomodoros a task will take. During their open office peer one-on-one, the Initiate vocalizes their curiosity about how Italian tomatoes map to man-months, to which the Senior Engineer coyly responds:
INSERT_TIMER_APP_HERE to keep track of your Pomodoros. All other timer apps suck.
And off the Initiate goes, never to be seen again. Not because they’re working, but because they’re testing alternative Pomodoro tracking apps. Only 173 to go...
Even if this story is only a shoddy attempt at half a polite chuckle, the Pomodoro Technique is at once very real, very misunderstood, adored, and occasionally even effective.
What about those interruptions? Leaving distractions—called “internal interruptions” in the lingo—aside for a moment, what are you supposed to do if others interrupt you?
According to the original Scripture, you are supposed to “inform effectively, negotiate quickly to reschedule the interruption, and call back the person who interrupted you as agreed.”
For such a prescriptive schedule of work, there's no guidance on when that interruption should be rescheduled. Furthermore, as a leader I'm interrupted constantly, so this is not a small issue: it's core to how I need to work.
Interruptions kill Pomodoros. This is why so much of the language, writing, and culture of Pomodoro users talks about “protecting the Pomodoro.”
When your work is about interruption—when your work is about being the interruptible one—then you yourself have taken the Pomodoro Technique to a farm upstate.
I’m still working on the same problem, but the solution I’m using today works better than I initially expected:
For a few months now I've had a small device running in the corner of my office. It displays the current time, and most importantly a buzzer goes off at :25 and :55 when it's "time" to take a break, and at :30 and :00 when that "break" would be over.
Do I actually take those breaks? Almost never, but the little buzzer keeps me informed of the break time I'm missing through my various meetings and interruptions, and my subconscious does a remarkable job of reminding me, each time the buzzer goes off, just how much of a debt I've accumulated. Practically speaking I only get to pay up a couple times a day, but now it's become a conscious, guilt-free choice both to take the interruption and to take the break(s) I've neglected.
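The schedule itself is trivial to compute. The device in question is just a clock with a buzzer, but the logic behind it might look like this sketch (the function name and structure are my own, purely illustrative):

```python
from datetime import datetime, timedelta

# :25 and :55 signal "time for a break"; :30 and :00 signal the break is over.
BUZZ_MINUTES = (25, 30, 55, 0)

def next_buzz(now: datetime) -> datetime:
    """Return the next wall-clock time, strictly after `now`,
    at which the buzzer should sound."""
    candidate = now.replace(second=0, microsecond=0)
    while True:
        candidate += timedelta(minutes=1)
        if candidate.minute in BUZZ_MINUTES:
            return candidate

# At 9:10, the next buzz is the 9:25 "take a break" signal.
assert next_buzz(datetime(2024, 1, 1, 9, 10)).minute == 25
```

Because the buzz times are anchored to the wall clock rather than to a start button, no interruption can "kill" the timer—the next buzz always arrives regardless.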
When I first started using the Pomodoro Technique circa 2009, my emotional takeaway was the feeling that I’d taken control of my time, no longer controlled by it. My tomato timer may be dead and gone, but this new timer has resurrected that feeling of control and determination, and I look forward to working with it for even longer.
I have a relatively eclectic library of music. In the above screenshot is an English electronic band, a French disco artist, and two American bands: one rock, and one bluegrass.
With a library like this, it can be downright jarring to shuffle everything.
Each music app has its own way of dealing with this problem, but none do so completely. Playlists? Terrible if a song fits more than one mood or genre or whatever. Tagging? Way too manual and finicky. Recommendations? Too biased and will eventually converge on one style of music.
I've dealt with each long enough to know how to get them to work, but I had really hoped there was a better way. After more than a decade of these sorts of jagged, jarring, manual, and often lackluster listening experiences, I had an idea: what if I could shuffle my library by album, rather than by song?
I know this isn't a novel idea. I have two of those old-school, drum-based CD changers, each of which supports this exact feature. That said, it's always felt like an ancillary feature, and hard to control. What if I could see a queue of those CDs, filter out CDs that I don't feel like today, etc.?
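The core of the idea is simple enough to sketch: shuffle the *albums*, and preserve track order within each one. A minimal version (the tuple shape and function name are my own, purely illustrative):

```python
import random

def album_shuffle(tracks, seed=None):
    """Shuffle a library album-by-album.

    `tracks` is a list of (album, track_number, title) tuples. Albums
    play in random order; each album plays front to back.
    """
    rng = random.Random(seed)
    albums = {}
    for album, number, title in tracks:
        albums.setdefault(album, []).append((number, title))

    order = list(albums)
    rng.shuffle(order)  # randomize the *album* order only

    queue = []
    for album in order:
        for number, title in sorted(albums[album]):  # keep track order
            queue.append((album, number, title))
    return queue
```

Every song still appears exactly once, but transitions between styles now happen at album boundaries rather than every three minutes.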
I have plenty of issues with Spotify. They don't pay their artists enough. It's a terrible solution for long-term library "storage", as they're constantly breaking whether a song or album is Saved to your collection. Their library of music is incomplete in selective ways. Their recommendations are White-centered.
That said they're the best option to try out this idea, and this was an opportunity to try Glitch.
But using Spotify for this worked great! If you go to the ABRPT app, you can (after Glitch warms up the app) log in with Spotify and see a preview of what random 8 albums ABRPT wants to queue up. Clicking Apply appends those albums (song-by-song due to a limitation of the Spotify API) to the end of your Play Queue.
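That song-by-song limitation is worth spelling out: the Spotify Web API has no "queue an album" call, only a per-track `POST /v1/me/player/queue` endpoint, so queueing albums reduces to flattening their tracks into a series of POSTs. A hedged sketch of that loop (the `token`, the album data, and the injectable `post` parameter are assumptions for illustration, not ABRPT's actual code):

```python
import requests

QUEUE_ENDPOINT = "https://api.spotify.com/v1/me/player/queue"

def queue_albums(albums, token, post=requests.post):
    """Append each album's tracks, in order, to the user's play queue.

    `albums` is a list of albums, each a list of Spotify track URIs.
    One POST per track; Spotify rejects the call (404) if the user has
    no active playback device.
    """
    queued = []
    for album in albums:
        for uri in album:
            resp = post(
                QUEUE_ENDPOINT,
                params={"uri": uri},
                headers={"Authorization": f"Bearer {token}"},
            )
            resp.raise_for_status()
            queued.append(uri)
    return queued
```

Because each track is a separate network call, queueing eight albums can take a noticeable moment—another reason the app shows a preview before you click Apply.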
If you don't have an active Spotify device, this may fail; start Spotify playback on the device you want, and try again.
If you want to check out the code, it's all available here on Glitch. Check it out, and let me know what you think!
TL;DR: The Today Display holds a single 2.5" by 3" index card, just big enough for the most important notes and tasks for the day. The STL is available on Thingiverse.
This design was inspired by the Analog Kickstarter. I liked the idea of a card holder to localize my daily tasks, but I already have a "productivity system". Everybody does. I already have a rolling backlog of work that needs to be done. Most people do. All I wanted was a card holder.
Furthermore, a full-size index card (3"x5") is too big for a single day's tasks: cutting the cards in half produces just the right size for the most important work I need to accomplish, and nothing more.
I keep yesterday's card around long enough to help update my team with any leftover notes. With any luck, though, I've spent a few moments at the end of the previous day starting today's card:
At the top of the card, I put the date. Along the left edge of the card are bullets: a dot for a task, and a dash for some critical but not actionable detail. As mentioned above, if I think it won't fit on the card, it belongs somewhere else.
Don't overthink it.
Did you ever have a random idea that you couldn't get out of your head? The most ridiculous, out-of-nowhere idea provides its own rationale as it lives between your ears. So it was with this logo.
Initially I was just playing around with various forms. As with most visual work, I doodled and doodled and doodled. Eventually, I started playing around with two of my favorite characters: the ampersand (&) and the question mark (?).
If you'll excuse my over-the-top nerdiness here, take a second and think about these little things: on the one hand, a visual portmanteau of the Latin word "et", and on the other a "lightning flash" to end a sentence. Beautiful! Unique! Absolutely incredible.
As I was playing with them, I realized you could flip the ampersand, and connect it to the question mark. It was a really fun corruption of those symbols, and it stuck in my brain.
At some point in a project I realized I'm losing steam—maybe it's interest, maybe time, maybe energy—and my solution to that is always the same: lower the bar, and ship. Now that I had a logo idea that I liked, I dropped it into Sketch and shipped it as fast as I possibly could. Plain color. "Good enough" Bezier curves fitting the transition between the two characters. Export to SVG. Move on.
After I deployed it, though, it haunted me a bit: did it "mean" anything? Well, no. Did it need to? Well, no. If it did, what would it mean?
Again, the logo doesn't mean anything. It's not special. It's an accident, and I love it. But your brain needs it to mean something. If you're like that, too, take your pick:
And now that I've published even this ridiculous screed, my brain can rest.
I really thought I had finally built a simple, maintainable, modern, single-page application.
When I returned to this project after only six months, every build resulted in nothing but errors. Errors as far as the eye could see.
No environment should be this hostile.
You just can't walk away.
I've never experienced anything like this from a programming environment, before or since.
For better or worse, browsers remain a ubiquitous application host, and one required of any business, no matter how small.
So what do you do?
One option might be compile-to-JS, and I want to keep an eye on that space. None of those options are so mature or so stable or so accessible that they can really compete, though! And culturally, they feel just as churn-driven as the environment they hope to replace.
Perhaps ironically, the reason not to is that it feels like untested waters. So many modern applications moved away from 100% server rendering, touting it as "faster" and "better for users".
Is that still true? Who is pushing the boundaries of an alternative?
I've used too many to expect the next to be any different, and I've been using them too long to expect them to improve.
I'm disappointed because I really want this ubiquitous, accessible development environment to change.
I'm disappointed because I still feel it WILL change, and I might miss it.
I'm disappointed because I can't wait for it to change to stay here, on the web, making apps.
The final episode of my Project Kong series, a fun (and adorable, if I do say so myself) demonstration of how it works.
This video touches on the basics of what we need from Bluetooth and the little RedBear we installed earlier in the series: tracking. Bluetooth, particularly the current standard for Bluetooth Low Energy, makes a good platform for wirelessly tracking physical objects.
In our case, BLE is what Kong will use to track the person it's trying to follow and protect; a small wearable on the person makes the individual “visible” to Kong.
This video is an admittedly very shallow dive into the theory of AI, all to support the question, "Where does Kong need to go from here?" If you're looking for a deeper dive, here are a few resources:
This video covers our first modification to the Donkey Car: restoring the use of the original remote control. To do so we use a RedBear Duo to read the PWM signals coming from the RC receiver, forwarding them on as control codes over a serial connection made to the Raspberry Pi.
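The signal handling is straightforward in principle: a hobby RC receiver emits pulses of roughly 1000–2000 µs per channel, centered at 1500 µs, so each channel reduces to a normalized value before being forwarded over serial. A sketch of that mapping (the text framing in `control_frame` is my own invention for illustration, not the project's actual protocol):

```python
def pwm_to_normalized(pulse_us: float, low: float = 1000.0, high: float = 2000.0) -> float:
    """Map an RC pulse width in microseconds to -1.0 .. 1.0, clamped.

    1500 us (stick centered) maps to 0.0; the endpoints map to +/-1.0.
    """
    center = (low + high) / 2.0
    half_range = (high - low) / 2.0
    value = (pulse_us - center) / half_range
    return max(-1.0, min(1.0, value))

def control_frame(steering_us: float, throttle_us: float) -> str:
    """Encode one steering/throttle reading as a line for the serial link.

    NOTE: this framing is hypothetical; the real project defines its own
    control codes on the Duo.
    """
    return f"S{pwm_to_normalized(steering_us):+.2f}T{pwm_to_normalized(throttle_us):+.2f}\n"

assert pwm_to_normalized(1500) == 0.0   # centered stick
assert pwm_to_normalized(2000) == 1.0   # full deflection
```

Clamping matters in practice: real receivers overshoot the nominal 1000–2000 µs range slightly, and you don't want that noise reaching the motor controller.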
This video starts with a quality of life improvement: we move the main power switch for the car itself to a much easier-to-access location. It's been life-changing for the rest of this project. As a part of that section of the video, I show the macroscopic approach I've used for all the adjustments: "opening up" the car like a book, mounting the custom components to the flat platform, etc.
Also, mounting tape.
After a quick run-through of how to boot everything up, the video walks through the basics of calibration. The original documentation can be found on the Donkey Car website.
Finally, we go for our first test drive! Huzzah!
The music in this video includes “Days Like These”, “Golden Hour”, and “In My Dreams” by LAKEY INSPIRED: https://soundcloud.com/lakeyinspired
With the hardware fully assembled, this episode focuses on installing the software, connecting the car to the local Wi-Fi network, and creating the scaffolding for our Donkey Car project. This is the second of three in this series dedicated to assembling a Donkey Car without modification.
The accompanying documentation can be found on the Donkey Car website, but in particular, the following links will be helpful:
The music in this video is “Elevate” by LAKEY INSPIRED: https://soundcloud.com/lakeyinspired
The first step for Project Kong is to assemble the hardware sent by Arm for the Donkey Car. This video (originally intended for YouTube) walks through that process.
The actual hardware assembly (i.e. not the intro or outro) was the first thing I recorded for this series, and it may show. That said, I think it's a useful supplement to the official Donkey Car docs.
Best of luck, and do get in touch if you have any issues.
This video introduces the first videographed project: Kong.
Kong is a small, autonomous, beginner-friendly rover designed to help with elopement in children living on the Autism spectrum. It leverages the Donkey Car platform for the car, adding a RedBear Duo for Bluetooth scanning (and some other, wonderful capabilities revealed later).
I had the privilege of giving this talk twice more: at Bellingham Codes later that fall, followed by LibertyJS in front of the biggest audience I've ever had the pleasure of presenting to. It's a flurry of “out of the text” conclusions drawn from the ECMA-262 Specification, and it was a delight to prepare.
I've been working on a simple video series for months. I've been learning how to shoot, organize, edit, practice, re-shoot, re-edit, render, and export video. I've been prototyping the project being recorded, writing scripts, and rebuilding the project as a one man film crew. I have designed and re-designed title cards and social media icons and banners.
And then I uploaded the above to YouTube.
First, I uploaded it to my personal Google Account, but I realized it would be safer to upload it to a separate, branded account. By YouTube's own recommendation, I created a Brand Account for this content. I uploaded the first few episodes, and went to bed.
The next morning, my account was disabled. No explanation.
In their typical tone, Google “helpfully” suggested I could request my account be reinstated. I did, and went back to waiting.
After a week, they disabled my Google Account (not the Brand Account), and reenabled it. Cue pats on the back.
I've gone through that rigamarole twice more, and still have no satisfactory solution or explanation. If you've ever heard me say, "Choose a vendor that cares you exist," this is exactly what I'm talking about. If this project were reliant on YouTube to make money (it kinda is), it would have been dead on arrival, and I'd have no recourse.
Choose a vendor that cares you exist.