Viral adoption, or just creepy?

Earlier this week, I had a very short, very disconcerting Teams meeting.  Shortly after the meeting began, the other participant and I noticed an AI bot show up.  Neither of us was aware we’d invited it, which, as you can imagine, was puzzling and more than a little unsettling to both of us.  Believing discretion is the better part of valor, we agreed to abandon the meeting, and I did some googling.

It turns out it was ultimately my fault the bot showed up in the meeting.  Last week, I’d attended a meeting where one of the hosts had invited this bot to take notes.  Following the meeting, the bot sent a meeting summary to all the attendees (so far, so good).  Having missed a specific point I wanted to look up, I tried accessing the meeting transcript and was prompted to create an account.

What I hadn’t seen was the platform’s (in my opinion) extremely aggressive default behavior of simply inviting itself to any and all of my future meetings – even those in which I was only a guest!

In hindsight, there are at least a couple of lessons to be learned from this experience.  First, as a consumer, all those damned EULAs actually matter.  Choose your connections with care, and consider the permissions you bestow on apps.  It may be a bit inconvenient to pass on an app that seems greedy, but every once in a while, it just may save your bacon.

The other lesson is offered from the perspective of a product designer, and it requires a bit of empathy for your user / audience.  Please consider - realistically - the behaviors your users might actually expect of your product.  In the case of this meeting note-taker, it’s appropriate to have the bot offer its services when I create meetings.  Ideally, I’d have liked to see something interactive the first time the bot has the opportunity to hop on a meeting, and if I elect “yes,” I think it would be appropriate to ask whether I want that to be the behavior for future meetings.  Offer a service to me, but don’t surprise me by joining behind my back!

I also believe there should be distinct behaviors for meetings I host vs. meetings I join as a guest.  In my opinion, the bot should ask before attending any meeting I join as a guest.  Finally, the joining behavior I saw in my meeting was very poorly sequenced.  When the bot joined unexpectedly, it hung around for a good 30 seconds before offering anything in the chat window – a definite faux pas.  I’d expect to immediately see a message indicating who invited the bot and offering actions to stop the recording if desired.  Having this message show up half a minute into the meeting is no bueno.

The era of AI is upon us.  Platforms can now do more for us than ever before, and what we’ve seen so far is just the beginning.  However, usability and UX practices need to guide this journey so that these capabilities are seen as helpful rather than subversive or destructive.  Help you expect will always be welcome, but beware of helping in ways that aren’t expected - at best, it will be interpreted as “handling,” and at worst, it’s an unwelcome intrusion.  The first interactions you have with a new customer are precious - don’t botch them and risk losing that customer forever by overstaying your welcome!

ChatGPT-assisted analysis

I was looking through a recording of a web page download in my browser's developer tools the other day and began ticking off a number of calls to third-party domains. Before I knew it, I had opened a document and started counting calls to domains, and it occurred to me -- perhaps ChatGPT could help with this.

So, I opened it up and began asking questions:

Are you able to analyze the page load requests of a web site and summarize calls to third-party web sites (with descriptions of those sites)?

And yes, it turns out, it was able to help. It asked me to upload a HAR file (with instructions on how to generate and save it in developer tools) and began analyzing per my request. I went back and forth a couple of times, asking for clarifications and tweaks, and after four or five rounds, ChatGPT let me know I'd used up my free analysis allocation.

But then, I noticed a little link the bot had been leaving for me after each step, like a trail of breadcrumbs.

Each of those links popped up a code block containing the actual analysis for that step.
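
I haven't reproduced the bot's exact output here, but the early steps looked roughly like the sketch below: load the HAR file and pull the hostname out of each request. The file name and variable names are my own stand-ins for illustration, not ChatGPT's:

```python
import json
from urllib.parse import urlparse

# Stand-in path - pointing this at a local copy of the HAR file was
# one of the "couple of tweaks" needed to run the steps in my notebook
har_path = "example.har"

with open(har_path, "r", encoding="utf-8") as f:
    har = json.load(f)

# Each entry in the HAR log is one request the page made while loading
entries = har["log"]["entries"]

# Pull the hostname out of each request URL
hosts = [urlparse(entry["request"]["url"]).netloc for entry in entries]

print(f"{len(entries)} requests across {len(set(hosts))} distinct hosts")
```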

Going all the way back to the beginning of my chat, grabbing these steps, and dropping them into a Jupyter notebook, I found they ran almost as-is. Just a couple of tweaks to the HAR file load locations were needed to make these scripts work in my local environment.

There are definitely still adjustments you'd make when investigating specific challenges for specific web sites, but this experience demonstrates a few powerful principles:

  1. Use of ChatGPT for analysis. This isn't news to most, but I was quite impressed by the speed with which ChatGPT made progress on this problem. Without my writing a line of code, it got far enough to help me understand my problem quite quickly.
  2. Use of ChatGPT for code generation. Seeing code generation pop out of the analysis steps without my having specifically asked for it -- that was new to me, and a very pleasant surprise. As someone fairly new to Python development, I found it helpful to see this code as examples of the analysis techniques, including...
  3. Use of HAR traces for website analysis. Seeing a trace chewed up by the example Python code was a helpful illustration of this technique (there's a small sketch of the end result just after this list).
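
To give a flavor of where those steps end up, here's a minimal sketch of the kind of summary this analysis produces - calls counted per domain, with anything outside the page's own domain treated as third-party. The HAR path and the first-party domain are assumptions for illustration; the notebook linked below does the real work against whatever trace you load:

```python
import json
from collections import Counter
from urllib.parse import urlparse

# Assumed inputs for illustration - substitute your own trace and site
har_path = "example.har"
first_party = "example.com"

with open(har_path, "r", encoding="utf-8") as f:
    entries = json.load(f)["log"]["entries"]

# Count requests per hostname
counts = Counter(urlparse(e["request"]["url"]).netloc for e in entries)

# Treat any host that isn't under the page's own domain as third-party
third_party = {host: n for host, n in counts.items()
               if host and not host.endswith(first_party)}

# Print the busiest third-party domains first
for host, n in sorted(third_party.items(), key=lambda kv: -kv[1]):
    print(f"{n:4d}  {host}")
```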

If you're interested in seeing a Jupyter notebook illustrating this example, you can find it here:
https://github.com/dlambert-personal/gpt_assisted_har_analysis

Ten years of Nvidia

I haven't been writing actively for a couple of years, and now that I have a chance to catch my breath, I'm working on remedying that absence. I just took a quick scan through drafts I'd started, and I tripped over one from 2013 (yes, ten years ago). The draft was really an email I'd sent, pasted into the editor to be picked up later. At the time, it may have been somewhat interesting, but in hindsight, I think it may be more impactful. Here's the email - the original context was me trying to convince my ex-wife that a game development class my son was interested in wasn't a waste of time:

I sort of hinted at some reasons I thought the "intro to gaming" class [my son] was looking at next semester might be more useful than it sounds, but I probably wasn't very clear about it.

Here's an announcement from this morning from a graphics card mfg:

http://www.anandtech.com/show/7521/nvidia-launches-tesla-k40

http://www.engadget.com/2013/11/18/nvidia-unveils-tesla-k40-and-ibm-deal

http://www.anandtech.com/show/7522/ibm-and-nvidia-announce-data-analytics-supercomputer-partnership

If you actually look at the picture of the card, you'll notice there's nowhere to plug in a monitor, because this graphics card isn't really a graphics card in the same sense that we think of them.

As these cards have become crazy-specialized & powerful in the context of their original purpose, their insane number-crunching capabilities haven't been lost on people beyond game developers.  In the same way that you see these cards supporting physics engines in games, they're now becoming more and more commonly used in scientific and engineering applications, because this same type of computing can be used for simulations and other heavy number-crunching work.

As I mentioned to [my son] when he was working in Matlab, the type of programming needed for this sort of processing is very different from the traditional, linear do-one-thing-then-do-the-next-thing style of programming used for "normal" CPUs -- by its nature, it has to be asynchronous and massively parallel in order to take advantage of the type of computing resources offered by graphics-type processors.

Cutting to the chase, even though "game programming" probably sounds like a recreational activity, I think there's a decent chance that some of the skills touched on in the class might translate reasonably well into engineering applications -- even if he doesn't necessarily see that during the class.

Damn. Right on the money. Ten years on, Nvidia rules AI. Why? It's the chips and the programming model. All the reasons I cited to give game programming a chance have come to pass, and for what it's worth, the kid wound up using Nvidia GPUs to build neural networks in grad school.

Game programming, indeed.