Sycophancy and sandbagging 🤔

they are more likely to learn to act as expected in precisely those circumstances while behaving competently but unexpectedly in others. This can surface in the form of problems that Perez et al. (2022) call sycophancy, where a model answers subjective questions in a way that flatters their user’s stated beliefs, and sandbagging, where models are more likely to endorse common misconceptions when their user appears to be less educated

Kinda like people, no?

From the same paper I mentioned before: https://cims.nyu.edu/~sbowman/eightthings.pdf

Jobs replaced by AI, or jobs re-created by AI?

Tweet from @bentossell (I love his daily AI newsletter)

The list got me thinking… instead of framing as “AI replaces X job”, I think the actual outcome is more like “AI recreates X job”, in much the same way that ATMs recreated the bank teller’s job, and personal computers recreated the typist’s job, and Photoshop recreated the graphic designer’s job…

Implicit in this, is that change is inevitable and outcomes will favor those who best adapt.

Just some thinking aloud…

Content creator –> after AI –> Human does more editing, curating, and aggregating (eg, across different media types)

Journalist –> AI –> Human does more primary research (developing sources, interviewing), editing

Teacher –> AI –> Human does more coaching (emotional support), planning (what to learn when), problem solving (when students are stuck)

Customer service rep –> AI –> Human does more complex issue resolution, relationship building, sales development

Social media manager –> AI –> Human does more editing and curation, community and relationship building

Translator –> AI –> Human does more fact checking, editing, research

Musician –> AI –> Human does more mixing, curating, multimedia, live performance, inventing new musical styles

Not insignificant, too, that several of the jobs on the list — such as web developer or social media manager — didn’t exist in their current form as recently as a few decades ago, and were also enabled (or transformed) by similar mega waves of technological change (eg, personal computers, smartphones, the internet).

I do think AI has surprised in the following important way: Even as recently as a year ago, most people would have assumed that the creative fields (broadly, activities like making art, writing fiction, composing music) were less at risk than the more repetitive, linear, analytical fields. Today generative art and LLMs have definitively proven otherwise.

Change filled times ahead!

“Cars get better by some modest amount each year, as do most other things I buy or use. LLMs, in contrast, can make leaps.” – Tyler Cowen on AI

Must read if you’re interested in AI and its implications; Tyler’s commentary on the recent explosion of AI into the popular consciousness (driven in large part by ChatGPT) has been, in my view, the most realistic+pragmatic:

https://www.bloomberg.com/opinion/articles/2023-01-23/chatgpt-is-only-going-to-get-better-and-we-better-get-used-to-it

“I don’t have a prediction for the rate of improvement, but most analogies from the normal economy do not apply. Cars get better by some modest amount each year, as do most other things I buy or use. LLMs, in contrast, can make leaps.”

“I’ve started dividing the people I know into three camps: those who are not yet aware of LLMs; those who complain about their current LLMs; and those who have some inkling of the startling future before us. The intriguing thing about LLMs is that they do not follow smooth, continuous rules of development. Rather they are like a larva due to sprout into a butterfly.”