whartung 6 hours ago

What's interesting is that, as I understand it, folks are using things like Google Docs for papers, and it's (apparently) straightforward to do analysis on a Google Doc to see, well, the life of the document: how it was typed in, how fast, what was pasted and cut back out.

My understanding is that the Google Doc is not a word processing document, it's an event recording of a word processor. So, in theory, you could just "play back" watching the document being typed in and built to "see" how it was done.
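The "event recording" idea can be sketched with a toy op format (insert/delete at an offset). This is an illustrative assumption, not Google's actual format -- real collaborative editors use richer operational transforms -- but the replay idea is the same:

```python
# Hypothetical sketch: a document as a log of edit events rather than a
# static file. Replaying the log step by step "shows" how it was written.
events = [
    {"op": "insert", "pos": 0,  "text": "Hello wrld"},
    {"op": "insert", "pos": 7,  "text": "o"},                     # typo fixed later
    {"op": "insert", "pos": 11, "text": ", pasted paragraph..."}, # pasted in...
    {"op": "delete", "pos": 11, "len": 21},                       # ...and cut back out
]

def replay(events):
    """Rebuild the document by applying each edit in order."""
    doc = ""
    for e in events:
        if e["op"] == "insert":
            doc = doc[:e["pos"]] + e["text"] + doc[e["pos"]:]
        elif e["op"] == "delete":
            doc = doc[:e["pos"]] + doc[e["pos"] + e["len"]:]
    return doc

print(replay(events))  # final text; the intermediate states are the "life" of the doc
```

Pausing the replay at each event is exactly the "watch the document being typed" view: you see the typo correction, the paste, and the cut.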

I only mention this because, given the AIs, I'm sure that even with a typewriter it's more efficient to have the AI do the work and then just "type it in" on the typewriter, which kind of invalidates the entire purpose of it in the first place.

The typing-in part is inevitable. May as well have a "perfect first draft" to type from.

And we won't mention the old retro interfaces that let you plug in an IBM Selectric as a printer for your computer. (My favorite was a bunch of solenoids mounted above the keys -- functional, but, boy, what a hack.)

TaaS -- Typing as a service. Send us your Markdown file and receive a typed up, double spaced copy via express shipping the next day!

Aurornis 2 minutes ago | parent | next [-]

It would take about a day for some student to realize you can instruct one of the LLMs to operate the computer screen for you and have it type and fake-edit a document. The tip would spread among the cheaters, and the metric would become harder to judge on its own.

nlawalker 5 hours ago | parent | prev | next [-]

Typing as a service is a whole cottage industry on Etsy.

ssl-3 4 hours ago | parent [-]

That's certainly one way to abstractly automate a task: Just pay someone else to do it. (This is a concept that regular people employ every day in the real world.)

Another way to automate this particular task is that some typewriters have (serial/parallel) ports to connect to a computer. It's not a daunting task at all for a student skilled in the art of using the bot to make one of these typewriters the output target.

Like this: https://chatgpt.com/share/69e405db-1b44-83ea-baf3-6af41fe577...
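A minimal sketch of driving such a typewriter, assuming it shows up as a serial device. With pyserial the `port` would be something like `serial.Serial("/dev/ttyUSB0", 1200)` -- that library and device name are assumptions, not anything confirmed in this thread -- but the pacing logic works against any writable byte stream:

```python
import io
import time

def type_out(port, text, cps=6.0):
    """Send text one character at a time, paced at roughly `cps` chars/sec,
    slow enough for a mechanical carriage to keep up."""
    for ch in text:
        port.write(ch.encode("ascii", errors="replace"))
        port.flush()
        time.sleep(1.0 / cps)

# Demo against an in-memory stream instead of real hardware:
buf = io.BytesIO()
type_out(buf, "Hello.", cps=1000)  # fast rate just for the demo
print(buf.getvalue())              # b'Hello.'
```

Slowing `cps` to human speed is the part that defeats a "how fast was it typed" check.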

vunderba 5 hours ago | parent | prev | next [-]

Even Microsoft Word stores revision history inside .docx files, and that’s been used to expose plagiarism. I heard about one case where a student took an existing paper (I believe from a previous year/student) and pasted it into Word. They then edited it just enough to make it look different.

However, they didn't remove the embedded revision history from the .docx file they submitted, so that went about as well as you'd expect.
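A .docx file is just a ZIP archive, and docProps/core.xml inside it carries metadata like the original author, the last editor, and a revision counter -- the kind of thing that can betray a paper that started life as someone else's document. A stdlib-only sketch of reading those fields:

```python
import zipfile
import xml.etree.ElementTree as ET

# XML namespaces defined by the Open Packaging Conventions (ECMA-376)
NS = {
    "cp": "http://schemas.openxmlformats.org/package/2006/metadata/core-properties",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def core_properties(docx):
    """Read author/editor metadata from a .docx (path or file-like object)."""
    with zipfile.ZipFile(docx) as z:
        root = ET.fromstring(z.read("docProps/core.xml"))

    def text(tag):
        el = root.find(tag, NS)
        return el.text if el is not None else None

    return {
        "creator": text("dc:creator"),               # original author
        "last_modified_by": text("cp:lastModifiedBy"),
        "revision": text("cp:revision"),             # bumped on each save
    }
```

Note these are the core properties, which are always present; full tracked-changes history is a separate feature (and, as the reply below points out, off by default).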

Dylan16807 10 minutes ago | parent [-]

Are you sure about that? I could easily see this happening with a web document link, but for a .docx file, change tracking is off by default and pretty obtrusive. Basic metadata would be fine, and formatting might be quirky, but that's not exactly a smoking gun...

eichin 4 hours ago | parent | prev | next [-]

Hmm, I have some old daisy-wheel printers in the closet that I've been meaning to strip down for stepper motors, maybe I should refurb them instead :-)

djmips 4 hours ago | parent [-]

In general I love the idea of turning printers into typewriters. I've been thinking about how to do it with an inkjet printer.

tejtm 5 hours ago | parent | prev [-]

arms race....

oh look, there's an LLM trained on keylogger data to spew slop at your personally predicted error rate; bonus if it identifies over USB as a keyboard.

vunderba 5 hours ago | parent [-]

You should look up the history of the Loebner Prize [1]. There’s a shocking amount of technological development in some chatbots that went toward simulating mistakes and typing patterns to make them seem more human-like.

In some of the later Loebner competitions, when text was transmitted to the human judge character by character, the bot would even simulate typos followed by backspacing on screen to make it look more realistic.
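The trick described above can be sketched in a few lines: emit the text character by character, occasionally "mistyping" a neighboring key and then backspacing to correct it, so the keystroke stream looks hand-typed. The error rate and the key-adjacency map here are illustrative assumptions:

```python
import random

NEIGHBORS = {"a": "s", "e": "r", "o": "p", "t": "y"}  # toy keyboard adjacency

def humanize(text, error_rate=0.1, rng=None):
    """Return a keystroke list with occasional fake typos followed by
    '\b' (backspace) corrections."""
    rng = rng or random.Random(0)
    keys = []
    for ch in text:
        if ch.lower() in NEIGHBORS and rng.random() < error_rate:
            keys.append(NEIGHBORS[ch.lower()])  # hit the wrong key...
            keys.append("\b")                   # ...then backspace
        keys.append(ch)
    return keys

def apply(keys):
    """Replay the keystrokes, treating '\b' as delete-previous-char."""
    out = []
    for k in keys:
        out.pop() if k == "\b" else out.append(k)
    return "".join(out)

keys = humanize("theory")
assert apply(keys) == "theory"  # corrections always recover the intended text
```

Because every fake typo is immediately backspaced, the final text is always intact; only the keystroke stream looks fallibly human.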

https://en.wikipedia.org/wiki/Loebner_Prize

djmips 4 hours ago | parent [-]

Wow, it feels like the Loebner Prize went away right at the dawn of the LLM. Are the two related?

vunderba 4 hours ago | parent [-]

Yeah I definitely think LLMs contributed to its demise. To be honest, nobody in academic AI circles took it very seriously, because it kind of devolved into a contest over who could create the most convincing illusion of intelligence.

Participants spent more time polishing the natural-language parsing and pre-programming elaborate backstories for their chatbots, among other psychological tricks. In the end, the whole competition was more impressive as a social-engineering exercise, since the real goal kinda became: how can I trick people into thinking my chatbot is a human?

But some of the chatbot transcripts from previous competitions still make for fascinating reading.

artikae 3 hours ago | parent | next [-]

Goodhart's Law vs the Turing Test! Can our humans accurately evaluate intelligence, or will they be fooled by fakes? Live this Sunday!

djmips 3 hours ago | parent | prev | next [-]

I think it would be great to see it revived with a different premise.

leptons an hour ago | parent | prev | next [-]

>because it kind of devolved into a contest over who could create the most convincing illusion of intelligence.

Isn't that really what all these AI companies are doing too? It sure seems like it is.
