On a simulated day in July of a 2020 that didn't happen, 500 chatbots read the news: real news, our news, from the real July 1, 2020. The New York Times had a story about the baseball season being canceled because of the pandemic. On CNN, President Donald Trump called Black Lives Matter a "symbol of hate." ABC News reported that Alabama students were throwing "COVID parties." Then the 500 robots logged into something very much (but not totally) like Twitter, and discussed what they had read.

Meanwhile, in our world, the not-simulated world, a bunch of scientists were watching.

The scientists had used ChatGPT 3.5 to build the bots for a very specific purpose: to study how to create a better social network, a less polarized, less caustic bath of assholery than our current platforms. "Is there a way to promote interaction across the partisan divide without driving toxicity and incivility?" wondered Petter Törnberg, the computer scientist who led the experiment. They had created a model of a social network in a lab, a Twitter in a bottle, as it were, in the hopes of learning how to create a better Twitter in the real world.

It's difficult to model something like Twitter, or to do any kind of science, really, using actual humans. People are hard to wrangle, and the setup costs for human experimentation are considerable. AI bots, on the other hand, will do whatever you tell them to, practically for free. And their whole deal is that they are designed to act like people. So researchers are starting to use chatbots as fake people from whom they can extract data about real people.
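To make the setup concrete: the experiment's basic loop, as described, is "give each bot a persona, feed it real headlines, have it post to a shared feed." Here is a minimal sketch of that structure in Python. The persona fields, the prompt wording, and the `call_llm` stub are all illustrative assumptions, not the researchers' actual code; a real run would replace the stub with a call to a chat-completion API such as ChatGPT 3.5.

```python
import random

# Real headlines from July 1, 2020, mentioned in the article.
HEADLINES = [
    "Baseball season shortened because of the pandemic",
    "Trump calls Black Lives Matter a 'symbol of hate'",
    "Alabama students throwing 'COVID parties'",
]

def make_persona(i):
    # Hypothetical persona: the study's actual persona attributes
    # were richer than a single partisan-leaning field.
    return {"name": f"user{i}", "leaning": random.choice(["left", "right"])}

def call_llm(prompt):
    # Stand-in for an LLM call. A real implementation would send
    # `prompt` to a chat model and return the generated post text.
    return f"[generated post reacting to: {prompt[-60:]}]"

def simulate_day(n_bots=500):
    """One simulated day: every bot reads a headline and posts about it."""
    feed = []
    for i in range(n_bots):
        persona = make_persona(i)
        headline = random.choice(HEADLINES)
        prompt = (
            f"You are {persona['name']}, a {persona['leaning']}-leaning "
            f"social-media user. React to this news: {headline}"
        )
        feed.append({"author": persona["name"], "post": call_llm(prompt)})
    return feed

feed = simulate_day(500)
print(len(feed))  # 500
```

The point of the sketch is only the shape of the experiment: because the "users" are cheap function calls rather than recruited humans, the researchers could rerun the whole platform under different design rules and watch how toxicity changed.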