Publications by vboyce
gtl-may-2024
Are they human?

  game_cond   chat_cond    no   yes   NA     pct
  may2024     chat         14    48    2   0.774
  may2024     no_chat      15    46    1   0.754

Yay, they mostly think they’re playing with a human!! Looks like we have 3...
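For reference, a minimal dplyr sketch of how a summary like this could be computed, assuming a data frame `d` with one row per participant and columns `game_cond`, `chat_cond`, and a yes/no/NA response `thought_human` (the data-frame and response-column names are placeholders, not the actual variable names):

```r
library(dplyr)
library(tidyr)

# Placeholder names: d has one row per participant, with game_cond,
# chat_cond, and thought_human in c("yes", "no", NA).
d |>
  count(game_cond, chat_cond, thought_human) |>
  pivot_wider(names_from = thought_human, values_from = n, values_fill = 0) |>
  mutate(pct = yes / (yes + no))   # NA responses left out of the denominator
```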
lazy-content-anlaysis
Prelim plots. Being as lazy as possible: we take everything the speaker said on a trial and count the number of occurrences of different classes of words (defined by dictionary / regex). Then we look at change over time/condition in the average number of these words, the average pct of words these words make up, and the pct of games/trials that had any of those w...
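A minimal sketch of this lazy counting, assuming a data frame `utts` with one row per trial (everything the speaker said concatenated into `text`, plus `condition` and `round` columns) and a made-up regex for one word class; all names here are placeholders:

```r
library(dplyr)
library(stringr)

# Hypothetical word class defined by a regex "dictionary" entry;
# the real dictionaries are whatever the analysis actually uses.
body_re <- regex("\\b(head|arm|leg|foot|body)\\b", ignore_case = TRUE)

# utts: one row per trial, speaker text concatenated into `text`.
utts |>
  mutate(n_words = str_count(text, "\\S+"),
         n_body  = str_count(text, body_re)) |>
  group_by(condition, round) |>
  summarize(mean_count = mean(n_body),             # average number of these words
            mean_pct   = mean(n_body / n_words),   # average pct of words they make up
            any_pct    = mean(n_body > 0),         # pct of trials with any of them
            .groups = "drop")
```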
tg-content-newer
Pre-chunking. TODO: for the spellchecked column, remove double spaces before running (for the future). Run chunk script. Post-chunk stuff: Test passed 🎊. Check: things that weren’t substrings should get resolved into a copy of the pre_abstract file; verify only substrings; find what wasn’t used; prep for abstract. TODO: how to do substring verification w...
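A tiny sketch of the pre-chunking cleanup and substring check, assuming a data frame `chunks` with a `spellchecked` column and a `pre_abstract` column holding the text each chunk should be a substring of (the column names are guesses):

```r
library(dplyr)
library(stringr)

chunks <- chunks |>
  # collapse double (and leading/trailing) spaces in the spellchecked column
  mutate(spellchecked = str_squish(spellchecked),
         # flag chunks that are not literal substrings of their source text
         is_substring = str_detect(pre_abstract, fixed(spellchecked)))

# anything flagged FALSE still needs to get resolved against the pre_abstract copy
filter(chunks, !is_substring)
```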
tg-content-old
Goal and Method. We have lots of language data from tangrams experiments. My goal here is to take a quantitatively finer-grained look at the language used, to get some sort of grasp on how language changes over the course of a game, with a focus on which description elements stay or go. Method: this sample is the 4-player rotate games (~ 20 games tota...
vlm-tangram
Thoughts for later: Try something more comparable to tg-matcher by presenting full-ish transcripts? Should we retrain on not these tangrams, only others (is there a pre-trained model that achieves this)? Should we split the utterances somehow and look at fit of words/phrases (i.e. to feed to CHAI, or do a drop-out analysis, or ….)? Analyses of just CLIP...
update clip tangrams
Thoughts for later: Try something more comparable to tg-matcher by presenting full-ish transcripts? Should we retrain on not these tangrams, only others (is there a pre-trained model that achieves this)? Should we split the utterances somehow and look at fit of words/phrases (i.e. to feed to CHAI, or do a drop-out analysis, or ….)? Analyses of just CLIP...
clip-tg
Analyses just of this: How often is the highest-likelihood label the correct one? Split by tangram, since we know that tangrams vary in codeability. By probability assigned: the alternative is to look at how much probability the correct answer got. Confusion matrices: of the top option, and of probability mass. Compare with people: basically, we want to know how th...
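A sketch of these CLIP analyses under assumed names: a data frame `clip_probs` with one row per trial × candidate tangram and columns `trial`, `target`, `candidate`, and `prob` (none of these names come from the original):

```r
library(dplyr)
library(tidyr)

# Top-1: keep the highest-probability candidate per trial and check correctness.
top1 <- clip_probs |>
  group_by(trial, target) |>
  slice_max(prob, n = 1, with_ties = FALSE) |>
  ungroup() |>
  mutate(correct = candidate == target)

# Top-1 accuracy split by tangram (codeability varies by item).
top1 |> group_by(target) |> summarize(accuracy = mean(correct))

# Confusion matrix of probability mass: average probability assigned
# to each candidate for each true target.
clip_probs |>
  group_by(target, candidate) |>
  summarize(mean_prob = mean(prob), .groups = "drop") |>
  pivot_wider(names_from = candidate, values_from = mean_prob)
```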
summary past gtl
Conditions. All experiments did both language and no-language versions as a between-groups manipulation. Expt 1: all PD v all BoS (between subjects)
- PD: payoffs from sampling 3 values 1-9, 0 for the lowest (sucker payoff)
- BoS: off-diagonal payoff of 1, others from 2-9
Expt 2: mix of PD and BoS
- PD: payoffs from sampling 3 values 1-9, 0 for the lowest (sucker payoff)
- B...
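For concreteness, a sketch of payoffs satisfying these constraints; this is not the actual generation code from the experiments, and sampling without replacement plus the usual PD ordering (temptation > reward > punishment > sucker) are assumptions:

```r
set.seed(1)  # only so the example is reproducible

# Prisoner's Dilemma: sample 3 values from 1-9 (without replacement here)
# for temptation > reward > punishment; the sucker payoff is fixed at 0.
sample_pd <- function() {
  vals <- sort(sample(1:9, 3), decreasing = TRUE)
  c(temptation = vals[1], reward = vals[2], punishment = vals[3], sucker = 0)
}

# Battle of the Sexes: off-diagonal (miscoordination) payoff of 1;
# the two coordination payoffs are drawn from 2-9.
sample_bos <- function() {
  vals <- sample(2:9, 2)
  c(coord_A = vals[1], coord_B = vals[2], miscoord = 1)
}

sample_pd()
sample_bos()
```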
gtl old mixed (with spikes)
Pre-process. Read data. Summary of experiment: in the expt reported here, pairs of participants played 40 rounds of a game-theory-type game. At the start, each pair had 3 minutes of free chat, and then played the game. We recruited for 20 games in chat and 20 games in no-chat conditions. 4 “spiked” BoS trials where one of the rewards is high (25...
tg-matcher-2
Experiment: try the experiment. Boring stuff: read in data; bonus. Timing: this is clock time over the whole experiment (paying attention or not). How long are individual trials taking? If we exclude the longer-than-1-minute ones (as plausibly people got distracted doing other things), mean RTs: so, 10ish seconds per trial generally. Accuracy: average accuracy...
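A minimal sketch of the trial-timing summary, assuming a data frame `trials` with per-trial reaction times in seconds in an `rt` column (names are placeholders):

```r
library(dplyr)

# One row per trial; drop trials over 1 minute as likely distraction.
trials |>
  filter(rt <= 60) |>
  summarize(mean_rt   = mean(rt),
            median_rt = median(rt),
            n_kept    = n())
```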