Week 12: Minerva Modeling Week B

Psyc 2001
MINERVA II
Programming a basic MINERVA II model in R
Author

Matt Crump

Published

November 17, 2023

Modified

November 17, 2023

We are on Week B of learning about instance theory and MINERVA II.

Week B: Follow along with a tutorial that I will post about how to implement a MINERVA II model in the R programming language. Our goal will be to model a generic recognition memory procedure. We will create a set of items, present the model with some of the items, and then give it a recognition memory test to see if it can tell the difference between new and old items.

Week C: By week 3 we should be in a position to put digital representations of pictures into a MINERVA II memory model, and then simulate a recognition memory procedure for pictures. This simulation will allow us to make predictions from the theory about performance in picture recognition memory tasks.

Let’s get started.

Week B: Follow along with this tutorial

Below you will find a screencast and some R code that implements a MINERVA II model. The goal for this assignment is to attempt to get the code working in your blog post.

Install libraries

You will need to install the following libraries:

  1. devtools
  2. RsemanticLibrarian
  3. lsa
install.packages('devtools')
devtools::install_github("CrumpLab/RsemanticLibrarian")
install.packages('lsa')

Basic MINERVA II model

vector_length <- 20
num_items <- 100

# make random feature vectors for each item
# Rows are items
# Columns are features
items <- t(replicate(num_items,
                   sample( rep(c(1,-1), vector_length/2) )
                   ))

# define indices for old and new items
old <- 1:50
new <- 51:100

# put old items into a memory matrix
memory <- items[old,]

# get an item to probe memory
probe_id <- 60
probe <- items[probe_id,]

# compute similarities between probe and all traces
similarities <- RsemanticLibrarian::cosine_x_to_m(probe,memory)^3

# activate traces by weighted similarity
activations <- memory * c(similarities)

# generate echo and global activation
echo <- colSums(activations)
global_activation <- sum(similarities)

# compare echo to probe
lsa::cosine(probe,echo)
          [,1]
[1,] 0.8216834
global_activation
[1] 0.272
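
A quick note on two pieces of this code. The cube in the similarity line is MINERVA II's activation function: each trace's cosine similarity to the probe is raised to the power of three, which preserves the sign but shrinks small similarities much more than large ones, so traces that closely match the probe dominate the echo. The sum of the cubed similarities (global_activation) corresponds to what Hintzman called echo intensity, and lsa::cosine(probe, echo) measures how well the content of the echo matches the cue. The snippet below is just a small aside to illustrate the effect of cubing; it is not part of the model code.

# cubing preserves the sign of a similarity value,
# but shrinks weak similarities much more than strong ones
example_similarities <- c(0.9, 0.5, 0.1, -0.3)
example_similarities^3
# 0.729  0.125  0.001 -0.027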

Generate all unique items

In the above code, feature vectors for each item were generated completely randomly. As a result, it is possible for two different items to end up with exactly the same feature vector, which would make them identical as far as the model is concerned. The function below generates a set of feature vectors that are all unique.

generate_unique_random_items <- function(num_items = 100,
                                         vector_length = 20){
  # make random feature vectors (rows are items, columns are features)
  items <- t(replicate(num_items,
                   sample( rep(c(1,-1), vector_length/2) )
                   ))

  # two rows with a correlation of exactly 1 are identical items,
  # so check the upper triangle of the item correlation matrix
  correlation_matrix <- cor(t(items))
  upper_triangle <- correlation_matrix[upper.tri(correlation_matrix)]
  if (length(upper_triangle[upper_triangle == 1]) == 0){
    # no duplicates, return the items
    return(items)
  } else {
    # duplicates found, generate a new set and check again
    message("duplicate items found, regenerating")
    return(generate_unique_random_items(num_items, vector_length))
  }

}

items <- generate_unique_random_items(num_items = 100, vector_length = 20)
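
If you want to double-check that the function worked, you can confirm that no two rows of the items matrix are identical. This is just a quick sanity check, not something the model needs.

# should return FALSE if every item vector is unique
any(duplicated(items))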

Evaluating recognition memory for items

General goals:

  1. Create 100 unique random feature vectors to represent 100 different items that could be presented in a recognition memory experiment.
  2. Create a memory that stores the first 50 items as traces. This mimics an encoding phase where a person is shown different items.
  3. Conduct a simulation to mimic a recognition memory test. This involves presenting new and old items as cues, and generating responses from the memory.
  4. Evaluate the model's performance to see if it can discriminate between old and new items.
vector_length <- 20
num_items <- 100

# make random feature vectors for each item
# Rows are items
# Columns are features
items <- generate_unique_random_items(num_items = 100, vector_length = 20)

# define indices for old and new items
old <- 1:50
new <- 51:100

# put old items into a memory matrix
memory <- items[old,]

# create a data frame to hold model results
simulation_data <- data.frame()

## loop through all items as cues to memory

for(i in 1:100){
     
  # get an item to probe memory
  probe_id <- i
  probe <- items[probe_id,]
  
  # compute similarities between probe and all traces
  similarities <- RsemanticLibrarian::cosine_x_to_m(probe,memory)^3
  
  # activate traces by weighted similarity
  activations <- memory * c(similarities)
  
  # generate echo and global activation
  echo <- colSums(activations)
  global_activation <- sum(similarities)
  
  # compare echo to probe
  echo_cue_similarity <- lsa::cosine(probe,echo)
  
  trial_results <- data.frame(item = i,
                              global_activation = global_activation,
                              echo_cue_similarity = echo_cue_similarity)
  
  # add trial results to simulation data frame
  simulation_data <- rbind(simulation_data,trial_results)
}
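
Before summarizing, it can be useful to glance at the first few rows of the simulation results to confirm that each trial produced both dependent measures. This is just an optional check.

# inspect the first few rows of the simulation results
head(simulation_data)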

library(tidyverse)

simulation_data <- simulation_data %>%
  mutate(item_type = rep(c("old","new"), each = 50))

simulation_means <- simulation_data %>%
  group_by(item_type) %>%
  summarize(mean_echo_cue_similarity = mean(echo_cue_similarity),
            mean_global_activation = mean(global_activation))

knitr::kable(simulation_means)
item_type   mean_echo_cue_similarity   mean_global_activation
new                        0.8287554                 -0.04848
old                        0.9828778                  0.89568
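
The means suggest that the model can discriminate between old and new items: old items produce echoes that are more similar to the cue, and much larger global activation, than new items. One way to take the evaluation a step further is to apply a simple decision rule, calling an item "old" whenever its echo-cue similarity exceeds a criterion. Here is a rough sketch of that idea; the criterion value of 0.9 is an arbitrary choice for illustration, and you could set the criterion in other ways.

# classify each item as "old" when echo-cue similarity exceeds a criterion
criterion <- 0.9

simulation_data %>%
  mutate(response = ifelse(echo_cue_similarity > criterion, "old", "new")) %>%
  group_by(item_type) %>%
  summarize(proportion_old_responses = mean(response == "old"))

The proportion of "old" responses for old items is the hit rate, and the proportion of "old" responses for new items is the false alarm rate. A hit rate well above the false alarm rate means the model can tell the difference between old and new items.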