How to create, structure, maintain and update data codebooks in R?

In the interest of replication, I like to keep a codebook with metadata for each data frame. A data codebook is:

a written or computerized list that provides a clear and comprehensive description of the variables that will be included in the database. (Marczyk et al., 2010)

I like to document the following attributes of a variable:

  • name
  • description (label, format, scale, etc)
  • source (e.g. World Bank)
  • source media (url and date accessed, CD and ISBN, or whatever)
  • file name of the source data on disk (helps when merging codebooks)
  • notes

For example, this is what I use to document the variables in a data frame mydata1 with 8 variables:

code.book.mydata1 <- data.frame(
    variable.name = names(mydata1),
    label = c("Label 1",
              "State name",
              "Personal identifier",
              "Income per capita, thousand of US$, constant year 2000 prices",
              "Unique id",
              "Calendar year",
              "blah",
              "bah"),
    source       = rep("unknown", length(mydata1)),
    source_media = rep("unknown", length(mydata1)),
    filename     = rep("unknown", length(mydata1)),
    notes        = rep("unknown", length(mydata1))
)

I write a different codebook for each data set I read. When I merge data frames, I also merge the relevant parts of their associated codebooks to document the final database. I do this essentially by copy-pasting the code above and changing the arguments.
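
When merging, the codebooks themselves can be combined with rbind(). A minimal sketch, assuming a second data frame mydata2 with a codebook code.book.mydata2 built the same way, and a shared key column named id (all three names are made up for illustration):

mydata.merged    <- merge(mydata1, mydata2, by = "id")
code.book.merged <- rbind(code.book.mydata1, code.book.mydata2)
# drop the duplicated row describing the shared key variable
code.book.merged <- code.book.merged[!duplicated(code.book.merged$variable.name), ]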

You can add any special attribute to any R object with the attr() function, e.g.:

x <- cars
attr(x,"source") <- "Ezekiel, M. (1930) _Methods of Correlation Analysis_.  Wiley."

The attribute then shows up in the structure of the object:

> str(x)
'data.frame':   50 obs. of  2 variables:
 $ speed: num  4 4 7 7 8 9 10 10 10 11 ...
 $ dist : num  2 10 4 22 16 10 18 26 34 17 ...
 - attr(*, "source")= chr "Ezekiel, M. (1930) _Methods of Correlation Analysis_.  Wiley."

You can also retrieve the attribute with the same attr() function:

> attr(x, "source")
[1] "Ezekiel, M. (1930) _Methods of Correlation Analysis_.  Wiley."

If you only add new cases to your data frame, the attribute is not affected (see str(rbind(x, x))), while altering the structure erases it (see str(cbind(x, x))).
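
A quick check of this behaviour with the x defined above:

> attr(rbind(x, x), "source")   # adding rows keeps the attribute
[1] "Ezekiel, M. (1930) _Methods of Correlation Analysis_.  Wiley."
> attr(cbind(x, x), "source")   # changing the structure drops it
NULL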


UPDATE: based on comments

If you want to list all non-standard attributes, check the following:

setdiff(names(attributes(x)),c("names","row.names","class"))

This will list all non-standard attributes (the standard ones in a data frame being names, row.names and class).

Based on that, you could write a short function to list all non-standard attributes along with their values. The following does work, though not in a neat way; you could improve it and wrap it up in a function :) (a sketch of such a function follows the write.csv() step below).

First, define the unique (i.e. non-standard) attributes:

uniqueattrs <- setdiff(names(attributes(x)),c("names","row.names","class"))

And make a matrix which will hold the names and values:

attribs <- matrix(0,0,2)

Loop through the non-standard attributes, saving the names and values in the matrix:

for (i in seq_along(uniqueattrs)) {
    # note: this assumes each attribute value is a single string
    attribs <- rbind(attribs, c(uniqueattrs[i], attr(x, uniqueattrs[i])))
}

Convert the matrix to a data frame and name the columns:

attribs <- as.data.frame(attribs)
names(attribs) <- c('name', 'value')

And save it in any format, e.g.:

write.csv(attribs, 'foo.csv')
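
For reference, here is the whole thing wrapped up as a minimal function (the name attrs2df is my own); it also copes with attribute values longer than one element by collapsing them into a single string:

attrs2df <- function(x, standard = c("names", "row.names", "class")) {
    uniqueattrs <- setdiff(names(attributes(x)), standard)
    data.frame(name  = uniqueattrs,
               value = vapply(uniqueattrs,
                              function(a) paste(attr(x, a), collapse = "; "),
                              character(1)),
               row.names = NULL)
}
write.csv(attrs2df(x), 'foo.csv')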

To your question about variable labels, check the read.spss() function from the foreign package, as it does exactly what you need: it saves the value labels in the object's attributes. The main idea is that an attribute can be a vector, a data frame or another object, so you do not need to make a unique attribute for every variable; you can make just one (e.g. named "variable.labels") and store all the information there. You could then call attr(x, "variable.labels")['foo'], where 'foo' stands for the required variable name. But check the function cited above, and also the attributes of the imported data frames, for more details.
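
A minimal sketch of that single-attribute idea, using the x (a copy of cars) from above, with labels taken from the cars help page:

attr(x, "variable.labels") <- c(speed = "Speed (mph)",
                                dist  = "Stopping distance (ft)")
attr(x, "variable.labels")["speed"]
#         speed
# "Speed (mph)"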

I hope this helps you write the required functions in a much neater way than I managed above! :)

There is now also the codebook package on CRAN (https://cran.r-project.org/web/packages/codebook/index.html), which, together with its web app, makes it possible to generate rich codebooks in a few minutes, documenting the data so that it is well understood and reusable in the future; the dataMaid package can be used to the same end. (The help pages for the datasets shipped with R packages usually provide thorough information, although the level of detail may vary quite substantially from dataset to dataset.)
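
A minimal sketch of the package-based route, assuming both packages are installed (see their vignettes for the full workflow):

library(dataMaid)
makeCodebook(mydata1)   # renders a codebook report for the data frame
# or, inside an R Markdown document:
# library(codebook)
# codebook(mydata1)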

A more advanced option would be to use S4 classes. For example, in Bioconductor the ExpressionSet class is used to store microarray data together with its associated experimental metadata.

The MIAME object described in Section 4.4 looks very similar to what you are after:

library(Biobase)  # Bioconductor package providing the MIAME class

experimentData <- new("MIAME",
    name     = "Pierre Fermat",
    lab      = "Francis Galton Lab",
    contact  = "pfermat@lab.not.exist",
    title    = "Smoking-Cancer Experiment",
    abstract = "An example ExpressionSet",
    url      = "www.lab.not.exist",
    other    = list(notes = "Created from text files"))

@Dason, I'm interested to find a solution, using only R, that enables me to automatically create a data codebook (whenever I pull data from a database). I prioritize a simple software set-up over a formatted PDF output; I might have gotten too optimistic when I saw the documentation that came with mtcars.

The comment() function might be useful here. It can set and query a comment attribute on an object, and it has the advantage over normal attributes of not being printed when the object is printed.

dat <- data.frame(A = 1:5, B = 1:5, C = 1:5)
comment(dat$A) <- "Label 1"
comment(dat$B) <- "Label 2"
comment(dat$C) <- "Label 3"
comment(dat) <- "data source is, sampled on 1-Jan-2011"

which gives:

> dat
  A B C
1 1 1 1
2 2 2 2
3 3 3 3
4 4 4 4
5 5 5 5
> dat$A
[1] 1 2 3 4 5
> comment(dat$A)
[1] "Label 1"
> comment(dat)
[1] "data source is, sampled on 1-Jan-2011"

Example of merging:

> dat2 <- data.frame(D = 1:5)
> comment(dat2$D) <- "Label 4"
> dat3 <- cbind(dat, dat2)
> comment(dat3$D)
[1] "Label 4"

but that loses the comment on dat:

> comment(dat3)
NULL

so those sorts of operations would need handling explicitly. To truly do what you want, you'll probably need either to write special versions of the functions you use, versions that maintain the comments/metadata during extraction/merge operations, or to look into producing your own class of objects, say a list with a data frame and other components holding the metadata, and then write methods for the functions you want that preserve the metadata.

An example along these lines is the zoo package, which generates a list object for a time series with extra components holding the ordering and time/date info, but which still works like a normal object from the point of view of subsetting etc. because the authors have provided methods for functions like [.
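
A minimal sketch of that list-based approach (the class name "documented" and both helpers are my own invention, not an established API):

# wrap a data frame and its metadata in a small classed list
documented <- function(df, meta) structure(list(data = df, meta = meta),
                                           class = "documented")

# a subsetting method that carries the metadata along untouched
"[.documented" <- function(x, ...) documented(x$data[...], x$meta)

d <- documented(cars, meta = "Ezekiel, M. (1930)")
d[1:3, ]$meta
# [1] "Ezekiel, M. (1930)"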

Comments
  • A similar question was asked here
  • @daroczig Cool! Many thanks! If I may follow up: how would one modify your statement attr(x, "source") so that it prints both the attribute name (source) and the attribute value ("Ezekiel, M. (1930) Methods of Correlation Analysis. Wiley.") side by side, and exports them to a .csv file?
  • One more thing: attr appears to be data-frame-specific, not variable-specific. It is useful for adding a source, source media, filename and other attributes common to all variables in the data frame, but it is not so obvious how to add a variable-specific label and notes without adding one attribute per variable (e.g. attr(x, "var1_label") <- "A label" and so on). Maybe that is OK...
  • @Fred: I added more details to my answer; I hope they are helpful. As a self-taught R learner with limited knowledge, my answer will not fulfill all your needs, but I hope it gets you closer to the goal.
  • You can actually add attributes to each variable in a data frame: attr(x$var1, "foo") <- "label"
  • Thanks for UPDATE daroczig. This is very useful. It certainly helps me think in a more structured way about the sort of function I need and some possibilities.
  • There's now also the memisc package, which appears to implement just this: S4 classes for survey and codebook metadata.
  • Thanks! That merging loses the comment on dat is to be expected, even desirable: the merged data may have two different sources. One angle of attack is to approach it like melt, that is, to populate variable-level comments with data-frame-level comments before merging. In other software I have atomized the documentation down to the observation level, which is useful when a record for a unit has been spliced or gap-filled with records from other sources (but generally this is overkill).
  • Thanks! These are all good points. I don't document all my work so carefully, but for large collaborative projects, for publication, etc., it is useful. Some of my collaborators would not touch R with a 6 ft pole. Having codebooks and data in flat files helps collaborative work. Finally, in Aremos I would document the raw data; any data created from there is automatically labeled with the formula used to create it, so you can always go back up the chain and see what you have (e.g. creating y <- x*z would create a label field for y that reads "y <- x*z").
  • That self-documenting feature of Aremos is pretty neat. I didn't realize it did that. Everything old is new again! My answer was certainly not an answer to your question and was more of a "something to consider" comment. Thanks for taking it in that context.
  • I guess your approach might work better in combination with Sweave, in the sense that any relevant comments in your code ought to be reflected in the final document. The advantage is that other collaborators don't need to read the R script; the disadvantage is that the LaTeX doc is usually prepared at the end of the process, whereas documentation starts at the beginning. So codebook + Sweave might be ideal (if laborious)...
  • BTW, raw data files and the R script may work as replication files. But having replicated some published work, I find authors often only make available their final "analysis" database. Replicating the latter is nearly impossible, as most original data providers don't have a version control system. Moreover, the analysis data is typically put together by some RA whose scripts are long lost. All the author has is the analysis script, with little info on where the data came from, and data that is often poorly documented, if at all. Not sure putting those two together counts as replication.