Experiential Learning and Retention

About one in four college freshmen leave their school and do not return for their sophomore year. So why, when faced with such a big problem, would I decide to talk instead about the importance of experiential learning in college at the Annual Conference on the First-Year Experience?


Because one of the big reasons why some students decide to drop out of college is that they don’t see how it is relevant to their lives in the twenty-first century. If you are paying a lot of money (as well as taking on student loans) and think you are not getting anything out of it, you might very well decide to focus elsewhere.


The number one reason why students go to college is to get a better job. If you don’t think college is going to do that, then you look elsewhere.  And there are more alternatives to college now than ever before with the proliferation of online learning in many forms (some free) and bootcamps such as General Assembly that provide immersive training in specific areas that is designed to get you that better job in less time and for less money than college.



Experiential learning programs can bridge that gap. Students with internships or co-ops are much more likely to see the relationship between what they are learning in school and what they are doing in the workplace. The research shows that these students also have greater gains both in college and after they graduate. They also have stronger ties to their alma mater after graduation.


Given the importance of helping students obtain internships and co-ops to retention, success in school and beyond, as well as stronger alumni ties, you’d think that colleges and universities would be putting lots of support into this process. Unfortunately, you’d be wrong.


It’s time that higher education leadership recognizes the importance of this work, especially as potential students move towards alternate pathways to the better job that they are so focused on.   

Work and Learn Programs Demonstrate Long-Lasting Effects

Evidence has been mounting to show the positive outcomes for college students engaging in experiential education programs such as internships or co-ops. Such experiences have been designated “high-impact practices” based upon extensive research using the National Survey of Student Engagement (NSSE), and the Gallup-Purdue Index counts them among the “big six” college experiences that lead toward higher levels of workplace engagement and wellbeing. Time and time again, work and learn programs demonstrate powerful and long-lasting effects.


But, how well are colleges facilitating such experiences for their students? This was the focus of a survey that I conducted with the support of ACT’s Center for Equity in Learning: Characteristics of Experiential Learning Services at U.S. Colleges and Universities.

There are a number of take-aways from this study, and I encourage you to read the (short) report, but in this blog I want to focus on support for working learners, a topic the Center has explored previously.

While most administrators told us in the survey that they provided a number of services and programs to undergraduates seeking a work and learn experience, there was a general lack of assistance specifically targeting how to successfully manage both school and work. Only a third reported offering personal counseling on how to balance school and work. Only a quarter reported that their office had extended office hours to accommodate working students. There was little support in helping working students manage academic deadlines alongside work obligations. A logical conclusion is that we need to provide more help to working learners if they are going to be successful in school and career.

Many Career Services staff want to provide this assistance, but do not have the resources to do so. They need additional staff to both run these programs and to ensure that students see such resources as useful and supportive of their success. Many survey respondents told us that it just was not an institutional priority to further support this work. We must work to change this.

The mismatch here is that trustees, presidents, and provosts want and need the outcomes of such programs. They need to demonstrate that their graduates are going to be successful in their careers and other aspects of their lives. One way to do that, we know from research, is to promote the connection between learning in the classroom and the world outside the classroom. If college leaders align their institutional missions with greater support and funding for students to engage in experiential learning, everyone wins.

Average Student Debt Is Now $28,350

The Institute for College Access & Success (TICAS) released their annual report today on student debt, with the finding that the average loan amount upon graduation from a four-year college in 2017, for students with loans (65% of all graduates), was $28,350. Although from 1996 to 2012 we saw the average debt rise about 4% a year, this has slowed to where there was only a 1% increase from 2016 to 2017.

The TICAS figures are similar to ones collected with different methodology by the U.S. Department of Education’s National Postsecondary Student Aid Study, which calculated the average debt at $29,650 for 2016 graduates (the national study is only conducted every four years).

There are, however, significant differences depending on what institution the student attended, as one might expect. On the low end, the average debt at graduation was $4,400, but on the high end students were graduating with an average debt of $58,000. It matters a great deal financially where one goes to school.

In addition to the loans taken out, in 2015-2016, on average students and their families paid $6,600 out of pocket on top of scholarships and grants received.

Organizations like TICAS help us understand the actual situation of student debt to help students and families go beyond the hype as well as assist policy makers who are trying to make a difference.

Evacuating Campus

As a number of college campuses are currently evacuating ahead of Hurricane Florence, I thought it might be useful to report on some findings from a study I did last year on the evacuation process at a residential college.

I surveyed undergraduates who had been under a mandatory evacuation from their college. The idea here was to gather information about the process with an eye towards improvement. As such, I asked about three phases of evacuating: 1) immediately after the decision to evacuate was communicated, 2) the evacuation itself, and 3) the return to campus life.

Here are some of the themes we saw.


Advance planning is key. Although the college had communicated with students about the likelihood of a hurricane, and had very good policies and procedures to follow, few students actually had plans in place before the threat was imminent. As one student wrote: “[I was] scared because I did not have a plan in place for where I was going to evacuate to.” One recommendation was to incorporate a mock evacuation exercise into first-year orientation.

Communication is necessary at all stages. Students appreciated frequent communication. The most useful communications they received, they told us, were from parents and other family and from the college. They preferred email over other forms of communication.

Be flexible. As the path of a hurricane can vary quite a bit from predictions, and the situations one can encounter on the way (e.g., traffic, full hotels, gas lines) are also constantly changing, many students changed plans. Some changed en route, with 20% telling us that they ended up at a different place than they had planned to be when they left campus. Half of them stayed at more than one place during the week that they were evacuated.

Keeping up with academics was a big concern. The biggest concern students had throughout the process was academics, with 62% telling us they were either “extremely” or “very” concerned about this. Spotty internet connections, combined with the general anxiety about being evacuated and what would be left to return to, made it hard to focus on schoolwork required during the evacuation. This continued to be a concern once back on campus, as 42% told us it was hard to get back into classes after returning, even though many felt that faculty were accommodating.

All in all, many students learned from the process. As one student told us, “I realized how much independence I had gained, I realized I had the ability to communicate, and to get along with people I had not known previously.”

The survey helped shed light on the process…the good and the not as good…and the impact of evacuating. Evacuating ahead of a hurricane is an experience full of anxiety, but managing the process can reduce that anxiety for all involved.

Major Dilemmas

A recent study on the timing of declaring a college major by the Education Advisory Board (EAB) indicates that changing your major as late as your senior year does not necessarily hurt your ability to graduate on time, contrary to popular belief. There is a nice article summarizing this on Inside Higher Ed for those who want the shorter version. 


We certainly need to know more about the impact of college major. When is the best time to have students declare a major? Declaring early makes it more likely that the student will be able to complete all the requirements on time. But forcing that choice too early can mean making a decision that is not all that well informed. In the EAB study, students who declared early and stuck with that major were slightly less likely than the late major changers to graduate. 

There are a lot of other things to consider here. First off, the study looked at when students officially declared their major, which might not be when they actually decided to switch and changed their course plan accordingly. It certainly could be the case that they just saved the paperwork until later in the game.

In looking at the impact of a major change, one also needs to take into account how drastic the change is. When I directed the Cooperative Institutional Research Program (CIRP), we looked at major changes and found that many of them were within the same related area: marine biology to biology, for instance. These are likely to share common courses that would count for either major, unlike, say, a switch from marine biology to art history. The latter change is likely to require additional courses.

Then to throw another wrench into the picture, there are double majors, also a topic of recent examination. In another new study, students with double majors scored higher on a measure of innovation than those with only one major. The authors then tied this into research showing that innovation is a desirable trait that employers look for.

The impact of major is key to the college experience, and it's great that we are learning more about how to help students navigate this important decision. 


Assessing Wellbeing

I'm just back from NASPA's Annual Conference, where I presented with colleagues from Wake Forest University on how we have been creating a wellbeing assessment for college students. 

We've done several pilots of this instrument, and the findings tell us that wellbeing is a serious issue among college students: 

• 75% felt unable to stop worrying

• 54% felt depressed

• 54% felt isolated

Wellbeing is a huge concept, and so we had to decide what to concentrate on in this assessment. There are three organizing principles that we have been using to craft the instrument.

  1. It needs to include areas of wellbeing that are developmentally appropriate for the institution. Wake Forest students are predominantly 18- to 23-year-olds, so this is the age group we focused on. This is not to say the instrument would not be appropriate for older students, but this was our focus. 
  2. The wellbeing components have a substantial body of research behind them. 
  3. The survey is actionable. We are only concerned here with information that provides guidance to practice and policy, so the components must be ones that can be changed through these methods.

Here is a link to the PowerPoint we used. And here is a link to the project as a whole.

This is great work in a crucial area of the student experience. 

Losing Foreign Students

For the first time in 35 years, total enrollment of international students in U.S. colleges and universities has dropped, according to multiple sources using National Science Foundation and U.S. Department of Education data. 

As pointed out in the Brookings report cited above, one of the effects of this downward trend is a negative impact on college revenue. International students typically do not receive financial aid from the institutions in which they enroll, which means they pay full tuition in a time when that is not the norm. So, international students can provide much needed cash in an era when declining state appropriations at public institutions and high tuition discount rates at private institutions are eating away at the revenue stream. 

We saw a foreshadowing of this in the Chronicle Pricing Survey that I conducted with the Chronicle of Higher Education in the fall of 2017. Fifty-nine percent of presidents and chief financial officers at private not-for-profit colleges and 40% at public institutions told us they were either "extremely" or "very" concerned about potential federal policies resulting in a "diminished ability to attract international students." 

There are many benefits of enrolling international students besides tuition revenue. But the decrease in this source of revenue will need to be offset by an increase in other sources, and the one source that colleges and universities have the most control over is tuition. The decrease in enrollment of international students could mean an even larger increase in tuition for U.S. students. In an era when college tuition needs to go down and not up, the loss of international students is a move in the wrong direction.

Why Students Don't Graduate from College

"Too many college students gradua [sic]with six-figures of debt, wondering how they'll ever pay it off. If they had gotten good advice on the importance of taking enough credits to graduate quickly, they could have planned better and avoided unnecessary debt." -Tom Kean, New Jersey Senator

To remedy the situation as he sees it, Senator Kean proposes billboards and other marketing aimed at encouraging students to take 30 credits a year towards their degree rather than fewer than 30.  

Let's look at the premise, first. 

Are too many college students graduating from college with six figures of debt? Most of the data says no. According to the Federal Reserve Bank of New York, only five percent of students graduate with over $100,000 in student debt. Part of that, perhaps a large part, is debt from graduate programs such as medicine and law, not undergraduate education.

Many students do borrow money for college: about two out of three. The average debt that a student graduates with is $30,100, according to one of the best studies on this by the Institute for College Access & Success. On a ten-year repayment schedule, that is $346.39/month.
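That monthly figure follows from the standard fixed-rate loan amortization formula. Here is a minimal sketch, assuming the 6.8% fixed rate that applied to many federal Stafford loans (the rate is my assumption; the post does not state one):

```python
def monthly_payment(principal, annual_rate, months):
    """Fixed-rate loan payment: P * r / (1 - (1 + r)**-n)."""
    r = annual_rate / 12  # monthly interest rate
    return principal * r / (1 - (1 + r) ** -months)

# $30,100 repaid over 10 years (120 months) at an assumed 6.8% annual rate
print(round(monthly_payment(30_100, 0.068, 120), 2))  # -> 346.39
```

Note how much interest matters: at a 0% rate the same loan would cost only $30,100 / 120 ≈ $250.83 a month.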

So, not a lot of students are graduating with "six-figures of debt." There is widespread debt, but to a much lesser degree.

Let's turn to the next claim, that students are not graduating because they did not plan better and took too few courses. Most of the extensive studies on graduation do not find that students leave college due to poor course planning. 

What are some of the reasons why students don't graduate?

Financial. Some leave because they cannot afford to pay tuition. Maybe their family contribution suffers from a lost job, a medical emergency, or a housing problem. In some of these cases the student not only cannot afford continued college costs, but might actually feel the need to move back in with the family to help with a crisis. Or, maybe to work and contribute to the family finances.

Another financial impact on college graduation is needing to work while in college to help pay the bills. Students who work many hours a week for pay are more likely to drop out. Long work hours also cut into academic and social time: students who work long hours can suffer academically, form weaker connections with fellow students, and feel less of a sense of belonging to the college. All these issues impact graduating.

There are also institutional characteristics that impact graduation. Research that I have done shows that institutions that devote fewer resources to student support tend to graduate fewer students. Barriers include high student-to-faculty and student-to-advisor ratios.

I'll point out one more big problem: remediation. Many students graduate from high school ill prepared for college work. This can mean that the first year of college is spent retaking classes, such as English and math, that should have been mastered in high school. When a student who might already be feeling financial pressure to attend college sees that the whole year of "college" is really just a repeat of high school, he or she understandably has second thoughts. 

I applaud the New Jersey Senate for seeing that college graduation needs help. But does it need billboards? Probably not. More state money needs to go towards supporting students. Two big ways to influence graduation would be to increase financial aid and to increase student support. 

I think that's a better way to solve the problem than to blame the student for poor planning.   

What Really Drives Tuition

We often hear, from faculty, higher education pundits, and just from random people on the street, that the high cost of college is driven by spending on college administration. We've all seen the stories about the million-dollar lazy river on campus! And while there are a few of those (not paid for by tuition dollars, though), this is hardly the norm.    

In the Chronicle Pricing Survey that I conducted with the Chronicle of Higher Education, we asked college presidents and chief financial officers how important various expenditures and revenue streams were in determining undergraduate tuition this past year.  

About three out of four (74%) college leaders told us that the "cost of faculty" was either "extremely" or "very important" in determining tuition. This was the expenditure of highest concern for them.  

The cost of administration was of concern to many fewer college leaders, with only 44% telling us this was an "extremely" or "very important" consideration in setting tuition.

So, while administrative costs have an impact, it is quite a bit smaller than that of faculty costs. This is probably as it should be. But that is not how the issue is perceived.  

College tuition is not largely driven by administrative costs. So the next time you hear someone say otherwise, remember that you have the data on your side.

So, how would higher education professionals put this finding to work? I turned to my friend Gavin Henning, past president of ACPA and snappy dresser, for a consultation. He suggested looking at it through the lens of staff level.

For entry- and mid-level staff, Gavin said, this "helps to provide a counter narrative to what we hear in the news." Sharing this data with colleagues helps ground their experience in evidence.

For mid-level to senior-level staff, this information can be used in budget discussions. Student affairs professionals have student learning at heart, and many students and alumni tell us that their learning was greatly enhanced by their experiences outside the classroom. Student affairs professionals can use this information to stave off budget raids when others think that "administrative bloat" needs to be curtailed. 

Data always contributes to the story if you use it wisely. 



The Current Practice of College Tuition Discounting is Not Sustainable

In my last blog, I wrote about the practice of tuition discounting, and how college presidents misunderstand the way applicants and their families actually view tuition. Tuition discounting is the widespread practice of setting the “sticker price” of tuition at a high level, but then offering financial aid to discount what people actually pay. Hardly anyone pays full price. The average discount is about half the price at private colleges and universities.

And the rate at which schools discount is increasing every year. In the last ten years, it has increased about a percentage point a year, from 38.6% in 2006-2007 to 49.1% in 2016-2017 (figures are from the very well done NACUBO Tuition Discounting Study).

Economically, this annual increase is a disaster for colleges and universities, since it means less tuition revenue per student.


The schools know this. In a survey that I created and conducted with The Chronicle of Higher Education, we found that four out of five college presidents and chief financial officers (CFOs) thought that the practice of tuition discounting was unsustainable. At private institutions, we found that among CFOs, the people most knowledgeable about institutional finances, 70% thought that tuition discounting was not sustainable at their own institution.

Clearly, something needs to change.

For more results on the College Pricing Survey, get the report from the Chronicle’s website.

The Tuition Pricing Crisis

We have a big problem with college tuition.  

Multiple analyses of college tuition have indicated that it has risen at rates higher than inflation, health care, housing, and a host of other items and services.

The inevitable question, then, is why is college so expensive?  And how are tuition prices determined?

Much of the research on tuition pricing looks at economic data, such as the relationship between tuition increases and decreases in state monies going to higher education. At the same time that we see public colleges receiving less money from the state, we also see that tuition at those institutions is rising. The conclusion then is that those responsible for setting tuition are making the decision to raise tuition to make up for decreasing state contributions.  

There are many other factors, however, that are more complicated.  And while the decisions around tuition pricing might be informed by economic factors, ultimately, they are made by people. 

I wanted to know what the people making the decisions were thinking.  

So, I worked with The Chronicle of Higher Education to develop a survey for college presidents and chief financial officers that would ask about their decision-making process. The results are available in a nice report on the Chronicle’s website.

I always think that when you have to solve a problem, it’s important to trace that problem back to assumptions that people have made.  They are not always right.  So, here’s a big disconnect we found.

College leaders’ assumptions about what prospective students and their families understand about college pricing are pretty much wrong.

Here’s the situation.  Many of you know this already, so bear with me for a minute.  

Hardly anyone pays those high sticker prices for college tuition.

The real problem is that many people looking at colleges don’t know this.

College leaders know that prices are too high for many to afford, and so after those high prices are published, the colleges throw a lot of financial aid at the problem.  A high sticker price is associated with quality.  Just like you assume that a $50 bottle of wine is much better than a $10 bottle of wine.  And it usually is, but not many people can spend $50 on a bottle of wine.  So, colleges discount tuition.  On average, they discount it by about half.  

That means that at a college with a sticker price of $30,000, the average amount that gets charged to students is $15,000 (of course it varies student by student and school by school, but this is about average).  
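That arithmetic can be sketched in a couple of lines (a toy illustration; the 50% figure is the rough average discount discussed above, and actual discounts vary widely by student and school):

```python
def net_price(sticker, discount_rate):
    """Average price actually charged after institutional aid is applied."""
    return sticker * (1 - discount_rate)

# $30,000 sticker price with the roughly 50% average discount
print(net_price(30_000, 0.50))  # -> 15000.0
```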

All of a sudden students and their families are paying only $15,000 a year for a $30,000 a year college education.  Great deal, right?  

Here’s the disconnect.

College leaders assume that prospective students and families know this, and take it into account when applying to schools.  We tested this in our survey, and over half of college leaders thought people knew they would get a discount.  Three out of four college leaders at private not-for-profit institutions thought that students knew they would not pay sticker price due to tuition discounts.  Many college leaders also thought sticker price would not stop prospective students from looking at a school.  

Other research indicates that this is just not the case, however.  A 2013 survey by Longmire and Company, Inc. indicated: “Approximately 4 in 10 students and parents say they rejected colleges on the basis of their published sticker price alone. Six in ten say they are unaware that ‘colleges discount their published price so that incoming freshmen pay less than what is published.’”   
This is a remarkable mismatch. Forty percent of potential students reject a school out of hand purely on sticker price.  Even more have no idea that tuition discounting lowers what they would actually pay.  This is a very large sector of the population that thinks it cannot afford an institution when it might actually be affordable.  The sticker price shocks them.  Yet the people setting that price think that what they are doing is well known.  

It’s not.  Obviously, this is one of the big problems with college tuition.  And there are more that are in the report I did with the Chronicle.  I’ll talk about a few more in subsequent blog posts, but you probably want to get that report.  It's got a lot of good information for college leaders as well as prospective students and their families. 

The Many Paths to an Education

Rainesford Stauffer "somewhat blindly" chose her college, she tells us in an opinion piece in The New York Times. When she arrived she seems to have done some of what we tell students to do to succeed. She joined clubs and took her studies seriously.  But she "struggled to conform to campus life."

She did not return after her first year of college.  Unfortunately, this is not unusual, as about 4 out of 10 first-year students do the same. One in ten will go to another school that next fall.  That leaves 3 out of 10 trying to figure out what else to do. 

The young woman goes on to tell us about how she went to work, volunteered, and eventually obtained some college credit for what she learned in her experiences (what is called "prior learning credit") and graduated from college this past spring.  

But clearly there was a mismatch between her interests and that first college. And perhaps a key to that mismatch is choosing "somewhat blindly."  This is too important a decision to make "somewhat blindly," but oftentimes people do. Perhaps that is why recent research indicates that half of college alumni wish they had gone to a different college, chosen a different major, or pursued a different kind of degree. 

Rainesford felt like a failure when the expectation she (and others) had of her life did not come true.  But her story is really one of success, in which she finds joy in different careers and eventually gets that degree.  The sadness is that she felt like a failure.

What I want is for a few things to happen.  One is that we make it easier for potential students to pick a college that is right for them.  There is just too little information out there about what matters and how to pick a place that is right for you.  Another is for people to recognize that going straight from high school to college is not the only way to be successful in life.  

Take a year off and figure out what you want to do and why you want to do it.  A gap year can be a great experience that can focus your thoughts.  It is not just for the wealthy.  There are many ways to earn what you need during a gap year. 

Take local classes at a community college while working and taking time to figure out what you are interested in.

There are many paths to an education.  That is more true every day in this world.  More and more people are taking paths like Rainesford's that involve combinations of working and learning.  And at 23, I bet she is not done yet.  

Higher education does not start, or stop, after graduating from high school.  Learning is a life-long and enriching activity.  I just signed up for my first online course this past weekend and am pretty excited about it.  

Education should be something that one is excited about.  If it's not, maybe that's a sign to try it another way.  There are many ways.







What is a Good Response Rate for Surveys?

"How can we get our survey response rates up?"

I'm asked this question a lot.  Everyone who conducts surveys wants more people to take their surveys.  But people, for the most part, don't seem to get excited about all the surveys we have for them.  With the ease with which a web survey can be created and implemented, more and more surveys are being launched.  And the more survey requests someone gets, the less likely it is that they will comply.  So response rates have been falling for years.  

There is an assumption that the higher the response rate is to the survey, the more accurate the results will be.  Oftentimes the first question I will get about a survey is about the response rate, because people have been conditioned to connect a low response rate with inaccurate results. This is most often used when someone does not like the results of the survey.  A low response rate lets people dismiss findings that they don't want to hear without actually having to deal with those findings. 

But (shhhh) nobody really knows what a good response rate is. 

I recently, however, came across a paper that addressed this in a great way.

"How Important are High Response Rates for College Surveys?" is a paper by Kevin Fosnacht, Shimon Sarraf, Elijah Howe, and Leah K. Peck, all at Indiana University Bloomington, that was published in The Review of Higher Education in the winter of 2017 (I've linked to a version of the paper you can access without a library subscription).  It is one of the best treatments of this issue I've read.

OK.  Now, think about web surveys.  You get an email asking you to take a survey.  Maybe you do that (thank you!), or maybe the email sits in your inbox and gets lower and lower until you forget about it.  A few days later you get another email asking you to take that survey.  But you are busy brushing your cat and there is hair everywhere and then after you clean that up there is that show you've been binge watching, and again you don't take the survey.  A few days later, there it is again, another reminder to take that survey.  But this time you aren't terribly busy with something else and it sounds kind of interesting, so you click on the link and take that fascinating survey.  

You've just gone from a non-respondent to a respondent.  If I hadn't sent you that second reminder (you knew it was me sending you that survey, didn't you? It's kind of what I do), you would not have taken it.  There is a direct relationship between the effort the researchers take to get people to respond (e.g., number of emails, offering incentives, leaving the survey open for longer times, etc.) and how many do respond. As the person administering the web survey, I also know when you completed it.  So I can tell if you did it right away after that first email (low effort), or if you only did it because I emailed you three times (high effort).

We can compare the results of the early respondents with those who are later respondents.  So, what are the results when we only had, say, 30% of the respondents answer?  Let's add in the next wave of respondents and see if the results change when we have 50% answering. This tells us if the results change (and maybe get more accurate?)  as a function of the response rates.  

The authors of this particular study took a large database of responses from multiple institutions to the National Survey of Student Engagement (NSSE) to test this and, using the timing of the responses, simulated surveys with various response rates.  They could then see what happened if a survey had, say, a 10% versus a 90% response rate.  Which is very cool.  
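To picture how this kind of simulation works, here is a minimal sketch.  This is not the authors' code; the sampling frame, scores, and response-day logic are all made-up illustrative assumptions.  The idea is simply to order respondents by when they answered, then treat only the earliest slice of them as the "respondents" at each simulated response rate and compare the estimates:

```python
import random
import statistics

random.seed(42)

# Hypothetical sampling frame: 1,000 invited students, each with a
# satisfaction score (what the survey measures) and a response day
# (when they finally got around to answering).
frame_size = 1000
respondents = [
    {"score": random.gauss(3.5, 0.8), "response_day": random.randint(1, 21)}
    for _ in range(frame_size)
]
# Sort by response timing: earliest responders first.
respondents.sort(key=lambda r: r["response_day"])

# Simulate different response rates by keeping only the earliest
# responders up to each cutoff, then compare the resulting estimates.
for rate in (0.05, 0.30, 0.50, 0.75):
    n = int(frame_size * rate)
    subsample = respondents[:n]
    mean_score = statistics.mean(r["score"] for r in subsample)
    print(f"response rate {rate:4.0%}: n = {n:3d}, mean = {mean_score:.2f}")
```

If the early and late responders are broadly similar (as the study found), the means at 5% and 75% come out close to each other; real nonresponse bias would show up as a drift in the estimate as the later waves are added in.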

And what they found was that response rate didn't really matter much.

The reliability of the results was pretty good whether you used a 5% response rate or a 75% response rate.  In fact, what mattered more was how many respondents you had, with smaller numbers being less reliable than larger ones.  Their study showed that reliable data could be obtained with a sampling frame of 500 even when the response rate was as low as 5% to 10%.  Obviously, the number of respondents you need should also be influenced by the types of analyses you want to conduct, especially if you want to look at subgroups of students (for instance, men versus women).  
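The intuition that the number of respondents matters more than the rate follows from basic sampling theory: the standard error of a sample mean shrinks with the square root of the number of respondents, and the response rate appears nowhere in the formula (nonresponse bias is a separate question).  A quick illustration, with made-up numbers:

```python
import math

def standard_error(sd: float, n: int) -> float:
    """Standard error of a sample mean: sd / sqrt(n)."""
    return sd / math.sqrt(n)

sd = 0.8  # assumed standard deviation of the survey measure

# Two hypothetical surveys: a small frame with a high response rate,
# and a large frame with a low response rate.
small_frame = standard_error(sd, n=int(400 * 0.60))    # 60% of 400 -> 240 respondents
large_frame = standard_error(sd, n=int(10000 * 0.05))  # 5% of 10,000 -> 500 respondents

print(f"60% of 400 invited   (n=240): SE = {small_frame:.3f}")
print(f" 5% of 10,000 invited (n=500): SE = {large_frame:.3f}")
# The low-rate survey has the smaller sampling error because it has
# more respondents -- assuming responders resemble nonresponders.
```

That last assumption is the important caveat: a low response rate hurts only to the extent that the people who respond differ from the people who don't, which is exactly what the early-versus-late comparison is designed to check.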

One study should not overly influence our practices, but the authors also provide evidence of similar findings from other researchers.  I definitely recommend that you read their paper, as there are nuances in there that a blog cannot capture.  But perhaps we are seeing a move away from the tyranny of the response rate.   


More on Parental Communication in the First Year of College

Devoted readers will recall that my last blog post was prompted by a paper by Sax & Weintraub on patterns of communication between first-year college students and their parents.  After I'd written it I realized one thing was nagging me about the results, and so I contacted my friend and colleague Linda Sax, who was one of the authors.  I was struck that the article did not discuss whether there had been different patterns of communication with mothers and/or fathers for male and female students, and so I asked.

Linda told me that this was actually the second paper they had written from this set of data, and that indeed the first paper (which she kindly sent me) had published results showing that yes, indeed, there were differences.

As one might expect, communication while students were in school was most frequent between mothers and daughters, and then between mothers and sons.  For fathers, the pattern was reversed: sons were slightly more likely than daughters to communicate with their fathers.  Many students were satisfied with the amount of contact they had, although 46% of women and 33% of men wanted more contact with their fathers.

The desire for more communication with fathers is associated with declines in emotional wellbeing from entering college until the spring of freshman year.  While it is difficult to use these data to definitively say that lower levels of communication with fathers lead to lower levels of emotional wellbeing, this is certainly a good possibility.  

In the other Sax & Weintraub paper I wrote about, we saw similar results, with lower levels of communication with fathers associated with lower levels of academic adjustment. 

Again, as parents we need to remember that our job is not done once that last box is moved from the car to the dorm room.  Keeping lines of communication open is key in helping students be successful in that crucial first year of college. 

Parental Communication in the First Year of College

On a rare rainy day in Los Angeles, the latest issue of the Journal of the First-Year Experience & Students in Transition arrived via the U.S. Postal Service, and I took the opportunity to sit down and have a good read.  It turned out that my friend and colleague Linda Sax, from whom I had inherited the position of director of the Cooperative Institutional Research Program, had a paper in this issue with her collaborator, Danya Weintraub. In this paper they examined the role of parents in first-year students' college adjustment.


In this study, the authors used data from the CIRP Freshman Survey, filled out during orientation, and combined it with data from these same students on a follow-up survey the next spring.  All the students were attending a public university in the western United States.  So the authors had data from two points in time: the beginning and toward the end of freshman year. 

The authors cited work I had done a few years ago, also with the CIRP Freshman Survey, with some questions I'd added on parental involvement in college-related decisions.  At the time, the "helicopter parent" was a popular term to describe parents who swooped down from above and intervened in their child's life.  The thinking was that this overparenting was creating young adults who were not getting the necessary experience with solving their own problems, as well as not being used to dealing with failure.  We were interested in finding out how students felt about their parents being involved in decisions such as where to go to college and what courses to take.  What we found was that to a large extent incoming freshmen thought their parents were involved the "right amount," and not too much or too little.

Getting back to Linda and Danya's article, they asked these freshmen, in the spring of their freshman year, about how often they communicated with their parents.  What they found was that students were in contact with mothers more than fathers, and that they mostly talked on the phone.  Texting was secondary to phone calls, and email was next.  This was in 2012, and my guess is that in 2016 texting might have taken over for phone calls.

While the more frequent communication with mothers was seen as just the right amount by 72% of students, the less frequent communication with fathers was just the right amount for only 55%.  One third (33%) of students reported that the level of communication with fathers was less than they would like.  

The authors then went a step further to see if parent-child interactions had a relationship with adjustment to college.  Those students who sought out their fathers for social and emotional support had stronger academic adjustment to college.  Those students who wanted more contact with their fathers than they had showed weaker academic adjustment.  For fathers who are inclined to let mothers take the lead in communication, this is a wake-up call.  We Dads need to be sure to keep lines of communication open.

Mothers are important too.  Academic adjustment was better for students who felt that they had high quality communication with their mothers.

When I dropped my daughter off at college this fall the school had a two-hour session for parents that was basically about letting go.  Parental communication was encouraged, but at the pace that the student initiates.  Not at helicopter parent rates.  I would add that it is important to check in with your student to make sure that they feel they are getting the support they need at the rate that they feel is best.    


What Parents Need To Know

I did something new this week.  Usually I speak to college faculty, administrators, researchers, or policy wonks about my research.  But having, once again, come out of the college admissions process alive and witnessing another excellent choice by one of my children, I had an idea to do something new.  Talk to parents of high-school students.

Choosing to go to college isn't easy.  There are a lot of choices and it is a big financial commitment.  There are also a lot of misperceptions out there.  I saw that in visiting colleges with my children and in talking with parents of their friends.  I've been in higher education for over 25 years and there were still things I didn't know.  So I decided to try and help, and spent about three months (off and on) putting together a talk that tries to convey some helpful information in what can be a time of great stress.   I want to reduce that stress for parents and students. 

I approached Notre Dame High School's Counseling Department (both my son and daughter graduated from there and had excellent college counseling) with the idea: would this be of interest?  Turns out it was, and on Tuesday night I spoke in front of a standing-room-only crowd. 

I was interested in how well I predicted what would be interesting to parents.  Sure enough, the whole idea of tuition discounting was the one that shook the room.  I used the Department of Education's College Scorecard to demonstrate this.  Only a handful of parents knew of this resource.  One of the parents who came up to me after the talk told me he had immediately gotten on his phone and looked up all the schools his child was interested in.

Here are the main categories I discussed:

Why go to college?
Does it matter where you go?
What does college really cost?
Will my child have crippling debt?
How do you do college right?

My goal with this is to make the process less stressful for parents, and then hopefully also less stressful for their children.  From what I heard Tuesday night, it might have worked.    

More On College Rankings

In today's Sunday New York Times, Frank Bruni has excellent advice for how one should use (and not use) college rankings.  

There are two schools of thought on rankings.  Those of us who have looked into how they are created usually side with Mr. Bruni: that 1) the premise is flawed to begin with and 2) no one ranking is going to be definitive for all prospective students.

I've written previously on how a multirank system that could be customized by the user would be a better system than the static rankings we have now.  There are a wide variety of rankings that all decide for the user what is important for them to look for in a college.  Having had two children go through the college search process, I can attest that there is no one size fits all.  As Mr. Bruni suggests, if one has to use rankings, then use several to illustrate different sides of a school.  For instance, if community service is of interest, use the Washington Monthly rankings to get an idea of some of the schools that foster such interests.

But, as I am also on record pointing out, the rankings aren't really used as much by prospective students, but more often by college presidents, trustees, and alumni.  The people creating the rankings know this, and some are not geared towards prospective students.

So refer to rankings if you need to.  But also do your own homework.  Rankings are like the old Cliffs Notes that would summarize plots of books for those students who didn't have the time or inclination to read the whole book.  You can get the gist of what happened, but you miss the nuances in the prose of what distinguishes a good book from a great book.  And don't we all want that great book?    

Learning About Contemplative Education

I was honored to have been asked to keynote at Naropa University's "Mindfulness, MOOCs and Money in Higher Education: Contemplative Possibilities and Promise" this past weekend, bringing a higher education research viewpoint on how we are looking at new ways of defining and demonstrating success with our college graduates. 

It was indeed a delight to meet one of my fellow keynoters, Laura Rendón, whose work I had long admired.  When I was director of CIRP, we took some of her validation concepts from the qualitative realm into the quantitative and created several validation constructs in the student surveys.

I came away with a new understanding of contemplative education, which encourages experiential learning and introspection in one's learning process.  I was struck by how we see this in different ways across many areas of education.  In a sense, Sandy Astin's involvement theory is contemplative in nature, as it says that we get more from our education the more involved we are in it.  Work that I was involved in at Gallup showed the impact of internships that allow one to connect the work environment with academic learning in the classroom, which is surely a type of experiential learning.  Studies with the NSSE (National Survey of Student Engagement) also clearly show the benefits of being engaged in and out of the classroom.

And so it was the best type of experience.  I went there thinking I was going to teach them, but perhaps I ended up learning the most. 




Upcoming Presentations

I have two presentations this spring with colleagues from Wake Forest University on work concerning college student wellbeing.  

ACPA: Tuesday, March 8  The Engine Model for Understanding And Assessing Student Wellbeing 

College student wellbeing is a topic of great interest among student affairs professionals, faculty, and the general public, yet there is not a comprehensive understanding of college student wellbeing. The Wake Forest Wellbeing Assessment uses a new theoretical framework: the engine model of wellbeing. From this model a survey was created to measure wellbeing with the express intent of providing actionable information for programs and policies that serve students. Results from the pilot in fall 2015 will be presented and discussed.

NASPA: Monday, March  Creating a Theory-Based Well-Being Assessment for Undergraduates

The Wake Forest Wellbeing Assessment helps inform program and policy changes that promote wellbeing. The presenters will describe a new integrative theory of well-being focused on applicable measures and results that can be influenced in a college setting, explain how this theory was translated into a student survey, present the results of our first pilot administration of the survey, and use this information to drive audience discussion.  

Revisiting Limitations, Reliability and Validity of Large Database Research

I am often asked for a copy of these remarks I made as part of a presidential panel at the Association for the Study of Higher Education (ASHE) in 2011.  Here is an edited version of that talk.

I know many of you from the applied side of educational research: institutional research.  I don't think anyone here knows me from another hat I have worn in the past: cognitive psychology, where I first learned the literature on the complications of trying to investigate any human phenomenon.  Given the complications, it's a wonder any of us try at all.

Yet we do.  I cannot imagine what an association for the study of higher education might do if it were not filled with people who, despite the complications, tried to gather comprehensive and systematic data from the many different types of institutions of higher education that we have, so that we could assess, and attempt to improve, the experience for all students who aspire to hang a college diploma or certificate on their wall.

So, I welcome debate on this topic.  Everything we do should be reliable and valid, or at least as reliable and valid as we can make it without being paralyzed by doubt.  I wonder whether, if someone back in 1965 had said to Sandy Astin, "you know, you better like this CIRP thing, because it is going to dominate your life for the next 40 years…", he would have had second thoughts about this grand plan to inform higher education on the impact that college has on students.

Yet we cannot become like Congress: so divided and convinced of our own certitude that we not only accomplish nothing but anger a lot of people who rely upon us.  I do not want to see educational researchers on the chart that is making the rounds now, which shows approval of Congress at 11%, lower than Paris Hilton and BP during the oil spill. 

Let me go back to Sandy.  If there is anything I have learned in six years of directing CIRP, it is that you cannot go wrong by referring to Sandy Astin.  CIRP was started to answer big questions.  Big questions that could not be answered by a study over here that had one institution with their questionnaire and a study over there at a very different institution that phrased things differently.  Big questions that cross-sectional design could not and would never answer.  Big questions that needed a lot of little questions to get at the Big answers.  If Sandy Astin had waited until those questions were absolutely perfect and everyone agreed on how perfect they were, well, there would be a big hole in higher education research these days.  And Pat Terenzini and Ernie Pascarella's book would have only been six inches thick instead of seven.

But, as science does, we build upon the past.

What concerns me about some of the current debate is that it reminds me of the debate in Congress, which has, as we know, not accomplished anything except making a lot of us really upset.  Telling schools not to do NSSE is not the answer.  We should recognize the limitations in NSSE.  And CIRP.  And every other study of higher education out there.  But also realize that we are in better shape because of how the research from these tools has informed higher education in general as well as hundreds of institutions.

As scholars, sometimes we forget that, after all, the ultimate reason for this kind of work is to inform institutional change.  Not to get published, not to get grants, not to get tenure, not to get invited to conferences, but to actually have an impact on what our students get out of college.  National results from NSSE, and CIRP, and NCES, and Ernie and Pat's work get institutions talking about why and how they do what they do.  Having a local version of those results, like CIRP and NSSE offer, provides a great service to institutions that they cannot accomplish themselves.  We should be looking at how to improve these tools, not completely tearing them down.

As to reliability and validity.  I can tell you a few things about CIRP research.  I can tell you how we have looked at student self-report on things like GPA and SAT scores and find them highly accurate when compared to the actual scores.  I can tell you how we have good correlations of self-report measures of academic self-efficacy and subsequent performance on some standardized tests such as the California Critical Thinking Skills Test.  I could also tell you that I would love to do more of this kind of work. If anyone has a few million out there to help me with that, please come see me after the panel.

For every paper you can show me about how invalid self-report surveys are, I can show you another that says they are valid.  The key is not a blanket statement that people cannot remember what they did last week and so we should stop asking, but crafting questions that allow them to answer with a certain degree of validity.  But remember, even in physics, there is uncertainty. 

Let’s look at a typical CIRP question that asks for recall.  When asking incoming first-year students to reflect on the past year and indicate how often they were, for instance, bored in class, students are given only three response options: “frequently,” “occasionally,” or “not at all.”  Fairly straightforward in themselves as qualifiers, but the instructions also specify that students should mark “occasionally” if they engaged in an activity one or more times, but not frequently, and mark “not at all” if they have not performed the activity during the past year.  The wording of these questions provides sufficient direction to respondents and not enough latitude to waver off into vagueness.

I have personally administered this questionnaire to thousands of students over many years.  In the room with them, offering to answer questions.  They had questions, but they were more like “I just got my student ID and I don’t remember the number, what should I do?” and “why do they ask these questions?” and “can I go to the bathroom?”  Nobody ever asked me what this section of the questionnaire meant. 

Even so, what level of specificity in results do we really need in order to provide useful information, and how should we interpret results?  I am sure that all of you were as eager as I was to get up yesterday morning and read about the new NSSE results.  There is important information in there about a number of things, but let’s take majors.  One of the findings the media picked up on was that on average, engineering majors spent more hours studying than business or, pause for effect, social science majors.  To be more specific, engineering students studied an average of 19 hours a week and education majors an average of 15 hours a week.  Do I believe that engineering majors tend to spend more time studying than education majors?  Sure.  Do I for a minute think that, if we had a perfect way of recording hours spent studying (putting aside for a moment the Heisenberg Uncertainty Principle, which of course tells us that this is never going to happen), the final all-knowing result would be exactly 19 hours?  Not at all.

But the important piece of information here is the relationship between the groups.  That we can get without putting all the engineers in a box with Schrödinger’s cat.

Another important recognition here is that some questions just cannot be compared against outside standards.  Respondent opinions, perceptions, values, and beliefs about themselves are important aspects of their everyday experiences, and have value.  Certainly the questions should be crafted with care, by people familiar with all the potential sources of bias that can impact results.  But just because perceptions and values are not easily verified does not mean that they are not important or reliable in predicting student achievement. 

There are scores of studies that examine the connection between perceived campus climate and outcome measures, such as graduation, that are backed up by triangulating with observations and interview studies.  This is why it is a common practice in research to also use multiple questions that examine the same trait from different perspectives to create constructs that attempt to describe a phenomenon.  More sophisticated methods, such as how we at HERI are using item response theory in creating constructs, also have moved the field forward in this regard.

There is a very rich body of literature on survey questions. I encourage those of you interested in this topic to attend the annual conference of the American Association of Public Opinion Research.  They are way ahead of us in the area of survey methodology.  Don Dillman’s work, in particular, is masterful.  These people live and breathe the impact of question wording, response options and even horizontal or vertical formatting of the responses.  But they all believe that if we apply what we know about creating questionnaires, we can effectively utilize questionnaires to collect useful data.

So, in summary, what do I believe?

1)    There are important questions that only large scale surveys can answer.

2)    As with any line of research, there is uncertainty.  We need to recognize it and acknowledge it in interpreting our results.

3)    We need to always move forward in making our measures better.

Some of you might have heard this quote from George Eliot.  We can perhaps let the sexism slide a bit, since George really was Mary Ann, and hope that were she writing today she would do so under her own name and without the male pronoun:

The important work of moving the world forward does not wait to be done by perfect men.

Of course, maybe she was thinking that it would be the perfect women that did it, right?

Let us by all means strive to be perfect. But let us not let our failings in that area mean that we do not continue to try ourselves and support those who battle beside us.