Econtalk podcast with Brian Nosek

I have a long commute to work. It takes me about 90 minutes each way if I take public transportation, about an hour if I take the car. There are some personal reasons why I live so far away from work, but I am neither sad nor stressed about it. Those 2-3 hours I spend on the road are wonderful times in which I either catch up on my TED Talks, Jon Stewarts and Colbert Reports or, when I’m in the car, listen to podcasts.

One of these podcasts is Econtalk with Russ Roberts. It covers more than the pure money economics of “Planet Money“, but is far more scientifically rigorous than “Freakonomics“, which is more entertainment than anything else. The podcast is quite libertarian, with a healthy dose of skepticism towards the government, but it isn’t preachy or even biased. Russ does a good job of moderating the issues, he’s always well prepared and I like his insights into all aspects of economics. He’s made me a believer in Hayek (and Mises to some extent) and taught me to appreciate the nuances of Friedman’s policies. His podcasts with Mike Munger are entertaining, but also some of the most conflicted I have ever felt while listening to a podcast.

And because Econtalk is about the social sciences in general, not just about finance, a few weeks ago he had Brian Nosek on to talk about his latest paper: Scientific Utopia: II – Restructuring Incentives and Practices to Promote Truth Over Publishability. I was very impressed with this podcast, not only because of the interesting topic, but because I see many of these traits among my astronomer friends, even though we are in the hard, natural sciences, not the social sciences. If you can spare an hour, I implore you to listen to the podcast; I think it is very important to be aware of our unconscious biases when we are doing science and finding publishable results.

The main and salient point that Nosek raises is that we as scientists genuinely try to do good science, but that does not mean we aren’t vulnerable to the incentives for being successful at science. Read: we need to get published, so we have a hidden bias to pursue publishable results (journals will not publish null results). I mean, if I get data at a telescope, I’m expected to publish a paper on that data. Massaging some scientific insight out of it can often be quite difficult, but the incentive to publish is there.

Russ reads aloud two quotes from the paper and I think they are so good that they bear repeating:

The real problem is that the incentives for publishable results can often be at odds with the incentives for accurate results. This produces a conflict of interest. The conflict may increase the likelihood of design, analysis and reporting decisions that inflate the proportion of false results in the published literature.

Publishing is also the basis of a conflict of interest between personal interests and the objective of knowledge accumulation. The reason? Published and true are not synonyms. To the extent that publishing itself is rewarded, it is in scientists’ personal interest to publish regardless of whether the published findings are true.

The sentence that published and true are not synonyms is the heart of the problem, and a pretty depressing idea, too. We like to see ourselves as truth-seekers, but personal interests can derail this. Incentives matter; certain kinds of results are valued more than others. We are subtly influenced to take the path that is most beneficial to our career, i.e. to seek out results we can publish. But does that push our analysis in a certain direction?

Nosek identifies 9 tricks or “things we do” where our bias shows:

  • Leverage chance by running many low-powered studies rather than a few high-powered ones
  • Uncritically dismiss failed studies as pilots or as methodologically flawed, but uncritically accept successful studies as methodologically sound
  • Selectively report studies with positive results and leave out studies with negative results
  • Stop data collection as soon as a reliable effect is obtained
  • Continue data collection until a reliable effect is obtained
  • Include multiple independent or dependent variables and report only the subset that worked
  • Maintain flexibility in design and analytical models, including trying a variety of data exclusion or transformation methods
  • Report a discovery as if it had been the result of a confirmatory test
  • Once a reliable effect is obtained, do not do a direct replication; shame on them!
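Two of those items – stopping as soon as an effect appears, or continuing until one does – can be demonstrated with a little simulation. This is my own illustrative sketch, not from Nosek’s paper: we flip a fair coin (so there is no real effect at all) and see how often a simple z-test comes out “significant” when we peek at the data after every batch, versus testing only once at a fixed sample size.

```python
import math
import random

def z_significant(successes, n, threshold=1.96):
    """Two-sided z-test of a coin against p = 0.5 (normal approximation)."""
    z = (successes - 0.5 * n) / math.sqrt(0.25 * n)
    return abs(z) > threshold

def run_study(rng, optional_stopping, batch=20, max_n=200):
    """Collect fair-coin data in batches; return True if 'significant'."""
    successes = n = 0
    while n < max_n:
        successes += sum(rng.random() < 0.5 for _ in range(batch))
        n += batch
        if optional_stopping and z_significant(successes, n):
            return True  # stop as soon as the effect looks "reliable"
    return z_significant(successes, n)  # fixed N: test only once, at the end

def false_positive_rate(optional_stopping, trials=2000, seed=42):
    rng = random.Random(seed)
    return sum(run_study(rng, optional_stopping) for _ in range(trials)) / trials

# The data are pure noise, so every "effect" is a false positive.
# Peeking after every batch clearly inflates the nominal 5% error rate.
print(false_positive_rate(optional_stopping=False))  # near the nominal 0.05
print(false_positive_rate(optional_stopping=True))   # noticeably higher
```

The point is not the exact numbers (batch size and thresholds are made up), but that the very same noise, analyzed with a flexible stopping rule, produces “discoveries” far more often than the stated significance level promises.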

Now you may say: Tanya, that is all fine and dandy, but it only applies to the social sciences. Well, unfortunately no. I have seen the above over and over again in astronomy, too. When some observation does not agree with the model, the scientist who proposed it suddenly dismisses it; many projects are lying around in drawers because “the discovery” didn’t pan out. But even worse, when an observation begins to even show signs of agreeing with the proposed model, it gets sent to a prestigious journal, even if there isn’t critical statistical mass – hey, it’s a pilot study, but it’s *really* important!

What do we do? Fix the journal / peer review system? Journals have their own sets of incentives that encourage such publications. They want attention and prestige, to be at the forefront of innovation. So we can’t do that. Should we maybe fix the university system? We can’t do that either, because here we face the same challenges: universities gain prestige the same way journals do, so they don’t have an incentive to change either. Nosek suggests instead starting from the bottom up, from the practices of the individual, accountable scientist, rather than from the top.

At the boundaries, we will have risk; we will be wrong a lot. That’s what science is supposed to do. But what happens is that by this design the discovery component suddenly outweighs the other side of science, verification, which is kind of boring. Verification is just as important, but not as exciting! We value innovation more. Just last week this was my Facebook status update:

“We had a big discussion at our conference dinner yesterday about science, its history and its impact. Yes, there was wine flowing :). One person in the group stated matter-of-factly: “the biggest driver in science is discovery!”. Fortunately, I was not the only one who disagreed, and vehemently at that. We need to get away from this thinking – science is incremental! It’s important that we assign glory to the people checking results, improving statistics, interpreting data, improving on some technique etc. just as much as to the people who discover something “first”!” – with a link to an ode to incremental research.

By the way, the person in the group is a brilliant scientist; I was rather perplexed by this attitude, but I am finding more and more that it is prevalent in astronomy. We will face a big challenge in trying to rebalance this. Can we find some incentives to make verification more interesting and provocative? It’s gonna be difficult. By the way, verification is not attacking the original scientist; disagreements over a subject do not mean that science is broken – that’s actually how science works! You gather some evidence here, somebody tries to disprove it there, and then you converge on a result. Well, if that’s how science works, won’t it self-correct? Isn’t discovery then still the main driver? Only eventually; it takes a very, very long time! That’s not OK! We can correct and clarify quicker and more efficiently.

So what are the suggestions to counter that thinking? In the podcast they delve into a nice anecdote about a study claiming that people act slower whenever they are primed with old age. When the study couldn’t be replicated, the original scholar just scowled: “well, you didn’t do it right!”. In the original paper they even reproduced the result, so obviously there was some methodology that was giving them those results. That is still psychology, but even in astronomy we have our own ways of doing things. What is a speck of background variability to one person is a 2.5-sigma Lyman-alpha blob at extremely high redshift to somebody else. Some people have great observing ability, positioning the slit exactly right; some are masters at getting that last little photon out of the data. But how reproducible is the science gotten from that photon? There is often so much subtlety and nuance! Failing to replicate also does not mean that the original result is false!

So the recipe is actually quite simple: be accountable! Describe every detail of your methods. I like that people are now putting their scripts online, e.g. via the VO or GitHub, to really make their methods transparent and accessible. Some people provide diaries to their colleagues / collaborators on what they do (I know, I have, and I find those of others extremely helpful as a sort of cookbook); if we could make this even more open, it would be great – documenting your workflow is of real value! Every researcher does more research than he/she actually publishes. There may be diamonds in the rough there if all that data is open, as opposed to the biased representation in the published literature.

This way of opening up methods also raises, and could correct, another issue: when you write a proposal, you already take a confirmatory stance. To correct for the strong expectation of a result, you can present in advance the tools you will use to analyze the data. This is why simulations of data in proposals are encouraged. When the data come in, all you have to do is run them through the already developed tools and simply confirm or deny your hypothesis, without fiddling, adjusting or even fudging the output. Registering the analysis software in advance reduces your degrees of freedom, but makes you fairer, even if the data may not look as pretty. Better yet, have two competing camps work on an analysis script together!
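As a toy illustration of that pre-registration idea (entirely my own sketch, with made-up function names, numbers and threshold): the analysis function is written and frozen before the data arrive, so that afterwards there is exactly one thing left to do.

```python
def registered_analysis(flux_values, detection_threshold=5.0):
    """Pre-registered test: is the mean flux detected above the
    threshold significance? Frozen before observing; no post-hoc
    fiddling with cuts, bins or thresholds allowed."""
    n = len(flux_values)
    mean = sum(flux_values) / n
    # unbiased sample variance, then the standard error of the mean
    var = sum((x - mean) ** 2 for x in flux_values) / (n - 1)
    sigma_mean = (var / n) ** 0.5
    significance = mean / sigma_mean
    return {"significance": significance,
            "detected": significance > detection_threshold}

# When the (hypothetical) data come in, there is exactly one step:
new_data = [2.1, 1.8, 2.4, 2.0, 1.9, 2.2, 2.3, 1.7]
result = registered_analysis(new_data)
print(result["detected"])  # True: the frozen test either fires or it doesn't
```

The numbers and the detection criterion here are invented; the design choice that matters is that the function’s signature and threshold were fixed before any data existed, which is exactly what removes the researcher’s degrees of freedom.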

Last, but not least, there is some fame and glory in shooting studies down – but only for the really famous ones. For the simple data analysis paper, we are often met with hesitation: “Why do you feel the need to question that result?”. But it actually is quite doable for high-impact studies. This way one doesn’t constantly replicate everything and can still discover things, too, but at least the most “high impact” results get validated.

These are issues that have been of concern for years. Scientists don’t want to waste their time on things that aren’t true, so obviously they want to get at the heart of the problem. It’s actually great that people are looking critically at the research methods scientists use. Even just knowing that we might have a skewed view of published results is valuable in and of itself. So if you made it this far, I have now made you aware of yet another unconscious bias we carry around and need to account for. I leave you with a funny xkcd comic that a commenter on the podcast linked to – very relevant to the discussion.

Thinkshop 9 on Galaxy surveys using IFUs

My computer crashed! Now, with a new computer, I am quickly posting the blog entries that I had either written up or had in the pipeline. Please excuse the lateness…

Last week the 9th Thinkshop took place in Potsdam, with the theme “Galaxy surveys using Integral Field Spectroscopy: Achievements and Opportunities“. These so-called Thinkshops are conferences that my institute, the Leibniz-Institut für Astrophysik Potsdam (AIP), organizes about once a year. They focus on a specific subject near and dear to the institute’s own research and try to get the best researchers in that field to share their expertise.

This time it was about 3D spectroscopy and IFU surveys focusing on galaxies and their evolution (the program also links to some of the presentations that were given). Considering that my research group is called “3D Spectroscopy”, you’d think that I would have been on the LOC, presented 8 posters and given a long talk on the subject. But, nah! I didn’t even register. Part of it is that the instrument I am working on, and with which I will actually do 3D science (MUSE), is still under construction and will go on the Very Large Telescope in mid-2014. The other part is that I feel such a broad subject as galaxy surveys, even under the IFU umbrella, is too broad, and the individual contribution would get lost in the shuffle.

However, I was thankful that I was able to attend two sessions. I want to focus this blog post on the subject of surveys, mainly because AGN/feedback might not be of that much interest to a broad audience, but also because I really feel that, with all these new instruments coming online, there might be a paradigm shift occurring in how we do observational astronomy. I will talk a bit about the different talks that were presented that day. It was a bit overwhelming, so this blog post also helps me gather my thoughts on that session, trying to read up on it on the web. Times will be exciting in the future, for sure!

September 12th – Session: Projects and Surveys

I really liked this Wednesday morning session. While relatively little science was discussed when presenting these surveys, it is amazing what kind of statistically complete samples are coming up. We are truly moving away from gazing at our favorite handful of objects with an IFU and toward large 1,000-10,000 galaxy surveys that try to be as unbiased as possible in their selection criteria, beyond the limits imposed by the instrument that defines them (field of view, wavelength range, resolution, etc.).

CALIFA – Sebastian Sanchez

One of the biggest surveys already underway: about 800 nearby galaxies (z=0.005-0.03) selected from SDSS. Its observations are done with PMAS in PPAK Wide Field mode, with about 1700 spectra per galaxy. One of the things that sets it apart is that it has no bias in the types of galaxies it observes, as opposed to e.g. SAURON, which only observed ellipticals. The survey has observed about a third of the whole sample, and the first data release is expected in November 2012. So this really constitutes a complete IFU survey of galaxies in the local Universe – and hey, who knows, maybe something new will come out of it, just like the slow/fast rotator dichotomy found in the SAURON galaxies. Let’s keep our eyes peeled for the early results!

SAMI – Julia Bryant

What is special about SAMI is that it uses hexabundles. These are optical fiber bundles with independent cores (here 61), with a >75% fill factor and <0.5% cross-talk. Each core has a 1.6″ diameter and the whole bundle a 15″ diameter. Currently there are 13 bundles, which are placed with the 2dF positioner, but in the future HECTOR will want to place hundreds of bundles – a sort of 3-dimensional 2dF, maybe. The main science drivers of SAMI are very similar to CALIFA’s, only that they want to observe about 3000 galaxies over the next few years (they only just had first light). While the resolution might be a bit coarse, I think the statistical nature of this huge sample will be truly impressive.

MANGA – Kevin Bundy

MANGA stands for “Mapping Nearby Galaxies at APO”; it gathers individual BOSS fibers into big fiber bundles. Well, we heard that CALIFA is doing about 800 galaxies and SAMI will do 3000, so why not go for 10,000, right? As such, it would provide THE z=0 benchmark! However, this survey is not expected to get underway before sometime in 2014, and there are still enormous technical and characterization challenges to be overcome (e.g. biases from having IFUs of different sizes).

WEAVE – Reynier Peletier

no webpage

This was a really early presentation of the next-generation IFU for the William Herschel Telescope. It aims to be the complement to 4MOST in the North. It would have about 1000 1.3″ fibers over a 2 degree field of view, enabling it to do coarse IFU science. It is envisioned to work together with the blind HI Apertif survey on large-scale science. Since the project is still in an early development phase, many specifications were still open, but it was interesting to see what is in the pipeline for the future.

The DISK-MASS Survey – Marc Verheijen

This was a summary of the Disk-Mass project, started here at AIP, which tries to measure the mass in 40 spiral galaxies beyond simple rotation curves. It is because of this idea that the PPAK Wide Field Mode was installed on the PMAS instrument. IFU technology, along with ancillary data such as HI maps, was needed to disentangle the different components contributing to the mass (gas, stars and dark matter). The science results are fairly uncontroversial: the gas velocity dispersion remains constant, the stellar velocity dispersion declines (exponentially for almost all galaxies) and the dark matter profiles resemble NFW. Well, at least we’re glad the Universe is still in order.

HETDEX – Gary Hill

This is a huge survey, in preparation for years, which will blindly image the sky. We (I am part of the science team) expect to see a million Lyman-alpha emitters, another million OII emitters, plus a huge number of random other objects. I am actually very interested in the random AGN population found this way. The 150 VIRUS fiber bundles used for HETDEX work in the 3500-5500 Angstrom wavelength regime, enough to find a significant number of Lyman-alpha emitters to test the properties of Dark Energy, which is the main driver for this survey. Unfortunately HETDEX has been plagued by delays, but we expect the survey to get underway in 2013/2014, and the Pilot Survey has already been quite successful. While MUSE is deep, HETDEX is wide!

HECTOR – Joss Bland-Hawthorn

no webpage

There wasn’t that much focus on HECTOR, the instrument, in this talk. It is basically an extension of SAMI with many more hexabundles, carefully placed with a starbug positioning system (shown with a really neat movie, which I can’t find on the web right now). Its ultimate goal is a 3D view of 100k galaxies.

However, Joss really went into the philosophy of these huge surveys. He quoted T.H. Huxley – “The great tragedy of science – the slaying of a beautiful hypothesis by an ugly fact.” One can look at that in two ways: first, that the great thing about science is that we don’t just believe in beautiful things, we believe in provable theories. But secondly, that something weird happening in your favorite galaxy does not necessarily mean that something like Lambda CDM is suddenly disproved. For the Universe this means that one can only judge the big picture by looking at large numbers of objects. And that means doing huge surveys, not just the 20-30 object surveys done with IFUs up until now.

MUSE – Lutz Wisotzki

Considering how entrenched I am in this project, it is hard to give an unbiased view of the presentation given here. The basic characteristics of the instrument were given: a 300×300 spaxel cube covering the full optical range at 2.5 Angstrom resolution over a one square arcminute field of view. But it is hard to really bring home what makes this instrument so special, how awesome it will be. When you are used to 8″x8″ fields, the enormous step to 1′x1′ is hard to fathom. People were fixated on the AO capabilities (it will have AO-enhanced seeing with 4 laser guide stars, improving the seeing by a factor of two). But beyond that is the enormous cosmological volume the survey will cover. You will definitely hear me rave about this instrument some more in the future!

KMOS – Natasha Förster-Schreiber

KMOS will be the new near-IR IFU survey spectrograph, to be installed on the VLT next year. It combines the MOS concept with the IFU concept in the near-IR and will replace SINFONI, going from one galaxy at a time to 24 mini-IFUs in a 7′x7′ field of view. Unfortunately the talk focused so much on the science done with SINFONI in the SINS survey that there was little time to really go into KMOS, the instrument. However, the science goals for KMOS will be similar – doing large surveys of relatively high redshift (z~2) galaxies.


Been testing a bit…

Almost a day late on this one – it isn’t really Monday anymore, but I totally overslept this one. Watching my daughter compliment my noodles with bolognese sauce tends to do that to you; it turns me into a fool who forgets stuff I had proposed to do… 🙂

Anyway, this week I want to continue the theme of “university education”. I know, it might be boring for those who are more interested in travel (pictures, yay!), conferences (pictures of people, yay!) and gossip… err, news from the world of science. Don’t worry, plenty of travel is coming up for me in the next few weeks. A conference on integral field spectroscopy in galaxy surveys is happening even as we speak, so I’m keeping my eyes and ears open for that one.

However, last week we had another two days of workshops on “Testing and Mentoring”. Combined with the fact that over the summer, together with a professor colleague and mentor, I’ve been grading oral presentations and testing some students one-on-one (well, two-on-one to be precise), this was quite pertinent.

Yesterday we had two oral tests scheduled. These are usually 20-30 minute affairs. The subjects for these particular tests related to basic astronomy concepts: we ask you about a topic for about 8-10 minutes, going deeper and deeper with the questions until either the time is up or we find a barrier where you might need some “hints” or “encouragement”. The theoretical construct behind this is quite similar to Bloom’s taxonomy of learning in the cognitive domain. In terms of didactics it is basic stuff, and you have probably done it yourself subconsciously, but I wanted to put it up on the blog as a reminder for myself, too. The table below is by Donald Clark, from his website, which also has links to the original research paper and further reading on the subject.



Knowledge: Recall data or information. Examples: Recite a policy. Quote prices from memory to a customer. Knows the safety rules.

Key Words: defines, describes, identifies, knows, labels, lists, matches, names, outlines, recalls, recognizes, reproduces, selects, states.

Comprehension: Understand the meaning, translation, interpolation, and interpretation of instructions and problems. State a problem in one’s own words. Examples: Rewrites the principles of test writing. Explain in one’s own words the steps for performing a complex task. Translates an equation into a computer spreadsheet.

Key Words: comprehends, converts, defends, distinguishes, estimates, explains, extends, generalizes, gives an example, infers, interprets, paraphrases, predicts, rewrites, summarizes, translates.

Application: Use a concept in a new situation or unprompted use of an abstraction. Applies what was learned in the classroom into novel situations in the work place. Examples: Use a manual to calculate an employee’s vacation time. Apply laws of statistics to evaluate the reliability of a written test.

Key Words: applies, changes, computes, constructs, demonstrates, discovers, manipulates, modifies, operates, predicts, prepares, produces, relates, shows, solves, uses.

Analysis: Separates material or concepts into component parts so that its organizational structure may be understood. Distinguishes between facts and inferences. Examples: Troubleshoot a piece of equipment by using logical deduction. Recognize logical fallacies in reasoning. Gathers information from a department and selects the required tasks for training.

Key Words: analyzes, breaks down, compares, contrasts, diagrams, deconstructs, differentiates, discriminates, distinguishes, identifies, illustrates, infers, outlines, relates, selects, separates.

Synthesis: Builds a structure or pattern from diverse elements. Put parts together to form a whole, with emphasis on creating a new meaning or structure. Examples: Write a company operations or process manual. Design a machine to perform a specific task. Integrates training from several sources to solve a problem. Revises a process to improve the outcome.

Key Words: categorizes, combines, compiles, composes, creates, devises, designs, explains, generates, modifies, organizes, plans, rearranges, reconstructs, relates, reorganizes, revises, rewrites, summarizes, tells, writes.

Evaluation: Make judgments about the value of ideas or materials. Examples: Select the most effective solution. Hire the most qualified candidate. Explain and justify a new budget.

Key Words: appraises, compares, concludes, contrasts, criticizes, critiques, defends, describes, discriminates, evaluates, explains, interprets, justifies, relates, summarizes, supports.

So this taxonomy of learning is generally quite helpful in devising goals for your lecture: you might want to start at the very beginning, the pure knowledge base, and let the students end up making their own judgements about the current state of knowledge (uh… something like research 🙂 ). Here is an example of how that would work on something simple like Kepler’s laws.

Remembering: Cite the three laws, write down the formulas.

Understanding: Draw a picture with the planets relating to Kepler’s laws. What is an ellipse? What is the semimajor axis? Where is the focus? What is meant by the enclosed area?

Applying: Ask a simple extrapolation question or a small calculation. For example: if the velocity of a planet circularly orbiting the sun at a radius r is v, what would it be at 2r?

Analyzing: Extrapolate Kepler’s laws into other parts of astronomy. For example, Pluto’s mass was only calculated when its moon Charon was discovered in 1978. How? Why? Which one of Kepler’s laws would you use for that?

Evaluating: A bit more general, the student needs to gather knowledge from other lectures and/or formulas to make judgement calls. How are Kepler’s laws relevant in the motions of stars around the center of the galaxy? Why would they not necessarily apply? What needs to be considered for something to have “Keplerian motion”? What kind of rotation curve would you expect for a star having Keplerian motion around the center of the galaxy? What does it mean that galaxies show flat rotation curves?

Creating: Write a proposal for exactly the kind of research on stellar motions in the center of our galaxy that Andrea Ghez’s and Reinhard Genzel’s groups have been doing to calculate the mass of the black hole there.
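For reference, here is how the quantitative items above work out – standard textbook results, sketched by me, not part of the exam questions themselves:

```latex
% Applying: for a circular orbit, gravity supplies the centripetal force
\frac{G M m}{r^2} = \frac{m v^2}{r}
\quad\Longrightarrow\quad
v(r) = \sqrt{\frac{GM}{r}},
\qquad
v(2r) = \frac{v(r)}{\sqrt{2}} \approx 0.71\, v(r).

% Analyzing: Kepler's third law applied to Charon's orbit (semimajor
% axis a, period P) yields the mass of the Pluto--Charon system
M \simeq \frac{4\pi^2 a^3}{G P^2}.

% Evaluating: Keplerian motion around a central mass falls off as
% v \propto r^{-1/2}; a flat rotation curve (v = const) instead implies
M(r) = \frac{v^2 r}{G} \propto r,
% i.e. the enclosed mass keeps growing with radius even where the
% visible light does not -- the classic dark matter argument.
```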

Obviously the last two points would take too much time; they would be much more part of homework, perhaps a final project – though there have been exceptional students that you can lead into that kind of thinking even in an oral examination. What is important is that you can see in this simple example how the taxonomy builds and how you can find interesting questions to lead you through the oral exam without making it a simple question-and-answer game. This applies to written examinations as well – the trouble (for the examiner mostly, lol) comes when the first level is missing, but there is all sorts of knowledge of the later levels.

Anyway, now that you have the test all set up – you’re all prepared, you have trained yourself in the mannerisms of the presentation, you want to appear professional but positive, you have a glass of water ready, some pen and paper, everything for the student and you to have a conversation – you need to grade the test.

Ah well, here is where the funny part comes. The first candidate never showed up. At first we were worried he might be wandering the offices at the university (we work in a research institute and are only associated with the university for teaching duties; our offices are off-campus), but the secretary at the university did not see him at the agreed-upon time either. The second candidate was clearly intelligent (she followed logical arguments and actually deduced a satellite orbit during the exam), but she just didn’t study – like, actually sit down and think about the material. She was just happy that we gave her the lowest passing grade, and off she went. This left me a bit sad, when it shouldn’t. Everybody has a right to put certain classes on the back burner; simply because one is intelligent, one does not “owe it to the world” to succeed or do hard things. But still…

I also have to say that the workshop itself last week was utterly boring and the teacher a bit incompetent. He brought a lot of knowledge to the table, but I felt he couldn’t apply it. It is weird: this is now the second time I’ve complained about a lecturer on this blog, yet the first training sessions on didactics at the university were really good. I guess I was pampered and now have quite high standards. We also had the misfortune that, due to scheduling conflicts, our group turned out to be quite small and filled with teachers from the natural sciences (physicists, chemists, biologists, etc.). As such, some of us were actually yearning for more structure… more truths… I mean, yes, we acknowledge there is personality in every subject, even in math, but the teacher, a psychologist, seemed almost set on invalidating any fact presented as absolute truth. Ah well, it’s not easy teaching scientists :D. I got a lot of great insights from my fellow participants, though: how they deal with certain situations familiar to all of us. Plus the anecdotes – oh, the funny test answers (if you have any fun ones, please do share!), the logically impossible excuses (the fifth death of a grandmother in one semester).

It was a nice respite from my overly research driven activities around me at the moment, but alas, now it’s back to QUASARS!

E-Teaching / E-Learning

Last week I attended another round of teaching resource courses within the “Senior Teaching Professionals” program. The program is designed to help people in their early teaching phases, like postdocs and junior professors, become better and more efficient educators within today’s modern university. So far I have really enjoyed the program and the people attending. It is refreshing to meet people from all sorts of disciplines, but also grounding to hear that so many of the problems we have with students are similar. It also fills me with joy that they generally like teaching and that they, too, find the positive response of a few students such a gratifying reward that it makes them go on.

This week we talked about e-learning and e-teaching. While the past courses associated with the “Senior Teaching Professionals” program have been top notch, I can’t say that about this one. Nevertheless it gave me a lot to think about, and it even inspired some new ideas I might use in some lectures.

So let’s quickly get the things I didn’t like out of the way. The professor mentoring us this time was an old web pioneer, having worked on the development of HTML at CERN with Tim Berners-Lee. After a 15-year stint in the New Economy, he returned to academia a few years ago, where he now deals with the web in education. For those amazing credentials, his course was quite chaotic. There were a lot of buzzwords, but not that much substance. Also some, well, shall I say, outdated concepts to my mind, like Second Life (seriously? My husband actually owns the T-shirt pictured on the right from a now defunct site). He did mention that not all e-tools are applicable to all lectures; e.g. a blog instead of a final written project is just not doable in subjects like math and physics, where typing in the LaTeX formulas can be a much larger hassle for beginning students than the communication aspect is worth. However, I feel he should have shown successful blog examples instead of directing us to WordPress and encouraging us to “play around”.

Now, the stuff that I liked or that sparked some ideas in my mind:

– I was not yet acquainted with the Moodle platform we have at our university, but it was thoroughly introduced, along with its functionality, during the first day. It offers ease of use, all the necessary tools preinstalled and automatic / immediate access for all students enrolled in your course. The drawbacks are that the interface is a bit outdated (no drag & drop for wikis, e.g.) and, well, that it is a closed environment that nobody from outside will ever be able to access. I think it is a powerful platform that can be immensely useful, but probably not one I will use in my class, since I have no qualms about, say, installing a forum or wiki platform on my homepage.

– The main point of the professor was communication. But in my mind it goes beyond that: it’s about accountability to your students – giving them the opportunity to acquire the best information out there if they wish to do so, and giving them the tools to do that. Be it the simple stuff of putting your lectures online and linking to interesting and relevant articles, or the idea of letting them confer amongst themselves when I don’t have the time to respond to every e-mail or inquiry.

– Lots of people have spent an enormous amount of time and energy putting resources out there, and most of them are free. Now it’s up to you to figure out whether they are applicable – that is the challenge! In the days where Udacity and Coursera are perhaps shifting the paradigm of learning online, we need to remember, among all the blogs, wikis, forums, hangouts, etc., that personal communication makes for the real “aha moments”. So amid all the accountability, don’t forget offline accessibility either.

What will I implement in the end, beyond putting lectures, homework problems and solutions on the web? I think a wiki-style summary of the lecture would be good, maybe with a comment function. That way a student can point out a relevant article or something I missed, but he/she can also gripe about something. Please, connect with me, leave comments, write to me on my various interfaces; I try to be as accessible as possible. I will probably NOT apply many social network things beyond that; (un)fortunately our advanced astrophysics classes are small enough that I can invite you for a coffee for anything needing discussion beyond a quick comment blurb or link.