Archive for the 'Discovery Informatics' Category



The End of Summer (Part I)

The first summer session is over. I have a slight command of the Spanish language: I know about 200 words, have a rudimentary understanding of the grammar, and can sometimes pick out words in a conversation if it is spoken slowly enough. I assume that I passed the first course, based on an e-mail from my professor, who wrote that I “did a good exam”, and also on the fact that, so far, I am still enrolled in the second course that begins Tuesday (that is, I have not received a communication from the school telling me that I can’t take it).

The mathematical sabbatical has been a very good idea indeed. The ‘A’ that I expect from the first session will certainly help the GPA regain some of its lost value, and there is a reasonable expectation of a similar result in part 2. Plus, the realization that I can still memorize material relatively quickly is an enormous confidence booster for the expected rigors of Biology that await in the Fall term. The brain still works, if not in an abstract manner.

Staying in the Spanish milieu for this post, here is a representation of how I felt at the end of the Spring semester:

I was being gored, tossed about like a rag doll, and receiving absolutely no respect from any of my courses……..

Today, with an all but certain victory in a class, and another likely to follow, my state of mind can best be expressed with this image:

¡Patear el culo y tomar nombres! (Kicking ass and taking names!)

A little confidence is a great thing……….


The Scientific Method Pushes Back…..

In my last post, here, I linked to a very interesting article by Chris Anderson, of Wired Magazine. Anderson posited that Google is fundamentally changing science and the scientific method.

Well, it didn’t take long for the scientific community to weigh in on the issue:

From Ars Technica, the other side of the argument:

Every so often, someone (generally not a practicing scientist) suggests that it’s time to replace science with something better. The desire often seems to be a product of either an exaggerated sense of the potential of new approaches, or a lack of understanding of what’s actually going on in the world of science. This week’s version, which comes courtesy of Chris Anderson, the Editor-in-Chief of Wired, manages to combine both of these features in suggesting that the advent of a cloud of scientific data may free us from the need to use the standard scientific method.

…Overall, the foundation of the argument for a replacement for science is correct: the data cloud is changing science, and leaving us in many cases with a Google-level understanding of the connections between things. Where Anderson stumbles is in his conclusions about what this means for science. The fact is that we couldn’t have even reached this Google-level understanding without the models and mechanisms that he suggests are doomed to irrelevance. But, more importantly, nobody, including Anderson himself if he had thought about it, should be happy with stopping at this level of understanding of the natural world.

Obviously, there is a lot more, so follow the link for the full post.

I’m not a scientist; I’m a student. Nevertheless, it is fascinating to watch the dynamics of the viewpoints that arise from the inevitable conflict between orthodoxy and revolution. I suspect that the way forward in this discussion will bring a harmonic convergence of new research methods and a revision of the hallowed Scientific Method.

Correlative Analytics

Once again, Kevin Kelly explains the intersection of computer science, mathematics, large datasets, and science in a way that few can. The link will take you to the entire post, but these juicy tidbits are here to tease:

There’s a dawning sense that extremely large databases of information, starting in the petabyte level, could change how we learn things. The traditional way of doing science entails constructing a hypothesis to match observed data or to solicit new data. Here’s a bunch of observations; what theory explains the data sufficiently so that we can predict the next observation?…

In a cover article in Wired this month Chris Anderson explores the idea that perhaps you could do science without having theories.

This is a world where massive amounts of data and applied mathematics replace every other tool that might be brought to bear. Out with every theory of human behavior, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves.

Petabytes allow us to say: “Correlation is enough.” We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.

There may be something to this observation. Many sciences such as astronomy, physics, genomics, linguistics, and geology are generating extremely huge datasets and constant streams of data in the petabyte level today. They’ll be in the exabyte level in a decade. Using old fashioned “machine learning,” computers can extract patterns in this ocean of data that no human could ever possibly detect. These patterns are correlations. They may or may not be causative, but we can learn new things. Therefore they accomplish what science does, although not in the traditional manner…

My guess is that this emerging method will be one additional tool in the evolution of the scientific method. It will not replace any current methods (sorry, no end of science!) but will complement established theory-driven science. Let’s call this data intensive approach to problem solving Correlative Analytics…

Perhaps understanding and answers are overrated. “The problem with computers,” Pablo Picasso is rumored to have said, “is that they only give you answers.”  These huge data-driven correlative systems will give us lots of answers — good answers — but that is all they will give us. That’s what the OneComputer does —  gives us good answers. In the coming world of cloud computing perfectly good answers will become a commodity. The real value of the rest of science then becomes asking good questions…

This is the clearest expression yet of what I think the Discovery Informatics degree at my school can offer to those interested in these emerging fields. And remember, where science leads, business opportunities follow closely behind. There is much to be done…………….
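Kelly’s “correlative analytics” can be sketched in a few lines: scan every pair of variables in a dataset for strong correlations, with no prior hypothesis about which pairs should be related. This is only a toy illustration; the column names and data below are invented, and a petabyte-scale version would of course run on the “biggest computing clusters,” but the underlying idea is the same.

```python
# Toy "correlative analytics": surface strongly correlated variable
# pairs from a table with no hypothesis about which should relate.
import math
import itertools

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def strong_pairs(table, threshold=0.9):
    """Return (name_a, name_b, r) for pairs with |r| >= threshold."""
    hits = []
    for (a, xs), (b, ys) in itertools.combinations(table.items(), 2):
        r = pearson(xs, ys)
        if abs(r) >= threshold:
            hits.append((a, b, r))
    return hits

# Invented data: two columns track each other; the third does not.
data = {
    "gene_a": [1.0, 2.1, 3.0, 4.2, 5.1],
    "gene_b": [2.0, 4.1, 6.1, 8.3, 10.0],
    "noise":  [5.0, 1.0, 4.0, 2.0, 3.0],
}

for a, b, r in strong_pairs(data):
    print(f"{a} ~ {b}: r = {r:.3f}")
```

The algorithm finds that “gene_a” and “gene_b” move together without ever being told to look there — a correlation, not a cause, which is exactly the distinction the quoted pieces are arguing over.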

Español

Well, we’re three days into the introductory Spanish course. I’m taking the advice of one of my advisers, who counseled that I get the language requirement out of the way as soon as possible; for him, and therefore for me, that meant starting this summer. Given that the school offers only three introductory-level language courses in the summer, I had to choose among Russian, German, and Spanish. What would you have done in my shoes? I thought so…..

Gone are the quixotic notions of learning some slightly romantic, or relatively obscure, language and being able to converse (?!) with about 2 other people in a 100-mile radius, or, more likely, of busting my hump to “learn” some language while I am trying to take yet another unbelievably difficult math or computer science course. No Farsi, or Hindi, or Chinese, or even (sob) Latin, which still holds some small fascination for me after a brief exposure 45 years ago. I guess time does heal old wounds…..

No, for me, it’s just Spanish. Which, after these 3 days, looks like it’s going to be plenty challenging. I’ve already memorized about 50 words, and it appears that our professor expects the pace of memorization to pick up as we go along. It’s nose to the grindstone, again, and I’m okay with that.

At least for now, the challenge is not understanding abstract concepts and complex rules/principles. It’s just plain old hard memorization, and, frankly, I welcome the change.

This will be good practice for Biology in the Fall.

Hasta Pronto (see you soon)…..

Future Computing

My major, Discovery Informatics, is, I hope and believe, the future of computing. It is a hybrid major, encompassing programming skills, mathematics and statistics, and a cognate (an area of specialization); the acquired skill set should enable a graduate to apply it across a variety of disciplines.

As someone who spent the better part of his working life in business, it makes sense to think that I can return to that arena, ready to contribute to the corporate weal (and to earn) in a new, meaningful, and interesting way.

Articles like this provide encouragement that this bold move may yet pay off in the near term:

Workplace social networks and cloud computing mean that the need for a centralized IT department will go away. Firms will no longer need to own/maintain the boxes that they use to run their firm’s apps. With no need to touch a box, there will be no need to have the IT staff co-located with the boxes. Oh, oh — can you hear your job going away?

What does this all mean, and more importantly what should a successful IT staffer (or CIO) do today? The key to your future success is to understand how IT is going to change and what you need to do to change with it. IT is going to become much more about information and how it can be used to help the business grow and prosper. This IT function is going to leave the IT department as we know it today and will migrate into the business unit itself. What this means to you is that you need to know what your firm does, and even more importantly, how it does it. The next question will be what information is needed by the business units to improve how they do their work. This is what tomorrow’s IT staff will provide. Thanks Gartner for the peek into the future!

Can you dig it? I can………

AfterMath and Longing

One week after the last exam of the semester, I have translated to a new milieu. Early (well, earlier) to bed, later to rise, a full read of the local paper and the WSJ, and then on to a few hours dedicated to the study of Java. Lunch, domestic duties, and before you know it, it’s supper time. Afterwards, reading of the enjoyable kind.

I know this interlude will be brief.

Today, after a nice lunch downtown with my bride and a brief spin through some specialty shops, we detoured through the CofC on our way home. The Cistern is ready for the graduating seniors to “walk” for their hard-earned degrees. The setting is beautiful, dignified, and reeks of academic ambience.

I can’t wait for my turn…….

Strange Feeling

It’s kind of strange right now. It’s Sunday afternoon and I am not studying, or at least not feeling guilty about not studying. For the past year, with a few weeks off for various vacations, I have spent most of every Sunday on schoolwork. It’s kind of strange right now…….


“Life’s hard, son. It’s harder when you’re stupid.” — The Duke.

Education is a companion which no misfortune can depress, no crime can destroy, no enemy can alienate, no despotism can enslave. At home, a friend; abroad, an introduction; in solitude, a solace; and in society, an ornament. It chastens vice, it guides virtue, it gives at once grace and government to genius. Without it, what is man? A splendid slave, a reasoning savage. - Joseph Addison
The term informavore (also spelled informivore) characterizes an organism that consumes information. It is meant to be a description of human behavior in modern information society, in comparison to omnivore, as a description of humans consuming food. George A. Miller coined the term in 1983 as an analogy to how organisms survive by consuming negative entropy (as suggested by Erwin Schrödinger). Miller states, "Just as the body survives by ingesting negative entropy, so the mind survives by ingesting information. In a very general sense, all higher organisms are informavores." - Wikipedia
