Wednesday, July 31, 2013

MOOCs and Missing the Point


I am not a fan of MOOCs, especially as the next step in higher education. My latest test is that I am finishing a course taught by an Aussie. I diligently followed all the lectures through number 4. It was a frustrating waste, and so were the handouts. So I changed strategies. All I did was look at the exam for the week and then do what I would normally do: search the literature and prepare an answer. Instant best grade and no waste of time trying to understand the Aussie accent or the lecture notes, which rambled.

So perhaps all we need to do is set out exams, let the students find out how to answer the questions, and the problem is solved? Yes? No. You have not shown the students other ways of thinking. You have just reinforced the way they do it now, which may not be bad, but the "aha" moments all too often do not come from such intellectual inbreeding.

Now the Chronicle of Higher Education notes:

Last year, a former Princeton University president, William G. Bowen, delivered the Tanner Lectures at Stanford, continuing a long tradition of college leaders' using the top floors of the ivory tower to speak difficult truths about academe.

When the dot-com craze was sweeping the nation, back in 2000, Bowen—an author in the 1960s of the original "cost disease" diagnosis of labor-intensive industries—kept his eyes on the evidence. He didn't yet see reason to believe that colleges could use technology to save money. But another decade of progress changed his mind. "I am today a convert," he said. (The lectures were published this year by Princeton University Press as Higher Education in the Digital Age.) Bowen's random-trial-based research suggests that "online learning, in many of its manifestations, can lead to at least comparable learning outcomes relative to face-to-face instruction at a lower cost."

I would suspect the good former president may never have taken one of these courses. There is nothing more important than the interaction in a class where one gets a glimpse of a different direction. MOOCs for the most part are canned lectures with no feedback and no way to ask a question, other than a time-delayed discussion forum, which all too often becomes an ersatz Facebook blather page. One can evaluate the value of a posting simply by seeing whether the person identifies themselves. All too often they are Anonymous.

Thus, unlike the esteemed president, I am totally unimpressed. However, the questions were good and helped focus me on issues I would not otherwise have considered, though many of the questions I would never have asked anyhow!

Tuesday, July 30, 2013

Amazon Shills

There are certain books which attract reviewers who push one side or the other. The Crawford book on cable companies is a prime example. The one-star reviews are clearly cable shills. The five-star reviews are clearly the Progressive left-wing lobby who want the Government to control everything, that is, until they do not control the Government.


In Tech Dirt there is a somewhat reasonable, albeit poorly researched and a bit one-sided, piece commenting on the one-star reviews. The negative hits on my review were clearly the result of the Tech Dirt article. There is a cadre of left wing (wing-nuts) who believe that the Government should supply everyone with a 1 Tbps link, no matter where they are, at no cost. Frankly these are the folks who may in my opinion represent the core that leads to the destruction of the United States.

The Tech Dirt article states:

Astroturfing -- the process of a faux "grassroots" effort, often set up by cynical and soulless DC lobbyists pretending to create a "grassroots" campaign around some subject -- is certainly nothing new. It's been around for quite some time, and it's rarely successful. Most people can sniff out an astroturfing campaign a mile away because it lacks all the hallmarks of authenticity. A separate nefarious practice is fake Amazon reviews -- which have also been around for ages -- amusingly revealed when Amazon once accidentally reassociated real names with reviewers' names to show authors giving themselves great reviews. Over time, Amazon has tried to crack down on the practice, but it's not easy.

So what happens when you combine incompetent astroturfing and fake Amazon reviews? Check out the reviews on Susan Crawford's book, Captive Audience: The Telecom Industry and Monopoly Power in the New Gilded Age. Now, I should be clear: while I respect Crawford quite a bit, and often find her arguments compelling and interesting, I found Captive Audience to go a bit too far at points, and felt that the book lost a lot of its persuasive power in really overstating the case. We agree that the broadband market is not even remotely competitive, but we disagree on the solution to that. Still, I think the book is very much worth reading, and an important contribution to the discussion on broadband/telco policy. 


Now, in a previous post I reviewed the book. The review was up for some four months and generally got 80-90% helpful votes. Then someone in the pro-Crawford or pro-cable camp must have read it, and they went to work giving it dozens of "not helpful" votes within hours. It is clear that the action was coordinated and deliberate, intended to cram down the review.

This book in my opinion is of poor quality, and I indicated why I felt it to be so. I am no fan of the cable companies, Comcast in particular. I was a Group President at Warner Cable a few decades ago, so I have had a seat on both sides. I was also a Senior VP at NYNEX, now Verizon, so I understand the telcos, and in my own company I built one of the largest fiber networks in the world. But expertise seems to have no value in the cyber world of game playing. The anonymous reviewers and critics are useless shills in the world of information.

Attacks on Amazon Reviews: The Progressive Brown Shirts

Crawford wrote a book entitled Captive Audience: The Telecom Industry and Monopoly Power in the New Gilded Age. This is a difficult book to review, not because of some of its conclusions, with which I am in hearty agreement based upon hands-on experience, but because the author all too often steps well outside her area of expertise and opines on things which are just wrong.

Before continuing, there is one other observation: it appears that anyone who disagrees with this author gets slammed with "unhelpful" votes in what appears to be a blatant attempt to push any negative reviews out of view of a potential reader. I initially noticed this happening to my review within the first few hours, so the reader should be advised as regards the comments. Just an observation. After a few months of positive response I noticed dozens of negative hits coming fast and furious; rather than commenting on facts, this is a way to push the facts off the table. Perhaps one should examine the historical precedents for this technique and weigh the votes accordingly. After a deliberate attack by the book's followers on Amazon, I removed the book review and let it stand here.

However, her conclusions have merit despite, in my opinion, the confusion of her argument.

1. "Cable Companies are bad". She has some very valid points here as she demonstrates through the vehicle of the Comcast and NBC merger. She argues that such a merger should never have happened. One could provide substantial grounds for preventing it, most on antitrust issues, but they were never truly approached by the current administration. The reasons why is a tale unto itself.

2. "Fiber is the only way." Here I argue she is clearly wrong and is so since she does not understand the technology. Since this is a key point in her argument one wonders why she did not at least reach out to find better support and understanding.

3. "Government is the best market regulator." This is an extreme position which has mixed, at best, results. In fact it has been clear in the technology space that the Government is the worst regulator.

Let me address the above points and relate them to the text itself:

1. Wireless has substantially more capacity than the author understands.

2. The cost of fiber is dominated by local costs such as franchise acquisition, costs of pole attachments, and the delay costs of laying the fiber.

3. There exists a significant body of law, the antitrust laws, which can and should be used to manage the industry, not just regulation.

4. Cable companies are monopolies for the most part and should be regulated as such.

Let me now provide some details contained within the book specifically supporting the above:

On p 78 the author speaks of the abandonment by Verizon of fiber deployment. Why did Verizon abandon its buildout? Frankly there are two reasons. First were the exploding legal costs and delays in getting local franchises. These were exacerbated by the local cable companies but facilitated by the local towns, which often did not understand the economics of competition; they just asked for more, as they were advised to by the incumbent cable operators. Second, and this is a critical element, was the success of wireless in expanding bandwidth efficiency. Namely, from only 1 bps/Hz a decade earlier, they were now at almost 10 bps/Hz, and they could see ultimately even another order of magnitude increase. This focus on wireless was most evident in the succession to the CEO position, with the wireless head taking the helm. Thus it was to some degree a problem with the incumbent, but it also reflected an understanding that the wireless alternative was more than viable.

On p 90 there is an interesting discussion regarding the "interstate access charges." In fact these were the interconnect fees. The author refers to the Prodigy effort, but that effort was doomed from the start by the massive overhead put on it by IBM, while at the same time it faced the overhead of AT&T. The access charge issue is a simple one. At that time there were local phone companies and long distance ones.

The local companies demanded and received a fee for interconnection to the local network. Even though the local companies were separately paid by the customer, they were allowed by the FCC to impose this charge on third parties such as AOL or Prodigy. Fortunately the FCC abandoned this stance. The author seems not to have fully understood this issue.

On p 95 the author tries to outline the history of online capabilities using AOL and Time Warner as an example. In fact it began in 1978 with Warner Cable and the QUBE system, the first two-way cable system that allowed interaction and online purchasing. Thus Warner, and perforce Time Warner, had been developing this for almost two decades. In the early 1980s Warner Cable developed the first "Electronic Shopping Mall," a two-way video-on-demand system built as a joint venture between Warner, Bank of America, and GTE, with Bell Atlantic and DEC participating. That effort collapsed when Warner ran into financial difficulties. Chase Bank and others did the same during the videotex period. The author appears to posit this as a sudden event involving Time Warner and AOL when in reality there had been many trials, tests, and attempts.

On p 125 the author states that Edison invented the telegraph. What of Morse? Perhaps some fact checking of simple facts would help.

On p 129 and onward the author refers to Sen. Franken so many times one wonders why. The book was not written by Franken, and based upon his public record he was both new and definitely not an expert in regulatory issues and technology. This continual referencing becomes a distraction.

On p 133 there is a discussion of the new channels being cash cows. However, there is a very serious issue here. The cable companies bundle up packages of programs which they also own and demand that anyone carrying one carry the full package, at premium prices. The consumer gets the full sports package and pays for it whether or not they have ever watched a single sports event. This is the major failing of the FCC and the FTC. Legally this is akin to tying, a practice prohibited by the antitrust laws. But to date the DoJ has never acted upon this, nor has the FTC.

From p 156 on the author delves into the cable versus wireless issue, and here she is well out of her depth. It is a pity, because this area is a significant one for possibilities. Let me first outline the wireless argument and then return to the text:

1. Wireless capacity can be measured by the number of bits per second (bps) it can deliver to a user.

2. The user's demand in bps depends on the application and on how many applications the user runs. For example, HDTV has been a big user of bandwidth.

3. Now two things have occurred technically over the past ten years. First, bandwidth efficiency, measured in bps/Hz, has increased from 1 bps/Hz to 10 bps/Hz. At the same time, the data rate required for video has collapsed, going from 100 Mbps down to 4 Mbps. Thus supply, in bps/Hz, has grown while demand, in Mbps, has shrunk. Namely, we can now easily use wireless for HDTV.

4. The acquisition of spectrum by the wireless companies has continued and now provides almost universal service. Wireless does not require franchises or pole attachments, and can be deployed in short order.

5. Wireless efficiency, now at 10 bps/Hz, is anticipated to increase to 100 bps/Hz. That means that 20 MHz of spectrum could provide a 2 Gbps channel to a single user, and with multibeam antennas it can do so for a plethora of users. This makes wireless a direct competitor to fiber, and at a tenth the cost! (A back-of-the-envelope sketch of the arithmetic follows this list.)
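To make the arithmetic explicit, here is a minimal back-of-the-envelope sketch. The 20 MHz channel, the 10 and 100 bps/Hz efficiencies, and the roughly 4 Mbps HDTV stream are the figures used above; everything else is illustrative, not a claim about any particular carrier.

```python
# Back-of-the-envelope wireless capacity: capacity = bandwidth * spectral efficiency.
def channel_capacity_bps(bandwidth_hz: float, efficiency_bps_per_hz: float) -> float:
    return bandwidth_hz * efficiency_bps_per_hz

MHZ = 1e6
today = channel_capacity_bps(20 * MHZ, 10)     # 200 Mbps in a 20 MHz channel at 10 bps/Hz
future = channel_capacity_bps(20 * MHZ, 100)   # 2 Gbps in the same channel at 100 bps/Hz

# At roughly 4 Mbps per compressed HDTV stream, even today's figure carries dozens of streams.
print(f"today: {today / 1e6:.0f} Mbps, future: {future / 1e9:.0f} Gbps, "
      f"HDTV streams today: {today / 4e6:.0f}")
```

With hundreds of MHz of licensed spectrum rather than a single 20 MHz channel, the same multiplication shows how a carrier could exceed a gigabit per user.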

On p 160 the author again reinforces her lack of technical understanding and capabilities. She states:

"When people want to download a lot of data, say to make a video call, they overwhelmingly opt for high speed wired connections."

Perhaps she has not been made aware of the iPad.

This distortion continues throughout this chapter; she does it again on p 161.

On p 251 she states:

"Will wireless help America reach the president's goal of one gigabit to every community? No."

The answer is yes, and since the wireless companies have hundreds of MHz, not the 20 MHz assumed above, they can well exceed that.

On p 258 she describes the franchises as being exclusive. In fact almost all were non-exclusive. The problem was the cost of an overbuild.

On p 263 she demands: "For starters most Americans should have access to reasonable priced 1-Gb symmetric." Now I assume she means 1 Gbps, not 1 Gb; it is a rate, not a total quantity.

On p 265 she begins her argument for moving to a utility model. She states, "To do this though American needs to move to a utility model." Frankly, the issue should be one of bundling, or tying, and the control is the existing antitrust laws. The problem with the utility model is all too well known. The FCC controlled, and was controlled by, AT&T before divestiture. The result was very slow development of technology and capability. The utility model sets prices based on a return on investment; namely, the provider is guaranteed a profit based on invested capital, and its costs are covered no matter how inefficient. The result is a capital-intensive and bloated system. Better would be a real competitive market where the barriers to entry are not enforced by the Government, but where the Government enforces the antitrust laws already on the books.

On p 267 she also makes statements regarding the costs of fiber. Based upon my experience doing this, her numbers are categorically wrong. The most significant cost not included is the franchise acquisition cost, often in excess of $1,000 per sub, plus the costs of pole attachments and the delay costs associated with dealing with local regulators.

On p 267 she further states, "The government standardizes, regulates, provides tax subsidies, and puts price supports in place every day." One can just imagine the Government standardizing wireless or broadband structures. It has no technical depth, and furthermore the politics that would surround such an effort would be unimaginable. The Government should just stand apart from standards. Let the technical people deal with that, and live or die by their decisions.

On p 269 she gets into the Internet discussion. Again, for some reason she uses Franken as a foil, but it is a distraction. The fact is that indeed ARPA, specifically IPTO, developed the early Internet deployment in the 1970s; in fact I ended up with the task of deploying the international circuits for IPTO. Through the early 1980s the effort slowed somewhat, but with the help of Kahn and Cerf the IETF was formed and began an open development of what could be called standards, albeit very flexible ones. Then the DOD abandoned it and spun off a separate network, and the result almost went nowhere, but at the end of the 80s we saw academic networks such as NYSERNET evolve and the NREN come forth. Thus the Internet's history is a mixed bag of public and private parentage, and the bright line alluded to by the author is without merit.

The book is worth reading but only, in my opinion, if one can work through the mire of the author's statements for which she has no basis or those which are just outright technically in error.

The classic book on telephone-industry change is Coll's Deal of the Century, outlining the breakup of AT&T. Coll is a brilliant writer and deals with facts he both understands and can explain. The author of this book had such an opportunity, but she clearly went well beyond her ken, and the result is that between the facts and opinions sit prognostications based on fact-less presumptions. The issue she is focusing on is truly an important one and needs as much public understanding as possible. The cable companies have secured for themselves a protected niche and have further vertically integrated in a manner which late 19th century antitrust minds would find abhorrent. This is ever so true in view of the channels they control: information and communications.

Monday, July 29, 2013

Overdiagnosis and Death

A recent focus on the "over diagnosis" of cancer has been highlighted in JAMA and the NY Times. The Times states:

The concern, however, is that since doctors do not yet have a clear way to tell the difference between benign or slow-growing tumors and aggressive diseases with many of these conditions, they treat everything as if it might become aggressive. As a result, doctors are finding and treating scores of seemingly precancerous lesions and early-stage cancers — like ductal carcinoma in situ, a condition called Barrett’s esophagus, small thyroid tumors and early prostate cancer. But even after aggressively treating those conditions for years, there has not been a commensurate reduction in invasive cancer, suggesting that overdiagnosis and overtreatment are occurring on a large scale.

That is a simple way to say that we know only after the fact what is aggressive and what is indolent, not before the fact. The classic example is prostate cancer. Of course we do not want to treat indolent prostate cancer, and of course we do want to treat aggressive prostate cancer. But how do we tell? There are new tests which purport to assist that process, but in my opinion they do so in a weak manner.

The JAMA piece states:

An ideal screening intervention focuses on detection of disease that will ultimately cause harm, that is more likely to be cured if detected early, and for which curative treatments are more effective in early-stage disease. Going forward, the ability to design better screening programs will depend on the ability to better characterize the biology of the disease detected and to use disease dynamics (behavior over time) and molecular diagnostics that determine whether cancer will be aggressive or indolent to avoid overtreatment. Understanding the biology of individual cancers is necessary to optimize early detection programs and tailor treatments accordingly. The following recommendations were made to the National Cancer Institute for consideration and dissemination.


Yet we are not there yet. We understand which genes may cause aggressive disease, but we are at a loss regarding the epigenetic characteristics, at least for now.

JAMA continues:

The original intent of screening was to detect cancer at the earliest stages to improve outcomes; however, detection of cancers with better biology contributes to better outcomes. Screening always results in identifying more indolent disease. Although no physician has the intention to overtreat or overdiagnose cancer, screening and patient awareness have increased the chance of identifying a spectrum of cancers, some of which are not life threatening. Policies that prevent or reduce the chance of overdiagnosis and avoid overtreatment are needed, while maintaining those gains by which early detection is a major contributor to decreasing mortality and locally advanced disease. The recommendations of the task force are intended as initial approaches. Physicians and patients should engage in open discussion about these complex issues. The media should better understand and communicate the message so that as a community the approach to screening can be improved.

The recommendation should be to better understand what the true telltale imprints of cancer are. DCIS may or may not be bad, HG PIN may also be so, and melanoma in situ may be just a wandering melanocyte. At what point do we jump to attention and attack?

Sunday, July 28, 2013

Ad Hoc Propter Hoc

There is a recent paper lauded by the left which in essence states:

"People who have money have offspring who succeed financially disproportionately to those whose parents are not that well off and thus to equalize the outcomes we should tax those who have succeeded so as to enable the offspring of those who have not to get an equal chance."

The paper's conclusion is:

In this paper, we explore the possibility that equalizing individuals’ economic outcomes may help to equalize their children’s opportunities: that is, when poor parents have more disposable income, their children’s performance improves and they have greater opportunity to succeed. 

We study the effect that this intergenerational connection has on optimal tax policy, which will take advantage of this relationship to shape the ability distribution over time. But exactly how it will do so depends on complex interactions between natural ability and the returns to investment in human capital. Ours is the first paper we know of to model this complexity and derive policy implications.

We characterize conditions describing optimal tax policy when children's abilities depend on both inherited characteristics and parental (financial) resources. On the intratemporal margin, we highlight competing effects of this endogeneity. If parental resources have greater marginal effects on the children of low-skilled parents, then optimal distortions may be smaller at low incomes because of their positive effects on overall tax revenues and the incentives of high-skilled parents.

On the other hand, larger distortions at low incomes have a benefit in encouraging preceding generations to invest in their children's ability, which pushes in the other direction. In the end, the implications for optimal marginal distortions are ambiguous. On the intertemporal margin, we show that optimality requires a more sophisticated understanding of the cost of raising social welfare through transfers across generations, in particular including the effects of one generation's resources on future generations' tax payments and utilities....

Of course, future research may be able to improve our understanding of the tax policy studied in this paper. For example, when a panel dataset of sufficient duration allows us to link data on parents' and children's wages, this will allow estimates of the intergenerational effect of parental income on parent-child wage transitions. Incorporating other dimensions of parental influence is another natural next step.

We have shown (in the Appendix) that parental leisure versus work time does not seem to exert an important influence in this case, but one might study how the composition of parents' available resources (i.e., as disposable income or in-kind, such as education) affects the results. Such analyses may have implications for a broader class of policies that, like the taxes in this paper, could be used to affect, rather than merely respond to, the dynamics of the ability distribution.

Somehow they have left out the genetic facts: that dumb parents often give rise to dumb offspring, that drug-addicted parents, no matter what the gene pool, will not provide an equal opportunity for the offspring, that single-parent offspring are disadvantaged by the loss of income, and the list goes on. As one who deals in genetics, I find that genetics is often the dominant factor, and success is all too often delimited by it. But the authors seem ignorant of that issue.

One can state a counterexample for every such success. Take the Koch brothers, MIT grads but with a father who set them up. Then take O'Malley, no rich father, a Manhattan College grad, and just as successful. Was parental income redistribution the determinant? Hardly; it was competence and drive. I can think of dozens of my MIT MITES minority students, all successful, despite in many cases low-income parentage. Then again, one can think of the generation who made it, the generation who spent it, and the generation who lost it. Genes go just so far.

But taxing those who achieve to redistribute it to those who do not is not the answer.

The answer is simple: scholarships based solely upon academic performance. The old NY State Regents Scholarships. You take a test and your grade determines whether you get one or not. No issues regarding your volunteer work, class projects, or advanced placement exams. Just one test; anyone can try it, and if you are in the top, say 10%, you get a free ride to college. And the college must accept you. That is the answer. Hang out the gold ring, make it fair, simple, and independent of anything else. Even rich kids can apply.

Saturday, July 27, 2013

More Thoughts on the FED's Balance Sheet

The above is the FED's Assets as of July 26, 2013. Still ramping up and for some serious reasons. Let us take a look at the most significant elements below:
Clearly we have really only two to deal with so let's look at them. Namely the Treasury holdings and the Mortgage Backed Securities.
This shows that the FED continues to buy tons of this stuff, now amounting to over $4 trillion. This is the most significant part of the asset base. When this starts to unwind we will have significant problems. The new FED Chair should be asked what to do about this mess. Are we to stop it, and if so, when do we unwind it and at what cost?

This will be the challenge for the next decade!

Thursday, July 25, 2013

Correlation and Causation

The Cancer Epidemiology journal reports on a correlation between a woman's height and her risk of post-menopausal cancer. They state:

Height was significantly positively associated with risk of all cancers (HR = 1.13; 95% CI, 1.11–1.16), as well as with cancers of the thyroid, rectum, kidney, endometrium, colorectum, colon, ovary, and breast, and with multiple myeloma and melanoma (range of HRs: 1.13 for breast cancer to 1.29 for multiple myeloma and thyroid cancer). These associations were generally insensitive to adjustment for confounders, and there was little evidence of effect modification. This study confirms the positive association of height with risk of all cancers and a substantial number of cancer sites.

The causal argument is that height means more active growth factors, and more active growth factors mean a higher chance of cancer.

Boy is that a stretch. Take melanoma, something for which I have a modicum of understanding (see my Draft book on Melanoma Genomics). Is height a causative agent? That in my opinion goes beyond being just a stretch. Frankly I cannot think of a single pathway, epigenetic, ECM, or other factor for which that works. 

The problem, all too often, is that correlation is not causation. They both may have a similar cause, but that is left as an exercise for the reader. Although it gets great play from the Press (see the NY Times):

Height can be influenced by a number of factors beyond genetics. The amount and type of foods consumed in childhood can influence height, and childhood nutrition may also play some role in cancer risk. A higher circulating level of a protein called insulin-like growth factor, which can be influenced by factors like exercise, stress, body mass index and nutrition, is also associated with both increased height and an increased cancer risk.

They found that for every 4-inch change in height, there was a 13 percent increase in risk for developing any type of cancer. The cancers most strongly associated with height were cancers of the kidney, rectum, thyroid and blood. Risk for those cancers increased by 23 to 29 percent for every 4-inch increase in height.
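Taken at face value, those ratios compound multiplicatively under the usual proportional-hazards assumption. Here is a purely arithmetical illustration of what a 13 percent increase per 4 inches implies; it is my sketch of the quoted numbers and says nothing about causation:

```python
# Illustration only: how a hazard ratio of 1.13 per 4-inch height increment compounds
# under a proportional-hazards assumption. Not a causal claim.
def relative_hazard(height_diff_inches: float, hr_per_4in: float = 1.13) -> float:
    return hr_per_4in ** (height_diff_inches / 4.0)

for inches in (4, 8, 12):
    print(f"{inches} inches taller -> {relative_hazard(inches):.2f}x the baseline hazard")
# 4 -> 1.13x, 8 -> about 1.28x, 12 -> about 1.44x
```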

I would not worry too much about this issue. You see I am 6'3".

Recession Statistics Update 2013

From time to time we examine the Recession Stats from the St Louis Fed. They are not too heartwarming.
First, Industrial Production is lagging the average.
Second, Income is the worst factor. People are not being paid. This is clearly the worst recovery from this perspective.
Then as we look at Employment, it too is slow. Nothing new here except that it is defining a new bottom.
Retail sales are on par, which means people are spending but not investing.
The GDP is defining a new bottom as we have seen. Now let us examine the elements of the GDP:
Personal consumption is low, also defining a bottom.


Private Investment is on par.
Government spending is the lowest yet. One wonders what is happening here. Then we have Exports:
And finally Imports:
The news is not good some five years after the bottom. Then again there were no real expectations from the team in place.

Yield Data

The steepest yield curve was back in 2010. We see it creeping up again, low short term and growing long term. This will hit bonds a bit.
Here is a summary across time and one can see the increase.
Here is the spread, and it is starting to get back to where it was a few years ago. Still lots of room, but there is substantial movement.


Sunday, July 21, 2013

The MOOCs: Real or a Passing Fad?

The Economist has a piece on the MOOCs, those online university-like courses.

The article states:

But Anant Agarwal, the boss of EdX, reckons the MOOC providers will be more like online airline-booking services, expanding the market by improving the customer experience. Sebastian Thrun, Udacity’s co-founder, thinks the effect will be similar in magnitude to what the creation of cinema did to demand for staged fiction: he predicts a tenfold increase in the market for higher education.

Sceptics point to the MOOCs’ high drop-out rates, which in some cases exceed 90%. But Coursera and Udacity both insist that this reflects the different expectations of consumers of free products, who can browse costlessly. Both firms have now studied drop-out rates for those students who start with the stated intention of finishing, and found that the vast majority of them complete the courses.

I have tried, and am still trying, some of these courses, on both Coursera and EdX. I now have four under my belt: I finished one, dropped two, and am in process with another. My experience can be summarized as follows:

1. They have a lot to learn about teaching. EdX has a system which is reminiscent of software circa 1975. It is inflexible, and if you have any question about what they are asking you have no way to find out how to answer it. They have a lot to learn from Google. EdX appears to have been designed by Microsoft.

2. I have done two Coursera courses; one taught by a Scot and the second an Aussie. Frankly I cannot understand at least half of what is said. They speak as if they were in class with local students and not dealing with a global class.

3. Coursera is now trying to monetize itself by charging for a certificate. If you just want to learn, it is still free. Would I pay for any of these? Not yet. I pay for professional courses from the Mass Med Society and HMS, but they are real and frankly better than any of the MOOCs.

4. EdX does not have a business model, but it has a following. My problem is that the software must be much more adaptive. Strange to say, since it originated from the MIT CS group, but then again these folks in my experience often have the arrogance of the Chosen. If your mission is the world, you should and must adapt.

5. One of the most interesting parts of these courses is the discussion fora. I am not a user of Facebook, frankly it is too intrusive, so this is as close as I get. The comments are written by ego-driven, arrogant, and mindless morons. I have never seen such blather. They snipe at each other and make senseless remarks. One wonders what benefit they are. On the other hand, it is a window into the minds of those who follow the material. I have also seen the classic "teacher's pet" writing how wonderful the instructor is. What good is this? There are issues to be remedied, and if one remarks about them the antibodies surround you and the cells consume you in blather.

Thus the MOOCs need focus, they need professionalism, they need control, and they need a great deal of software improvement, especially EdX. I would not worry about MIT or Harvard, yet.

Friday, July 19, 2013

The Heat and History

The NY Times has a short piece on the hottest summer on record, not now but in 1953. They state:

In 1953, New York baked for 12 all-but-unbearable days in a row. From Aug. 24 to Sept. 4, the high temperature in Central Park was never less than 90 degrees, and the nights seemed almost as warm. Consolidated Edison, which prepared a PowerPoint presentation on past heat waves in anticipation for this one, calculated that the average temperature for those 12 days was 95 degrees. The peak was a scalding, scorching 102 on Sept. 2. 

I remember that summer quite well. We lived in our new post-WW II Cape Cod house, and I had the bedroom on the second floor with the window opening onto the garage next door, about 4" away or so, and thus there was no circulation of air. I spent nights sleeping on the concrete in the back yard on WW II cots. There was no air conditioning, and we had no car at the time; we went everywhere by bus, or we walked. No one on Staten Island went to the beaches; they were polluted daily by New Jersey sewage.

In 1953 there was no place to get away from the heat; you just sat there, and by day twelve or so you just started to wilt. Unlike Thailand, where it hits 90 in the daytime but drops to 70 at night, in this New York heat wave the heat just sat there and stayed at 90 all the time.

What amazed me was one of the comments:

Fascinating to look at old photos like the one accompanying this one. People were so thin in the olden days.

Those were the days when we did not have a McDonald's, no Starbucks, no Twinkies, because we had little extra cash. Pizza was a once-a-summer treat. Yes, people were thinner. It was not genetic then, nor is it genetic now. People just ate less then and eat more now. That is what amazes me about the obesity debate: it is just consumption. Global warming notwithstanding.

Finally, one should remember also that during the same period of the early 1950s we had multiple hurricanes. A heat wave heats the ocean and in turn sets up conditions for northeast hurricanes. So perhaps Sandy was just a prelude. Hope not, but history sets a precedent.

Wednesday, July 17, 2013

My Early Morning Job

Just for information, here is a sample of my fields of lilies. I have well over 4,000 crosses and 600 named hybrids. Summer has been hot and we are reaching 500 crosses as of today; hopefully we will make it to 600+. The daylilies love the heat, but even at 4 AM this morning it was worse than a bad day in Bangkok. Enjoy looking at the garden if you want.

Tuesday, July 16, 2013

Clinical Trials and Confusion

In a recent article in the NY Times the author states in discussing a recent ASCO meeting and a presentation on a specific drug:

The centerpiece of the country’s drug-testing system — the randomized, controlled trial — had worked. Except in one respect: doctors had no more clarity after the trial about how to treat brain cancer patients than they had before. Some patients did do better on the drug, and indeed, doctors and patients insist that some who take Avastin significantly beat the average. But the trial was unable to discover these “responders” along the way, much less examine what might have accounted for the difference... Indeed, even after some 400 completed clinical trials in various cancers, it’s not clear why Avastin works (or doesn’t work) in any single patient. “Despite looking at hundreds of potential predictive biomarkers, we do not currently have a way to predict who is most likely to respond to Avastin and who is not,” says a spokesperson for Genentech, a division of the Swiss pharmaceutical giant Roche, which makes the drug. 

The author concludes, and I wonder if he knows what he is concluding:

Part of the novelty lies in a statistical technique called Bayesian analysis that lets doctors quickly glean information about which therapies are working best. There’s no certainty in the assessment, but doctors get to learn during the process and then incorporate that knowledge into the ongoing trial. 

 We have argued that again and again. BRAF inhibitors are Bayesian in a sense. Namely we understand one of the deficiencies in a melanoma for a class of patients. We can then address that deficiency. That is in essence Bayesian, namely P[Outcome|Patient Genomic Condition].
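To make that concrete, here is a minimal, purely illustrative sketch of the kind of Bayesian updating the article alludes to. The Beta-Binomial model and the cohort numbers are my assumptions, not figures from any actual trial:

```python
# Minimal Beta-Binomial sketch: updating P[response | genomic marker] as interim
# cohorts accrue, the essence of an adaptive Bayesian design. Numbers are invented.
def update_beta(a: float, b: float, responses: int, patients: int):
    """Posterior Beta(a, b) after observing `responses` successes in `patients`."""
    return a + responses, b + (patients - responses)

a, b = 1.0, 1.0  # vague prior on the response rate for marker-positive patients
for responses, patients in [(6, 10), (9, 15), (12, 20)]:  # hypothetical interim cohorts
    a, b = update_beta(a, b, responses, patients)
    print(f"after {patients} more patients: posterior mean response rate = {a / (a + b):.2f}")
```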

In fact the development of our understanding of cancer pathways, inter-cellular matrices and their interactions, and the immune system and its control mechanism, will allow for patient, and even tumor cell, specific therapeutics.

The problem we have faced in therapeutics is that the treatments have been meat-cleaver approaches: cell cycle inhibitors which inhibited all cell cycles, causing hair loss and other debilitating effects.

Now we can understand what genetic pathways have broken down and we can address that specific problem.

The Times author also states:

In a famous 2005 paper published in The Journal of the American Medical Association, Dr. Ioannidis, an authority on statistical analysis, examined nearly four dozen high-profile trials that found a specific medical intervention to be effective. Of the 26 randomized, controlled studies that were followed up by larger trials (examining the same therapy in a bigger pool of patients), the initial finding was wholly contradicted in three cases (12 percent). And in another 6 cases (23 percent), the later trials found the benefit to be less than half of what was first reported. 

The JAMA article by Ioannidis states:

Clinical research on important questions about the efficacy of medical interventions is sometimes followed by subsequent studies that either reach opposite conclusions or suggest that the original claims were too strong. Such disagreements may upset clinical practice and acquire publicity in both scientific circles and in the lay press. Several empirical investigations have tried to address whether specific types of studies are more likely to be contradicted and to explain observed controversies. For example, evidence exists that small studies may sometimes be refuted by larger ones

This paper has received a great deal of criticism. Yet the biggest concern, in my opinion, is that it lumps all trials together. Many trials are truly gross guesses; they are in effect meat-cleaver therapeutics. The newer gene-based therapeutics totally change that perspective.

NCI Data Base for Cancer Cell Lines

The NCI has published a paper in Cancer Research. The paper states:

The NCI-60 cell lines are the most frequently studied human tumor cell lines in cancer research. This panel has generated the most extensive cancer pharmacology database worldwide. In addition, these cell lines have been intensely investigated, providing a unique platform for hypothesis-driven research focused on enhancing our understanding of tumor biology. Here, we report a comprehensive analysis of coding variants in the NCI-60 panel of cell lines identified by whole exome sequencing, providing a list of possible cancer specific variants for the community. 

Furthermore, we identify pharmacogenomic correlations between specific variants in genes such as TP53, BRAF, ERBBs, and ATAD5 and anticancer agents such as nutlin, vemurafenib, erlotinib, and bleomycin showing one of many ways the data could be used to validate and generate novel hypotheses for further investigation. As new cancer genes are identified through large-scale sequencing studies, the data presented here for the NCI-60 will be an invaluable resource for identifying cell lines with mutations in such genes for hypothesis-driven research. To enhance the utility of the data for the greater research community, the genomic variants are freely available in different formats and from multiple sources including the CellMiner and Ingenuity websites. 

On the NCI web site they state:

 NCI scientists have developed a comprehensive list of genetic variants for each of the types of cells that comprise what is known as the NCI-60 cell line collection. This new list adds depth to the most frequently studied human tumor cell lines in cancer research, molecular pharmacology, and drug discovery. The NCI-60 cancer cell panel represents nine different types of cancer: breast, ovary, prostate, colon, lung, kidney, brain, leukemia, and melanoma. In this study, the investigators sequenced the whole exomes, or DNA coding regions, of each of NCI-60 cell lines, to define novel cancer variants and aberrant patterns of gene expression in tumor cells and to relate such patterns and variants to those that occur during the development of cancer. They also found correlations between specific variants in genes such as TP53, BRAF, ERBBs, and ATAD5 and the activity of anticancer agents such as nutlin, vemurafenib, erlotinib, and bleomycin.

NIH introduced tools such as CellMiner a year ago, in the following release:

Genomic sequencing and analysis have become increasingly important in biomedicine, but they are yielding data sets so vast that researchers may find it difficult to access and compare them.  As new technologies emerge and more data are generated, tools to facilitate the comparative study of genes and potentially promising drugs will be of even greater importance.  With the new tools, available at http://discover.nci.nih.gov/cellminer, researchers can compare patterns of drug activity and gene expression, not only to each other but also to other patterns of interest.  CellMiner allows the input of large quantities of genomic and drug data, calculates correlations between genes and drug activity profiles, and identifies correlations that are statistically significant. Its data integration capacities are easier, faster, and more flexible than other available methods, and these tools can be adapted for use with other collections of data.  

CellMiner is accessible at NCI.
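As a rough sketch of the kind of gene-drug correlation screen the release describes, here is what such a computation looks like in miniature. The data are simulated and the variable names are mine; this is not the CellMiner interface or its actual data.

```python
# Toy version of a gene-expression vs. drug-activity correlation screen across a
# cell-line panel, in the spirit of CellMiner. All data here are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_lines = 60                                    # size of a panel like the NCI-60
expression = rng.normal(size=n_lines)           # one gene's expression across the lines
drug_activity = 0.5 * expression + rng.normal(scale=1.0, size=n_lines)  # one drug's activity

r, p_value = stats.pearsonr(expression, drug_activity)
print(f"Pearson r = {r:.2f}, p = {p_value:.3g}")
# A real screen repeats this over thousands of gene-drug pairs and then corrects
# for multiple testing (e.g., a false discovery rate procedure) before calling hits.
```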



Tuesday, July 9, 2013

"Ewe Whey" or Yes in Quebec

Learning a language is often a difficult task, but understanding English is near impossible. I started to understand the difference through three separate events. Once in Florence I tried my Staten Island Italian with a vendor. They stopped me and asked where I learned Italian. I told them on Staten Island. They told me I was speaking almost perfect Sicilian, a dialect not well accepted in Florence. Then there is French. I speak French, but would never be accepted in Paris; Normandy and Savoy are fine, but never Paris. I once had an investor from Quebec and had the good fortune to listen to him speak with those in Paris. Then it became clear. On my next visit to Quebec I started to hear the differences more clearly.

Then of course there is England. Once I met a colleague in a hotel near Buckingham Palace, down from Victoria Station. Well, it was Fawlty Towers redux. Not only was it incomprehensible, but the staff were just bumping into one another.

Now where does this take us? Well, to the MOOCs. I have been trying out a few, and one was taught by a woman from Edinburgh. She was somewhat understandable except for the word vitamin, which was not v-eye-tah-mihn but vit-teh-men. Try that on for size. Now I am trying one with a woman from Melbourne, and I really do not understand Australian; I have a better grasp of Thai. This woman speaks quickly, uses a strong Aussie accent, and has zero ability to "teach." She clearly understands her material, and she is trying, but this is why MOOCs will never make it: no feedback, no one to ask, "What the hell did you just say!" That's New York for "we really do not get your accent." Being in New York it is not that we do not have accents; why, I can take a cab ride and still communicate. Thank God we do not have any Aussie cab drivers; no one would ever get where they wanted to go.

Monday, July 8, 2013

Global Warming: Have a Contest

For some reason I have seen several blog discussions again on taxing carbon emissions. Let us assume that there is global warming; I see it in my early-blooming daylily species, so I have to believe my own eyes. So set that part of the argument aside. Now, is that good or bad? Well, for me it has allowed better hybridizing and is perhaps a benefit. I have over fifty new hybrids ready this year, thanks in part to global warming.

Now Posner goes on, suggesting:

As long as global warming is gradual, and catastrophic effects are not felt for the next 50 to 100 years, there is room for hope that geoengineering will limit or even reverse global warming. Ways of trapping the carbon dioxide produced by burning oil, coal, natural gas, and forests may be developed or sunlight may be blocked by injecting sulfur compounds into the atmosphere, which would reduce the amount of sunlight that reaches the earth (though could create other forms of pollution—sulfur dioxide, for example, creates acid rain). Or safe means of piping carbon dioxide emitted from electrical generating plants underground might be developed. There are even suggestions for “whitening” roofs (on a very large scale) to increase earth’s reflection of the sun’s rays. 

But there is no guaranty that global warming will be gradual. It may turn abrupt; there are a number of examples in earth’s geological history when this happened. For example, a period called the “Younger Dryas” at the end of the last ice age is believed to have seen an increase in the average global temperature by 7 degrees centigrade, which is 12.6 degrees fahrenheit, in only 10 years. Were that to happen again, it would be an unbelievable catastrophe. The probability of abrupt global warming cannot be estimated; but heading off even catastrophic events that are uncertain can make good economic sense if the resources required to do so are modest. 

It is always good to see a lawyer prognosticate on science, so let that stand. How fast is the climate changing? No one really knows. Is it for better or for worse? Again, net of the equation, no one knows. I suspect the Polar Bears will adapt and eat what the Black Bear around my house does. But that is a tale for another day. Now Posner proposes:

A more efficient method of limiting global warming than regulatory controls such as proposed by the President (and that as described promise to be a bureaucratic nightmare) would be a tax on carbon emissions, which I advocated in my 2005 book and which a number of countries have adopted. 

Frankly, that in my opinion is total nonsense. You see, we need electricity, we need to commute to work, we need the results of the infrastructure that creates carbon dioxide. I have made that argument against Mankiw time and again. The poor schlub who has to commute to his job in Cambridge cleaning the halls of Harvard must drive at night, with a few others, in a broken-down car, from some distance. He has no choice. The same in most cities. There is no elasticity of demand for the poor folks, so tax them?

Even more so, why in God's name tax anything? It just feeds the Government monster: more useless Government workers getting higher salaries and bigger benefits, treated to great parties and big bonuses.

My favorite economist, Frances Woolley, makes several good points in her recent post.

Then I tried the economic freedom line: "A carbon tax makes fossil fuels more expensive, so people have an incentive to consume less. The great thing about them, though, is that people have a choice about how to cut their consumption." 

"Think about the alternative," I said, "Do you really want a bunch of new regulations, people saying what you can and cannot do? This way there's no need for a big government bureaucracy, and people decide what's best for them."

Neither of these arguments made the slightest impression on my cousin. "I don't believe this talk about revenue neutrality," he said, "there won't be cuts to other taxes. The government will just take the money and spend it on useless things like your salary." 

Now, I really like the insight Frances has; it is a rare form, keen insight into the obvious, rare amongst economists. But the real point is that this is an engineering problem. Engineers face this all the time: get rid of some byproduct, efficiently. Chem Es do this for a living. So why not solve the problem rather than tax it? Even more so, I propose a contest. The Government gives a prize of, say, a billion dollars to the first entity to demonstrate a cost-effective way to eliminate carbon dioxide. Make them rich. This combines all the key elements of a free market. It solves a problem, it rewards success, it does not tax, and it does NOT add a single Government employee. They did stuff like this when airplanes were new. Why not give it a try?

Friday, July 5, 2013

Employment: Bad News and Good News

Who would have thought when I started this Blog some five years ago that we would still be in the woods. Well I did given the folks who were taking charge. So no surprises, just a lost decade or two.

Now, 7.6% is not good. Not good at all. But the percent of the population employed has increased, not to where it should be but more than what it was, and the increase is sustained at a positive level. It took 3 years to go from 42% to 43%. We need to be at 45%. That means another six years at this rate! Unless we find another Reagan.
We see a gap but a narrowing gap in the above. Again another six years to close at this rate.
The same as the above. Well just sit back and watch the corn grow.
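For what it is worth, the six-year figure above is just a straight-line extrapolation; a trivial sketch using the 42%, 43%, and 45% figures cited above:

```python
# Straight-line extrapolation of the employment-to-population ratio discussed above.
current, target = 43.0, 45.0          # percent employed now and where we need to be
pace = (43.0 - 42.0) / 3.0            # about 1 percentage point per 3 years
print((target - current) / pace)      # 6.0 more years at this rate
```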



Wednesday, July 3, 2013

How Does It Make Me Feel?

NEJM has an interesting piece on patient input. They state:

As an oncologist, when I sit with patients to discuss starting a new chemotherapy regimen, their first questions are often “How will it make me feel?” and “How did patients like me feel with this treatment?” Regrettably, this information is generally missing from U.S. drug labels and from published reports of clinical trials — the two information sources most commonly available to people trying to understand the clinical effects of cancer drugs.

This is a serious issue, and not only for oncologists. The challenge is to meet the anticipated needs of the patient. Patients respond much better when what they are told matches what happens; then there are no surprises. They can often deal with a great amount of discomfort better than with lesser amounts of uncertainty.

They continue:

The FDA has taken several recent steps toward encouraging inclusion of the patient perspective in drug development. It issued highly influential guidance on the use of patient-reported outcomes (PROs) in drug development, collaborated with the Critical Path Institute and industry to form the PRO Consortium with the aim of developing robust symptom-measurement tools, and obtained support from Congress in the fifth reauthorization of the Prescription Drug User Fee Act (PDUFA) to expand its internal expertise on the methodology of measuring PROs. (Unfortunately, allocated PDUFA funds have been withheld, which substantially impairs the FDA's ability to implement planned patient-centered programs.)

These FDA efforts are evident in the ruxolitinib label and in the label for abiraterone acetate, approved this year for metastatic prostate cancer, which describes beneficial delays in time to the development of pain and the need for opioid use. Yet in preapproval trials in patients with cancer, symptom or functional-status evaluations that meet the FDA's standards remain rare.

The FDA should clearly expand on this effort. In a sense the patient should have a peer reviewed assessment of what to expect.

These are the Folks Who Gave You the ACA

Arrogance is a human fault. Arrogance when combined with ignorance can be truly dangerous. Many who gave us the ACA were and are truly arrogant, even more so they have a hubris that exceeds what the gods would ever allow a human.

In today's NY Times one of those characters, in my opinion, states:

The employer mandate was included in the Affordable Care Act — which I helped design as an adviser to the Obama administration — in order to ensure the continuation of employer-sponsored insurance after the creation of state-sponsored health care exchanges. We didn’t want every employer to simply drop its coverage and send all its workers to the exchanges.


But complying is proving to be more difficult than expected for employers. The act requires businesses with more than 50 employees to provide coverage to their full-time workers or face a penalty of $2,000 per employee. A full-time worker is defined as someone who is paid for 30 or more hours of service a week or 130 hours a month. It sounds simple, but determining who should be counted turned out to be a nightmare.

Just read this. Sentence one boasts of his involvement. Sentence two says its implementation is a nightmare. I wrote extensively about this when it was going through the Congressional intestines. I wrote against it, and spoke to my useless Senators, to no avail.

This is merely a gimmick in the hope of saving the 2014 elections. Most Americans are just ignorant, that is, until it hits them personally. But this delay is just a delay of the inevitable. This plan is a disaster. Then again, you voted for it, folks!