Simple and lightweight C# REPL with Mono.CSharp

Recently I needed a small C# REPL: I wanted to test some code on a computer where I couldn't install Visual Studio, and the code used so many (Service) references that something like LINQPad wasn't an option either. I first tried Roslyn, but I ran into issues using it from Visual Studio 2012, and as I didn't want to spend too much time on the problem I went for Mono.CSharp.Evaluator instead. Mono.CSharp can be installed via NuGet (Install-Package Mono.CSharp) and adds only a single reference, which is a nice side effect of using Mono.CSharp instead of the Roslyn scripting API.

Simple C# REPL with Mono.CSharp

Here is the full code (github) of my simple REPL. If you end a statement with ';' it is run without printing any output; use this when you want to create objects. If you omit the ';' the expression is evaluated and the result is printed. Note that I added a reference to my own program to the evaluator so that the Factorial function can be called. Also don't forget to run the necessary using statements.

// Install-Package Mono.CSharp
using System;
using System.Reflection;
using Mono.CSharp;
namespace SimpleREPL
{
  public class ExtraMath
  {
    public static int Factorial(int n)
    {
      // naive implementation but fast enough for small n
      int result = 1;
      for (int i = 1; i <= n; i++)
      {
        result *= i;
      }
      return result;
    }
  }

  internal class Program
  {
    private static void Main(string[] args)
    {
      Console.WriteLine("Starting Simple C# REPL, enter q to quit");
      var evaluator = new Evaluator(new CompilerContext(
        new CompilerSettings(),
        new ConsoleReportPrinter()));
      evaluator.ReferenceAssembly(Assembly.GetExecutingAssembly());
      evaluator.Run("using System;");
      evaluator.Run("using SimpleREPL;");
      while (true)
      {
        Console.Write("> ");
        var input = Console.ReadLine();
        input = input.TrimStart('>', ' ');
        if (input.ToLower() == "q")
        {
          return;
        }
        try
        {
          if (input.EndsWith(";"))
          {
            evaluator.Run(input);
          }
          else
          {
            var output = evaluator.Evaluate(input);
            Console.WriteLine(output);
          }
        }
        catch
        {
          Console.WriteLine("Error in input");
        }
      }
    }
  }
}
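
For illustration, a short example session with the REPL above; the Factorial call works because the executing assembly is referenced and the SimpleREPL namespace is imported:

Starting Simple C# REPL, enter q to quit
> var numbers = new[] { 1, 2, 3 };
> ExtraMath.Factorial(5)
120
> q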

Weighted Relative Neighborhood Graph in R based on cccd::rng

The R package cccd contains a nice implementation of the Relative Neighborhood Graph (rng), but in the current version 1.5 it returns an unweighted igraph. For one of my experiments I needed the weighted version, so I've slightly changed the code to return an igraph with weights.
rng <- function (x = NULL, dx = NULL, r = 1, method = NULL, usedeldir = TRUE,
          open = TRUE, k = NA, algorithm = "cover_tree", weighted = TRUE) {
  if (is.na(k)) {
    if (is.null(dx)) {
      if (is.null(x))
        stop("One of x or dx must be given.")
      dx <- as.matrix(proxy::dist(x, method = method))
    }
    else {
      usedeldir <- FALSE
    }
    n <- nrow(dx)
    A <- matrix(0, nrow = n, ncol = n)
    if (is.vector(x))
      x <- matrix(x, ncol = 1)
    if (usedeldir && ncol(x) == 2) {
      del <- deldir::deldir(x[, 1], x[, 2])
      for (edge in 1:nrow(del$delsgs)) {
        i <- del$delsgs[edge, 5]
        j <- del$delsgs[edge, 6]
        d <- min(apply(cbind(dx[i, -c(i, j)], dx[j, -c(i, j)]), 1, max))
        rd <- r * dx[i, j]
        if ((open && rd < d) || rd <= d) {
          A[i, j] <- A[j, i] <- rd
        }
      }
    } else {
      diag(dx) <- Inf
      for (i in 1:n) {
        for (j in setdiff(1:n, i)) {
          d <- min(apply(cbind(dx[i, -c(i, j)], dx[j, -c(i, j)]), 1, max))
          rd <- r * dx[i, j]
          if ((open && rd < d) || rd <= d) {
            A[i, j] <- A[j, i] <- rd
          }
        }
      }
    }
    diag(A) <- 0
    out <- graph.adjacency(A, mode = "undirected", weighted = weighted)
  } else {
    if (is.null(x))
      stop("x must not be null")
    n <- nrow(x)
    k <- min(k, n - 1)
    dx <- get.knn(x, k = k, algorithm = algorithm)
    edges <- NULL
    weights <- NULL
    for (i in 1:n) {
      i.indices <- dx$nn.index[i, ]
      i.dists <- dx$nn.dist[i, ]
      for (j in 1:k) {
        rd <- r * i.dists[j]/2
        j.indices <- dx$nn.index[i.indices[j], ]
        j.dists <- dx$nn.dist[i.indices[j], ]
        rd <- r * i.dists[j]
        S <- setdiff(intersect(i.indices, j.indices),
                     c(i, i.indices[j]))
        if (length(S) > 0) {
          d <- Inf
          for (si in S) {
            a <- which(i.indices == si)
            b <- which(j.indices == si)
            d <- min(d, max(i.dists[a], j.dists[b]))
          }
          if ((open && rd < d) || rd <= d) {
            edges <- cbind(edges, c(i, i.indices[j]))
            weights <- cbind(weights, rd)
          }
        }
      }
    }
    g <- graph(edges, n = n, directed = FALSE)
    if( weighted ) {
      edge.attributes(g) <- list(weight=weights)
    }
    out <- simplify(g, edge.attr.comb = "first")
  }
  if (!is.null(x)) {
    out$layout <- x
  }
  out$r <- r
  out
}
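
For completeness, here is a minimal usage sketch of the modified function. It assumes the igraph, proxy and deldir packages are installed (the same dependencies cccd relies on) and that the function above has been sourced:

## weighted relative neighborhood graph of 20 random 2D points
library(igraph)                    # for graph.adjacency, simplify, E, ...
set.seed(1)
x <- matrix(runif(40), ncol = 2)   # 20 points in the unit square
g <- rng(x)                        # weighted = TRUE is the default in the version above
E(g)$weight                        # edge weights are r * d(i, j), i.e. the distances for r = 1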

Five Things We Need to Know About Technological Change by Neil Postman

What follows are my notes and highlighted sections from an excellent essay by Neil Postman. My own comments are in italics.
Source: http://web.cs.ucdavis.edu/~rogaway/classes/188/materials/postman.pdf.

Five Things We Need to Know About Technological Change

First Idea

The first idea is that all technological change is a trade-off.

This means that for every advantage a new technology offers, there is always a corresponding disadvantage. The disadvantage may exceed in importance the advantage, or the advantage may well be worth the cost.

note: costs and benefits are different for different people
action: reflect on the costs and benefits of the technology you use (plane, car, computer, smartphone, specific websites, webshops, apps, food tech, health tech, ebooks, "free" books/pdfs, skype, telephone, messenger/sms, ...)

Perhaps the best way I can express this idea is to say that the question, “What will a new technology do?” is no more important than the question, “What will a new technology undo?” Indeed, the latter question is more important, precisely because it is asked so infrequently. One might say, then, that a sophisticated perspective on technological change includes one’s being skeptical of Utopian and Messianic visions drawn by those who have no sense of history or of the precarious balances on which culture depends. In fact, if it were up to me, I would forbid anyone from talking about the new information technologies unless the person can demonstrate that he or she knows something about the social and psychic effects of the alphabet, the mechanical clock, the printing press, and telegraphy. In other words, knows something about the costs of great technologies.

Idea Number One, then, is that culture always pays a price for technology.

Second Idea

This leads to the second idea, which is that the advantages and disadvantages of new technologies are never distributed evenly among the population. This means that every new technology benefits some and harms others. There are even some who are not affected at all.

The questions, then, that are never far from the mind of a person who is knowledgeable about technological change are these: Who specifically benefits from the development of a new technology? Which groups, what type of person, what kind of industry will be favored? And, of course, which groups of people will thereby be harmed?

That is why it is always necessary for us to ask of those who speak enthusiastically of computer technology, why do you do this? What interests do you represent? To whom are you hoping to give power? From whom will you be withholding power?

I do not mean to attribute unsavory, let alone sinister motives to anyone. I say only that since technology favors some people and harms others, these are questions that must always be asked. And so, that there are always winners and losers in technological change is the second idea.

Third Idea

Embedded in every technology there is a powerful idea, sometimes two or three powerful ideas. These ideas are often hidden from our view because they are of a somewhat abstract nature. But this should not be taken to mean that they do not have practical consequences.

Perhaps you are familiar with the old adage that says: To a man with a hammer, everything looks like a nail. We may extend that truism: To a person with a pencil, everything looks like a sentence. To a person with a TV camera, everything looks like an image. To a person with a computer, everything looks like data. I do not think we need to take these aphorisms literally. But what they call to our attention is that every technology has a prejudice. Like language itself, it predisposes us to favor and value certain perspectives and accomplishments. In a culture without writing, human memory is of the greatest importance, as are the proverbs, sayings and songs which contain the accumulated oral wisdom of centuries....  The television person values immediacy, not history. And computer people, what shall we say of them? Perhaps we can say that the computer person values information, not knowledge, certainly not wisdom. Indeed, in the computer age, the concept of wisdom may vanish altogether.

note: see also Nicholas Carr (impact of Google on thinking/memory)

The third idea, then, is that every technology has a philosophy which is given expression in how the technology makes people use their minds, in what it makes us do with our bodies, in how it codifies the world, in which of our senses it amplifies, in which of our emotional and intellectual tendencies it disregards. This idea is the sum and substance of what the great Catholic prophet, Marshall McLuhan meant when he coined the famous sentence, “The medium is the message.”

Fourth Idea

Technological change is not additive; it is ecological.

A new medium does not add something; it changes everything. In the year 1500, after the printing press was invented, you did not have old Europe plus the printing press. You had a different Europe. After television, America was not America plus television. Television gave a new coloration to every political campaign, to every home, to every school, to every church, to every industry, and so on.

That is why we must be cautious about technological innovation. The consequences of technological change are always vast, often unpredictable and largely irreversible. That is also why we must be suspicious of capitalists. Capitalists are by definition not only personal risk takers but, more to the point, cultural risk takers. The most creative and daring of them hope to exploit new technologies to the fullest, and do not much care what traditions are overthrown in the process or whether or not a culture is prepared to function without such traditions. Capitalists are, in a word, radicals.

Fifth Idea

I come now to the fifth and final idea, which is that media tend to become mythic. I use this word in the sense in which it was used by the French literary critic, Roland Barthes. He used the word “myth” to refer to a common tendency to think of our technological creations as if they were God-given, as if they were a part of the natural order of things. I have on occasion asked my students if they know when the alphabet was invented. The question astonishes them. It is as if I asked them when clouds and trees were invented. The alphabet, they believe, was not something that was invented. It just is. It is this way with many products of human culture but with none more consistently than technology. Cars, planes, TV, movies, newspapers—they have achieved mythic status because they are perceived as gifts of nature, not as artifacts produced in a specific political and historical context.

When a technology becomes mythic, it is always dangerous because it is then accepted as it is, and is therefore not easily susceptible to modification or control.

What I am saying is that our enthusiasm for technology can turn into a form of idolatry and our belief in its beneficence can be a false absolute. The best way to view technology is as a strange intruder, to remember that technology is not part of God’s plan but a product of human creativity and hubris, and that its capacity for good or evil rests entirely on human awareness of what it does for us and to us.

Conclusion

And so, these are my five ideas about technological change. First, that we always pay a price for technology; the greater the technology, the greater the price. Second, that there are always winners and losers, and that the winners always try to persuade the losers that they are really winners. Third, that there is embedded in every great technology an epistemological, political or social prejudice. Sometimes that bias is greatly to our advantage. Sometimes it is not. The printing press annihilated the oral tradition; telegraphy annihilated space; television has humiliated the word; the computer, perhaps, will degrade community life. And so on. Fourth, technological change is not additive; it is ecological, which means, it changes everything and is, therefore, too important to be left entirely in the hands of Bill Gates. And fifth, technology tends to become mythic; that is, perceived as part of the natural order of things, and therefore tends to control more of our lives than is good for us.

If we had more time, I could supply some additional important things about technological change but I will stand by these for the moment, and will close with this thought. In the past, we experienced technological change in the manner of sleep-walkers. Our unspoken slogan has been “technology über alles,” and we have been willing to shape our lives to fit the requirements of technology, not the requirements of culture. This is a form of stupidity, especially in an age of vast technological change. We need to proceed with our eyes wide open so that we may use technology rather than be used by it.

Workaround object 'couleursIn' not found in R package eVenn version 2.2

When trying to define a venn diagram with custom colors in the eVenn package:

set.seed(42)
d <- data.frame(a = sample(rep(c(0,1), 20)), 
                b = sample(rep(c(0,1), 20)), 
                c = sample(rep(c(0,1), 20)))
evenn(matLists=as.matrix(d), display = TRUE, couleurs = c("#90BA6E","#956EAD","#9F5845"))

I encountered the following error:

Error: object 'couleursIn' not found

This error only occurs when you pass a vector with multiple colors to couleurs while Solid = TRUE, which is the default value. The first thing to do after getting the error is to call dev.off(), as the error will have left a connection to the png file open. To work around the error, you have to define a variable couleursIn yourself; to get venn diagrams similar to the default ones but with different colors, fill it with the same colors plus alpha values:

couleursIn <- c("#90BA6E80","#956EAD80","#9F584580")
evenn(matLists=as.matrix(d), display = TRUE, couleurs = c("#90BA6E","#956EAD","#9F5845"))

Note the 80 at the end of each color: it is the hexadecimal alpha value 128, i.e. roughly 50% opacity. This can be checked with col2rgb("#90BA6E80", alpha = TRUE).
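
If you prefer to derive the alpha variants programmatically instead of typing the hex codes by hand, here is a small sketch using base R only:

## append the hexadecimal alpha value "80" (128 out of 255, roughly 50%) to each solid color
couleurs <- c("#90BA6E", "#956EAD", "#9F5845")
couleursIn <- paste0(couleurs, "80")
col2rgb(couleursIn, alpha = TRUE)   # the alpha row should be 128 for all three colors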

Workaround missing support for as.formula in R package maxlike version 0.1-5

Today I noticed that the predict function for maxlike doesn't support models built with formulas created via "as.formula". The error I got was:

Error in predict.maxlikeFit(model, data.frame(p), ...) : 
  at least 1 covariate in the formula is not in rasters.

I sent in a pull request, but in the meantime you can work around the problem with the following wrapper around the maxlike function:

## workaround missing support for as.formula in maxlike
maxlike <- function(formula, ...) {
  m <- maxlike::maxlike(formula, ...)
  m$call$formula <- formula
  return(m)
}
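
As a hedged usage sketch of the wrapper: the raster stack predictors, the presence coordinates presence_xy and the covariate names below are placeholders for your own data, and depending on your maxlike version the rasters may need to be saved with the fit or supplied again to predict:

## build the formula dynamically and fit via the wrapper defined above
library(maxlike)
covariates <- c("elev", "precip")   # hypothetical layer names in the `predictors` stack
fm <- as.formula(paste("~", paste(covariates, collapse = " + ")))
model <- maxlike(fm, rasters = predictors, points = presence_xy)
pred <- predict(model)              # no longer complains that covariates are missing from rasters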

Workaround "n.trees" is missing in R package gbm version 2.1.1

Just a short note to anyone encountering the following error in version 2.1.1 of the gbm package:

 Error in paste("Using", n.trees, "trees...\n") : 
  argument "n.trees" is missing, with no default 

This issue will be fixed in the next version, but in the meantime I'll post my workaround. If you add the following function to your code, everything should work:


## work around bug in gbm 2.1.1
predict.gbm <- function (object, newdata, n.trees, type = "link", single.tree = FALSE, ...) {
  if (missing(n.trees)) {
    ## choose a sensible default number of trees, as gbm itself intends to do
    if (object$train.fraction < 1) {
      n.trees <- gbm.perf(object, method = "test", plot.it = FALSE)
    } else if (!is.null(object$cv.error)) {
      n.trees <- gbm.perf(object, method = "cv", plot.it = FALSE)
    } else {
      n.trees <- length(object$train.error)
    }
    cat(paste("Using", n.trees, "trees...\n"))
  }
  gbm::predict.gbm(object, newdata, n.trees, type, single.tree, ...)
}
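
And a short usage sketch, where train is a placeholder data frame with a numeric response y; with the function above defined in your workspace, predict dispatches to it and n.trees can again be omitted:

## fit a small model and predict without specifying n.trees
library(gbm)
fit <- gbm(y ~ ., data = train, distribution = "gaussian",
           n.trees = 500, cv.folds = 5)
p <- predict(fit, newdata = train)   # n.trees is chosen via gbm.perf(method = "cv")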

Notes and reflections on From here to human-level AI

On the 20th of March the Ghent strong AI meetup group will discuss the 2007 paper From here to human-level AI by John McCarthy, so I decided to highlight some parts and write up my notes and thoughts.

1. What is human-level AI

There are two approaches to human-level AI, but each presents difficulties. It isn’t a question of deciding between them, because each should eventually succeed; it is more a race.

  1. If we understood enough about how the human intellect works, we could simulate it.
  2. To the extent that we understand the problems achieving goals in the world presents to intelligence we can write intelligent programs. That's what this article is about.
    Much of the public recognition of AI has been for programs with a little bit of AI and a lot of computing.

There are some big projects and a lot of researchers attacking the human-level AI problem with one of these two approaches. The first approach, or at least part of it, is used by the American BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies) and the European Human Brain Project. The second approach, and especially everything related to deep learning, has recently had a fair amount of publicity due to human-level and super-human results on some pattern recognition tasks (image classification, face verification) and games (human-level performance on 29 Atari games). For an overview and history of deep learning you can check out the 88-page overview article by Jürgen Schmidhuber. As a side note, Jürgen Schmidhuber recently did an interesting AMA (ask me anything) on Reddit (summary from FastML).

2. The common sense informatic situation

The key to reaching human-level AI is making systems that operate successfully in the common sense informatic situation.
In general a thinking human is in what we call the common sense informatic situation. It is more general than any bounded informatic situation. The known facts are incomplete, and there is no a priori limitation on what facts are relevant. It may not even be decided in advance what phenomena are to be taken into account. The consequences of actions cannot be fully determined. The common sense informatic situation necessitates the use of approximate concepts that cannot be fully defined and the use of approximate theories involving them. It also requires nonmonotonic reasoning in reaching conclusions.

Nonmonotonic reasoning = a logic is non-monotonic if some conclusions can be invalidated by adding more knowledge (source).

Common sense facts and common sense reasoning are necessarily imprecise. The imprecision necessitated by the common sense informatic situation applies to computer programs as well as to people.

3. The use of mathematical logic

Mathematical logic was devised to formalize precise facts and correct reasoning. Its founders, Leibniz, Boole and Frege, hoped to use it for common sense facts and reasoning, not realizing that the imprecision of concepts used in common sense language was often a necessary feature and not always a bug. The biggest success of mathematical logic was in formalizing purely mathematical theories for which imprecise concepts are unneeded. Since the common sense informatic situation requires using imprecise facts and imprecise reasoning, the use of mathematical logic for common sense has had limited success. This has caused many people to give up. Others devise extended logical languages and even extended forms of mathematical logic.

Further on he notes that using different concepts and different predicate and function symbols in a new mathematical logic language might still make mathematical logic adequate for expressing common sense. But he is not very optimistic.

Success so far has been moderate, and it isn’t clear whether greater success can be obtained by changing the concepts and their representation by predicate and function symbols or by varying the nonmonotonic formalism.

4. Approximate concepts and approximate theories

Other kinds of imprecision are more fundamental for intelligence than numerical imprecision. Many phenomena in the world are appropriately described in terms of approximate concepts. Although the concepts are imprecise, many statements using them have precise truth values

He follows up with two clarifying examples, one about the concept Mount Everest, the other about the concept of the welfare of a chicken. Mount Everest is an approximate concept because the exact pieces of rock and ice that constitute it are unclear. It is nevertheless possible to infer solid conclusions from a foundation built on this quicksand of approximate concepts without definite extensions, e.g. if you have never been to Asia then you have never climbed Mount Everest. The core of the welfare-of-a-chicken problem is: is it better to raise a chicken with care and nice food and then slaughter it, or would it have a better life in the wild, risking starvation and foxes? McCarthy concludes from this:

There is no truth of the matter to be determined by careful investigation of chickens. When a concept is inherently approximate, it is a waste of time to try to give it a precise definition.

In order to reach human-level AI we'll have to be able to represent approximate concepts in a way that the computer can reason about them.

5. Nonmonotonic reasoning

6. Elaboration tolerance

7. Formalization of context

8. Reasoning about events - especially action

Human level intelligence requires reasoning about strategies of action, i.e. action programs. It also requires considering multiple actors and also concurrent events and continuous events. Clearly we have a long way to go.

9. Introspection and self-awareness

People have a limited ability to observe their own mental processes. For many intellectual tasks introspection is irrelevant. However, it is at least relevant for evaluating how one is using one’s own thinking time. Human-level AI will require introspective ability. In fact programs can have more than humans do, because they can examine themselves, both in source and compiled form and also reason about the current values of the variables in the program.

10. Heuristics

The largest qualitative gap between human performance and computer performance is in the area of heuristics, even though the gap is disguised in many applications by the millions-fold speed advantage of computers. The general purpose theorem proving programs run very slowly, and the special purpose programs are very specialized in their heuristics.
I think the problem lies in our present inability to give programs domain and problem dependent heuristic advice.

McCarthy advocates the usage of declarative heuristics and explains the concept of postponable variables in constraint satisfaction problems.

11. Psychological, social and political obstacles

In this article, McCarthy states that although the main problems in reaching human-level AI lie in the inherent difficulty of the scientific problems, research is hampered by the computer science world's focus on connecting basic research to applied problems. Artificial intelligence has encountered philosophical and ideological (religious) objections, but the attacks on AI have been fairly limited.

As the general public gets more and more acquainted with the potential dangers of human-level AI, and especially super-human-level AI, I believe the pressure against AI research will increase.

An interesting book by Nick Bostrom covering the dangers of AI is Superintelligence: Paths, Dangers, Strategies.

12. Conclusion

Between us and human-level intelligence lie many problems. They can be summarized as that of succeeding in the common sense informatic situation.

If you want to read more about the road to human-level AI and superintelligence and its possible consequences, then I can recommend this two-part article by Tim Urban on his blog Wait But Why.