For many kinds of factual questions, a large part of the project is defining the question clearly enough to make it answerable. Even experts start with simple queries -- it shows up in the logs -- and if the answers to the simple ones are good enough, they stop there.
Natural language processing of queries tends to be less than useful, because anyone who can ask a query precise enough to be processable can, and probably will, write it in a query language. In the last few years, NLP has become much more effective at analyzing documents and data sets, especially for entity extraction. I'm sure Wolfram will be using all their analytical tools on the source content; it's just that natural language queries are so attractive to reporters.
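To make the entity-extraction point concrete, here is a deliberately toy sketch: it pulls capitalized spans out of text with a regular expression. Real extraction pipelines use trained NER models rather than pattern matching; the function name and sample sentence are my own illustrations, not anything from Wolfram's system.

```python
import re

def extract_entities(text):
    """Toy entity extractor: find runs of capitalized words.

    This is only a sketch of the idea of mining candidate entities
    from documents; real systems use trained statistical models and
    handle sentence-initial capitals, acronyms, etc.
    """
    pattern = r"\b(?:[A-Z][a-z]+)(?:\s+[A-Z][a-z]+)*\b"
    return re.findall(pattern, text)

sample = "The Wolfram group is indexing Medline for genome informatics."
print(extract_entities(sample))
```

Even this crude approach hints at why analyzing documents is more tractable than parsing free-form questions: the documents hold up under shallow pattern matching, while a query has to be understood precisely to be answered at all.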
But I like the computational idea, because once you ask an interesting question, a web-wide (or even edu-wide) engine could bring together scattered content and build some interesting answers. I know the genome informatics people are using Medline as a data source, and it sounds like the Wolfram group is doing so in a much more ambitious way.
I'm looking forward to poking around in it and asking questions to see what it answers.