I adapted this code, but I was not happy with the results.
I took a different approach for pronounceability: I first built a table of English words decomposed into groups of letters. My separators are diphthongs/vowels and groups of consonants. I do the same for a given domain, and I check whether each group of letters can be found in my table of decomposed English words. If every group is found, I consider the word pronounceable; otherwise, I don't.
If I take your example funy, I decompose it into f/ fu/ fun/ uny/ ny/ y.
I notice that each group is used in another English word, and hence the word is pronounceable.
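A minimal sketch of that decomposition and lookup in Python. The splitting rule here (leading fragments, three-letter windows, trailing fragments) is my reconstruction from the funy example, not the author's exact algorithm, and the tiny word list stands in for the full table of decomposed English words:

```python
def letter_groups(word, n=3):
    """Decompose a word into leading fragments, n-letter windows, and trailing fragments."""
    prefixes = [word[:i] for i in range(1, min(n, len(word)))]
    ngrams = [word[i:i + n] for i in range(len(word) - n + 1)]
    suffixes = [word[-i:] for i in range(min(n, len(word)) - 1, 0, -1)]
    return prefixes + ngrams + suffixes

# Stand-in for the table built from a real English word list
known_groups = set()
for w in ["funny", "puny", "unyielding", "canny"]:
    known_groups.update(letter_groups(w))

def is_pronounceable(word):
    """Pronounceable if every letter group also occurs in some known English word."""
    return all(g in known_groups for g in letter_groups(word))
```

With this rule, `letter_groups("funy")` yields exactly the six groups from the example, and `is_pronounceable("funy")` is true because each group occurs in one of the listed words.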
However, some groups are not that common. For example, uny is used in only six English words, including puny, unyielding, and bunyan. None of these words is very common.
On the contrary, unny (from funny) is used more often and in more common English words, such as Sunnyvale, runny, and gunny.
This thread's discussions gave me the idea to build a pronounceability index that would take into account how frequently a group of letters appears in English words, and how frequently those words appear in English books.
I assume 'funy' would have a much smaller pronounceability index than 'funny', and this index could be used to gauge how easily a name would pass the radio test.
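The index could be sketched as follows. The decomposition rule is my reconstruction from the earlier funy example, the toy word frequencies are made up, and the geometric-mean aggregation is my own choice of how to combine group frequencies, not the author's method:

```python
import math
from collections import defaultdict

def letter_groups(word, n=3):
    """Decompose a word into leading fragments, n-letter windows, and trailing fragments."""
    prefixes = [word[:i] for i in range(1, min(n, len(word)))]
    ngrams = [word[i:i + n] for i in range(len(word) - n + 1)]
    suffixes = [word[-i:] for i in range(min(n, len(word)) - 1, 0, -1)]
    return prefixes + ngrams + suffixes

def build_group_frequencies(word_freqs, n=3):
    """Weight each letter group by the corpus frequency of the words it appears in."""
    freq = defaultdict(float)
    for word, count in word_freqs.items():
        for g in set(letter_groups(word, n)):
            freq[g] += count
    return freq

def pronounceability_index(word, group_freq, n=3):
    """Geometric mean of the group frequencies; 0 if any group is unseen."""
    scores = [group_freq.get(g, 0.0) for g in letter_groups(word, n)]
    if not scores or min(scores) == 0.0:
        return 0.0
    return math.exp(sum(math.log(s) for s in scores) / len(scores))

# Toy corpus frequencies, invented for illustration only
word_freqs = {"funny": 100, "sunny": 80, "runny": 60, "puny": 2}
gf = build_group_frequencies(word_freqs)
```

On this toy corpus, 'funy' scores above zero (its uny group does occur, in puny) but well below 'funny', whose unn and nny groups are backed by several common words.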
I know it's not as straightforward as it may seem. One big problem I face is brands that have become part of our common vocabulary (think IBM, realtor, tumblr, flickr...). Clearly, IBM is not pronounceable, yet it would pass the radio test. IBMer would probably pass it too, and it's not really pronounceable either.