Can a single layer network approximate an arbitrary function? [closed]




Can a network with a single layer of $N$ neurons (where $N \le \infty$, no hidden layers) approximate any arbitrary function, so that this network's error approaches 0 as $N$ approaches $\infty$?



This question appears to be off-topic.





"no hidden layers" --> trick question
– Hong Ooi
Sep 10 '18 at 0:13





I don't understand why this is closed as off-topic. Sounds like a clear on-topic question to me. I vote to reopen.
– amoeba
Sep 11 '18 at 12:40






That said, if your arbitrary function is a function from some input into real numbers, then you must have only 1 output neuron, so it's not clear what you mean by the output $N$ approaching infinity. Your single-layer network is just a linear combination of a bunch of inputs passed through a specified nonlinearity (e.g. sigmoid). That's all. One output neuron. The number of input neurons is given by the problem; it can't be modified at all.
– amoeba
Sep 11 '18 at 12:43
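For concreteness, here is a minimal sketch (Python/NumPy, with made-up weights and inputs, not taken from the comment) of the architecture being described: a single output neuron that applies a sigmoid to one linear combination of the inputs.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical numbers for illustration: the inputs are fixed by the problem,
# and the whole "single-layer network" is one weighted sum plus a nonlinearity.
x = np.array([0.5, -1.2, 3.0])   # input features
w = np.array([0.1, 0.4, -0.3])   # one weight per input
b = 0.2                          # bias

output = sigmoid(w @ x + b)      # the network's single scalar output
print(output)
```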






@amoeba please read the original revision and you will understand.
– Firebug
Sep 11 '18 at 12:54





@Firebug That's why I edited the question, so it would conform with the rules and not be closed.
– user
Sep 11 '18 at 23:57




2 Answers



False: If there are no hidden layers, then your neural network will only be able to approximate linear functions, not any continuous function.



In fact, you need at least one hidden layer even to solve the simple XOR problem (see this post and this one).



When you only have an input and an output layer, with no hidden layer, each output is just the activation function applied to a single linear combination of the inputs (an inner product of the input with the weights), so the decision boundary is linear and you can only separate linearly separable classes.



N.B. It does not matter what your activation functions are; the point is that no neural net without a hidden layer can solve the XOR problem, since the XOR classes are not linearly separable.
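To illustrate the point, here is a small self-contained NumPy sketch (a hypothetical training setup, not taken from the answer): gradient descent on the four XOR points with no hidden layer leaves the predictions stuck near 0.5, while adding one hidden layer typically lets the network fit all four points.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])           # XOR labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# No hidden layer: p = sigmoid(X w + b), i.e. logistic regression.
w, b = rng.normal(size=2), 0.0
for _ in range(10000):
    p = sigmoid(X @ w + b)
    g = p - y                                # cross-entropy gradient w.r.t. pre-activation
    w -= 0.2 * X.T @ g
    b -= 0.2 * g.sum()
print("no hidden layer: ", np.round(sigmoid(X @ w + b), 2))   # all four stay near 0.5

# One hidden layer of sigmoid units: XOR becomes learnable.
H = 8
W1, b1 = rng.normal(size=(2, H)), np.zeros(H)
w2, b2 = rng.normal(size=H), 0.0
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ w2 + b2)
    g2 = p - y
    g1 = np.outer(g2, w2) * h * (1 - h)      # backprop through the hidden layer
    w2 -= 0.2 * h.T @ g2;  b2 -= 0.2 * g2.sum()
    W1 -= 0.2 * X.T @ g1;  b1 -= 0.2 * g1.sum(axis=0)
print("one hidden layer:", np.round(sigmoid(sigmoid(X @ W1 + b1) @ w2 + b2), 2))  # approaches 0, 1, 1, 0
```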





Comments are not for extended discussion; this conversation has been moved to chat.
– gung
Sep 11 '18 at 0:18





The first sentence is wrong (-1) because neurons in the output layer can be nonlinear.
– amoeba
Sep 11 '18 at 12:39





@amoeba That is still a very small class of functions (a nonlinearity after a single linear transformation), so while it is technically wrong, it is only because of a technicality. If you add an arbitrary nonlinear transformation, you are restricted to single-index models.
– guy
Sep 11 '18 at 14:04





@amoeba You are wrong; no matter what the neurons in the output layer are, it will never learn to separate non-linearly separable points, see the XOR example. You should not confuse the reader with wrong statements.
– user
Sep 12 '18 at 11:17





I never said anything about XOR. What I said is that if the output neuron is nonlinear then clearly the function that the neural network will represent will be nonlinear too. Example: one input neuron, $x$. One output neuron with sigmoid nonlinearity. Neural network learns $f(x) = \text{sigmoid}(wx+b)$. Nonlinear function.
– amoeba
Sep 12 '18 at 17:15




The Universal Approximation Theorem states that a neural network with one hidden layer can approximate continuous functions on compact subsets of $\mathbb{R}^n$, so no, not any arbitrary function.
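As a rough illustration of the one-hidden-layer case the theorem does cover, here is a NumPy sketch (random hidden weights with least-squares output weights; the target function, widths, and scales are made up for the example): the maximum error in approximating a continuous function on a compact interval typically shrinks as the hidden layer widens.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3.0, 3.0, 400).reshape(-1, 1)   # a compact subset of R
target = np.sin(2 * x).ravel()                   # a continuous function to approximate

# One hidden layer of tanh units with random input weights; only the output
# weights are fitted, by ordinary least squares.
for width in (3, 10, 100):
    W = rng.normal(scale=2.0, size=(1, width))
    b = rng.normal(scale=2.0, size=width)
    H = np.tanh(x @ W + b)                       # hidden-layer activations, shape (400, width)
    coef, *_ = np.linalg.lstsq(H, target, rcond=None)
    max_err = np.max(np.abs(H @ coef - target))
    print(f"hidden units = {width:3d}, max |error| = {max_err:.3f}")
```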





A single-layer network is not equivalent to a neural network with one hidden layer if I understand it correctly.
– Jason Borne
Sep 9 '18 at 17:47





If he is talking about only having the output layer, i.e. without the hidden layer, it also can't model the function set described in the theorem.
– gunes
Sep 9 '18 at 17:49





A NN with one hidden layer has a total of 2 layers, while a single-layer network looks something like this: wwwold.ece.utep.edu/research/webfuzzy/docs/kk-thesis/…
– Jason Borne
Sep 9 '18 at 17:49





The universal approximation theorems say that functions in a specified class can be approximated by networks of a specified class. They don't state what happens outside of these conditions (e.g. that other functions can't be approximated). So, I don't think this answers the question.
– user20160
Sep 10 '18 at 7:22





"A well-known theorem says X, so a more general version Y is false" is not really a proper mathematical argument.
– Federico Poloni
Sep 10 '18 at 8:03
