<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: Two Random Thoughts on Artificial Intelligence</title>
	<atom:link href="http://www.arnoldkling.com/blog/two-random-thoughts-on-artificial-intelligence/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.arnoldkling.com/blog/two-random-thoughts-on-artificial-intelligence/</link>
	<description>taking the most charitable view of those who disagree</description>
	<lastBuildDate>Mon, 21 Dec 2020 14:00:34 +0000</lastBuildDate>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=4.0.32</generator>
	<item>
		<title>By: justin</title>
		<link>http://www.arnoldkling.com/blog/two-random-thoughts-on-artificial-intelligence/#comment-468204</link>
		<dc:creator><![CDATA[justin]]></dc:creator>
		<pubDate>Wed, 14 Sep 2016 16:18:25 +0000</pubDate>
		<guid isPermaLink="false">http://www.arnoldkling.com/blog/?p=7468#comment-468204</guid>
		<description><![CDATA[What exactly does deep learning have to do with large randomized controlled trials? Deep learning doesn&#039;t mean large datasets. The experiments can (and probably should) be analyzed without any AI or ML techniques.]]></description>
		<content:encoded><![CDATA[<p>What exactly does deep learning have to do with large randomized controlled trials? Deep learning doesn&#8217;t mean large datasets. The experiments can (and probably should) be analyzed without any AI or ML techniques.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Mike Linksvayer</title>
		<link>http://www.arnoldkling.com/blog/two-random-thoughts-on-artificial-intelligence/#comment-468195</link>
		<dc:creator><![CDATA[Mike Linksvayer]]></dc:creator>
		<pubDate>Wed, 14 Sep 2016 00:59:07 +0000</pubDate>
		<guid isPermaLink="false">http://www.arnoldkling.com/blog/?p=7468#comment-468195</guid>
		<description><![CDATA[(2) good! Turning over city streets to cars was a contentious mistake, largely forgotten https://www.researchgate.net/publication/236825193_Street_Rivals_Jaywalking_and_the_Invention_of_the_Motor_Age_Street]]></description>
		<content:encoded><![CDATA[<p>(2) good! Turning over city streets to cars was a contentious mistake, largely forgotten <a href="https://www.researchgate.net/publication/236825193_Street_Rivals_Jaywalking_and_the_Invention_of_the_Motor_Age_Street" rel="nofollow">https://www.researchgate.net/publication/236825193_Street_Rivals_Jaywalking_and_the_Invention_of_the_Motor_Age_Street</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Foobarista</title>
		<link>http://www.arnoldkling.com/blog/two-random-thoughts-on-artificial-intelligence/#comment-468193</link>
		<dc:creator><![CDATA[Foobarista]]></dc:creator>
		<pubDate>Tue, 13 Sep 2016 21:23:51 +0000</pubDate>
		<guid isPermaLink="false">http://www.arnoldkling.com/blog/?p=7468#comment-468193</guid>
		<description><![CDATA[I&#039;ve already seen the reckless pedestrian thing in Mountain View.]]></description>
		<content:encoded><![CDATA[<p>I&#8217;ve already seen the reckless pedestrian thing in Mountain View.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: ricardo</title>
		<link>http://www.arnoldkling.com/blog/two-random-thoughts-on-artificial-intelligence/#comment-468188</link>
		<dc:creator><![CDATA[ricardo]]></dc:creator>
		<pubDate>Tue, 13 Sep 2016 14:19:05 +0000</pubDate>
		<guid isPermaLink="false">http://www.arnoldkling.com/blog/?p=7468#comment-468188</guid>
		<description><![CDATA[&quot;Colin Allen suggested that if self-driving cars are programmed to stop for pedestrians, and pedestrians know this, pedestrians could become more reckless and aggressive.&quot;

In turn suggesting that a utilitarian* AI might decide to deliberately take one of these reckless pedestrians out, pour décourager les autres...

*(with respect to human utility, that is).]]></description>
		<content:encoded><![CDATA[<p>&#8220;Colin Allen suggested that if self-driving cars are programmed to stop for pedestrians, and pedestrians know this, pedestrians could become more reckless and aggressive.&#8221;</p>
<p>In turn suggesting that a utilitarian* AI might decide to deliberately take one of these reckless pedestrians out, pour décourager les autres&#8230;</p>
<p>*(with respect to human utility, that is).</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tom G</title>
		<link>http://www.arnoldkling.com/blog/two-random-thoughts-on-artificial-intelligence/#comment-468187</link>
		<dc:creator><![CDATA[Tom G]]></dc:creator>
		<pubDate>Tue, 13 Sep 2016 13:45:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.arnoldkling.com/blog/?p=7468#comment-468187</guid>
		<description><![CDATA[There&#039;s a fine free Coursera course on Machine Learning (by Andrew Ng, now at China&#039;s Baidu) that can teach a lot even if the (quite tough) problems are not solved.

I&#039;m quite sure that the large randomized trials of many potential customers will result in temporarily (permanently?) more effective advertising, as well as better search results.

However, for creating Artificial Intelligence to solve problems that humans aren&#039;t solving so well right now, the key need is massive training -- multiple input variations which are all &quot;correct&quot;. After this comes post-training input with &quot;correct&quot; output known, to be compared with what the AI outputs. And even after this, the AI can encounter input outside of the training data, which is then hugely misinterpreted.

In the machine learning course, one learns that techniques and algorithms are dominated by the size of the training data.

My own work-hobby is teaching an AI to become an English tutor, so I&#039;m learning more about IBM&#039;s Watson right now (lots of free info &amp; courses), but it&#039;s going pretty slowly, since I do have a day job, plus my blog-reading addiction. Getting training input for any AI is a huge undertaking - there are definite economies of scale. However, there&#039;s also a big space for open &quot;training data&quot;.

The 60 million games of Go that were used to train the AI Go player are probably somewhat or very much publicly available. Similarly for chess games.
I still haven&#039;t heard of a &quot;game playing&quot; AI that can play various poker and other card games, as well as chess &amp; Go, at a very high level.

Japanese children were filmed deliberately getting in the way of mobile robots (which were trying to serve food?) when the parents weren&#039;t watching. Robots programmed to stop for pedestrians will sometimes be blocked, and those that cannot defend themselves WILL be attacked when authorities are not looking.

Now I&#039;m thinking that if there are cameras which activate &quot;in self-defense&quot;, the non-attacking robot might capture and broadcast pictures of the offenders, including facial recognition and Facebook identification, plus (small?) fines for harassment. Perhaps merely public shaming would be enough to reduce the human anti-robot antics to a low enough level to be easily tolerable.]]></description>
		<content:encoded><![CDATA[<p>There&#8217;s a fine free Coursera course on Machine Learning (by Andrew Ng, now at China&#8217;s Baidu) that can teach a lot even if the (quite tough) problems are not solved.</p>
<p>I&#8217;m quite sure that the large randomized trials of many potential customers will result in temporarily (permanently?) more effective advertising, as well as better search results.</p>
<p>However, for creating Artificial Intelligence to solve problems that humans aren&#8217;t solving so well right now, the key need is massive training &#8212; multiple input variations which are all &#8220;correct&#8221;. After this comes post-training input with &#8220;correct&#8221; output known, to be compared with what the AI outputs. And even after this, the AI can encounter input outside of the training data, which is then hugely misinterpreted.</p>
<p>In the machine learning course, one learns that techniques and algorithms are dominated by the size of the training data.</p>
<p>My own work-hobby is teaching an AI to become an English tutor, so I&#8217;m learning more about IBM&#8217;s Watson right now (lots of free info &amp; courses), but it&#8217;s going pretty slowly, since I do have a day job, plus my blog-reading addiction. Getting training input for any AI is a huge undertaking &#8211; there are definite economies of scale. However, there&#8217;s also a big space for open &#8220;training data&#8221;.</p>
<p>The 60 million games of Go that were used to train the AI Go player are probably somewhat or very much publicly available. Similarly for chess games.<br />
I still haven&#8217;t heard of a &#8220;game playing&#8221; AI that can play various poker and other card games, as well as chess &amp; Go, at a very high level.</p>
<p>Japanese children were filmed deliberately getting in the way of mobile robots (which were trying to serve food?) when the parents weren&#8217;t watching. Robots programmed to stop for pedestrians will sometimes be blocked, and those that cannot defend themselves WILL be attacked when authorities are not looking.</p>
<p>Now I&#8217;m thinking that if there are cameras which activate &#8220;in self-defense&#8221;, the non-attacking robot might capture and broadcast pictures of the offenders, including facial recognition and Facebook identification, plus (small?) fines for harassment. Perhaps merely public shaming would be enough to reduce the human anti-robot antics to a low enough level to be easily tolerable.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
