Chess Merit Badge – BSA Troop 55, Houston

Chess Merit Badge – Resources and Links

The next Chess Merit Badge class that I will conduct for Troop 55 Houston will be
Sunday, Oct. 26, at St. John the Divine.
Achievements for History, Basics & Strategy: 1:00 pm-3:30 pm.
Achievement to play, record, and review games: 3:30-5:30 pm.
All scouts and adults are welcome to join in the games.  
Please sign up or send an email to me: raseysm (at) wiserways.com


References:

USScouts has a good workbook.  Print it out.  Fill out what you can.  Bring it to the class.
http://www.usscouts.org/usscouts/mb/worksheets/Chess.pdf

This is a good web site for training and learning strategies.
All the problems are “find the best move.” About 1/3 of the problems are directed mates; the other 2/3 ask you to find an exchange that improves your position by at least 2 points, such as winning a rook for a knight.
http://chesstempo.com/chess-tactics.html

This one is a second choice; it is better suited to beginners learning
Mate in 1 and Mate in 2 problems, though not all of its problems are mates. Start with Level 1 for easy problems like “mate in 1.”
http://chessok.com/?page_id=359

Training on Openings
See my index page for Chess Openings.
These are a set of PDFs showing board positions at each half-move, for both White and Black. For each position there are 2 to 5 advantageous responses shown, plus several to avoid.

See this website for looking up Openings by Name.
http://www.chess.com/tactics/?id=732&practice=1
Click the Learn menu, then Book Openings by Name.

An opening tree website. It shows the favorite moves from every opening position. The only drawback is that it does not give the name of the opening position.
http://chessok.com/?page_id=352


Top Level index of Chess Openings

First Moves for White and Black’s response (PDF)

After: 1.e4 c5 (Sicilian Defense)  (PDF)  White’s responses.

In each PDF, the position a player faces is laid out on a board. The 4 to 8 most popular moves are shown with arrows: green arrows mark the best of the set, cyan arrows good ones, and red arrows moves toward weaker positions. There are many more possible moves, but these are the most common; other moves are presumed to be weak.


For instance, White has many moves to open.
c4 (English Opening) is arguably the best: if Black replies with Nf6, White counters with Nc3, reaching a position where White wins 40% and Black wins 29%.

But c4 isn’t the only good move. e4 (King’s Pawn) is the most popular, but Black will probably respond with c5 (Sicilian Defense), leaving White after 2.Nc3 with a 37%/34% advantage.
Other good and common White first moves are d4, Nf3, and g3.

But should White open with one of the red arrows, Black could take advantage.
For instance, against f4 Black will probably counter with d5 or g6, and White’s best second move, Nf3, still leaves White at a 35%/39% disadvantage.
Likewise, b3, Nc3, and b4 also lead to weak positions for White.

Moves not diagrammed, such as Na3 or h4, are rarely played and are probably roads to defeat.

In this first PDF, following White’s first moves (as illustrated above) are Black’s best responses to each of the 5 best White moves (c4, e4, d4, Nf3, g3).
First Moves for White and Black’s response (PDF)

The other PDFs all start from commonly used positions after White’s and Black’s first moves, such as
After: 1.e4 c5 (Sicilian Defense)  (PDF)  White’s responses.

Programming Main Page

NoSQL Reference Links

http://globalbigdataconference.com/news/6712/its-time-to-say-yes-to-nosql.html

It’s Time To Say Yes to NoSQL

Data Volume (scale out on commodity hardware)
Elasticity (scale up and scale down for cloud work)
Administration (simpler data models lead to lower admin)
Flexible Data Models (less rigid than an RDBMS, allows for easier innovation)
Economics (cheap generic servers)

 

Two Cons against NoSQL. Part I.

http://www.odbms.org/blog/2012/10/two-cons-against-nosql-part-i/

Prof. Roberto V. Zicari is editor of ODBMS.ORG (www.odbms.org).

Transferring data between NoSQL databases is hard. But it is still early; JSON is an emerging standard that is making it easier.

There is no standard way to access a NoSQL data store. How do you get a query of the database into Excel?

Comment from Dwight Merriman: …Also I think there is a bit of an illusion of portability with relational. There are subtle differences in the SQL, medium differences in the features, and there are giant differences in the stored procedure languages.

Export to Excel:

Dwight Merriman: So with MongoDB what I would do would be to use the mongoexport utility to dump to a CSV file and then load that into excel. That is done often by folks today. And when there is nested data that isn’t tabular in structure, you can use the new Aggregation Framework to “unwind” it to a more matrix-like format for Excel before exporting.

You’ll see more and more tooling for stuff like that over time. Jaspersoft and Pentaho have mongo integration today, but the more the better.
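
Here is a minimal sketch of that workflow in Python with pymongo, under assumptions of my own: a local mongod and a hypothetical “orders” collection whose documents nest line items in an “items” array (adjust the names to your data). The $unwind stage flattens the array into one row per line item before writing a CSV that Excel can open; for already-flat collections, the mongoexport utility with --type=csv does the job directly.

    # Sketch only: "shop", "orders", and the field names are hypothetical.
    import csv
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    coll = client["shop"]["orders"]

    # $unwind turns each element of the nested "items" array into its own
    # document, giving the tabular shape Excel expects; $project picks columns.
    pipeline = [
        {"$unwind": "$items"},
        {"$project": {"_id": 0, "order_id": "$_id", "customer": 1,
                      "sku": "$items.sku", "qty": "$items.qty"}},
    ]

    with open("orders_flat.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["order_id", "customer", "sku", "qty"])
        writer.writeheader()
        for row in coll.aggregate(pipeline):
            writer.writerow(row)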

A Critique of the BERKELEY EARTH scalpel: Cut away the signal, Analyze the noise

Composed 120814 – BEST Part 3 (unpublished),
Published to this blog on 140316

Until I see a defense of the Berkeley Earth BEST process in the context of Fourier analysis (frequency content) and information theory, I cannot trust any conclusion BEST reaches on time scales longer than 10 years.

I stated my theoretical objection to the BEST scalpel back on April 2, 2011 in Expect the BEST, plan for the worst.

I believe the Fourier domain of the BEST process has received far too little scientific and theoretical attention. From time to time I will try to draw attention to the frequency domain of BEST until I see a defense. For instance, this post at Climate Audit, Nov. 1, 2011.

I have a fundamental problem with the use of any scalpel-and-suture technique in the context of determining long-term temperature trends. My objections are based upon Fourier analysis and information content. My argument is summarized in these bullet points:

1. The Natural climate and Global Warming (GW) signals are extremely low frequency, less than a cycle per decade.

2. A fundamental theorem of Fourier analysis is that the frequency resolution is
df = dω/(2π) = 1/(N·dt) Hz,
where dt is the sample interval and N·dt is the total length of the digitized signal. (A numerical sketch follows this list.)

3. The GW climate signal, therefore, is found in the very lowest frequencies, low multiples of dw, which can only come from the longest time series.

4. Any scalpel technique destroys the lowest frequencies in the original data.

5. Suture techniques recreate long term digital signals from the short splices.

6. Sutured signals contain very low frequency content, low frequencies which could NOT exist in the splices. Therefore the low frequencies, the most important part for the climate analysis, must be derived entirely from the suture and the surgeon wielding it. Where is the low-frequency original data to control the results?
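
A small numerical sketch of point 2, using synthetic monthly data of my own (not BEST’s station records): a 60-year cycle is easy to resolve in a 120-year record, but a 10-year fragment’s coarsest nonzero frequency bin already sits at 0.1 cycles per year, far too high to contain it.

    # Toy illustration of df = 1/(N*dt); the 60-year "climate" cycle is a synthetic signal.
    import numpy as np

    dt = 1.0 / 12.0                        # monthly samples, in years
    years_full, years_frag = 120, 10       # a long record vs. a scalpel-length fragment
    t = np.arange(0, years_full, dt)

    # A slow 60-year cycle buried in monthly "weather" noise.
    rng = np.random.default_rng(0)
    signal = np.sin(2 * np.pi * t / 60.0) + 0.5 * rng.standard_normal(t.size)

    def resolution(n_samples, dt):
        # Lowest nonzero frequency an FFT of the record can resolve, in cycles/year.
        return 1.0 / (n_samples * dt)

    print("full-record resolution   : %.4f cycles/yr" % resolution(t.size, dt))               # ~0.0083
    print("10-yr fragment resolution: %.4f cycles/yr" % resolution(int(years_frag / dt), dt)) # 0.1

    # The full record shows a spectral peak near 1/60 = 0.0167 cycles/yr; a fragment's
    # first nonzero bin is already at 0.1 cycles/yr, so a 60-year cycle cannot appear
    # in any fragment's spectrum.
    freqs_full = np.fft.rfftfreq(t.size, dt)
    spec_full = np.abs(np.fft.rfft(signal))
    frag = signal[: int(years_frag / dt)]
    freqs_frag = np.fft.rfftfreq(frag.size, dt)
    print("full-record peak at          : %.4f cycles/yr" % freqs_full[1:][np.argmax(spec_full[1:])])
    print("fragment's lowest nonzero bin: %.3f cycles/yr" % freqs_frag[1])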

Perhaps it can be argued, demonstrated, and proved, that somehow the low frequencies were extracted, saved, and returned to the signal intact. Statements like the following from Muller (WSJ Eur 10/20/2011) make me believe that most people do not appreciate this problem.

Many of the records were short in duration, … statisticians developed a new analytical approach that let us incorporate fragments of records. By using data from virtually all the available stations, we avoided data-selection bias. Rather than try to correct for the discontinuities in the records, we simply sliced the records where the data cut off, thereby creating two records from one.

“Avoided data-selection bias” – and embraced high-frequency selection bias and created a bias against low frequencies. There is no free lunch here. Look at what is happening in the Fourier domain. You are throwing away low-frequency climate signal and keeping the higher-frequency weather noise. How can you possibly be improving the climate signal-to-noise ratio?

Climate is a low-frequency signal. The farther you go back in time, the lower the frequencies you need in your analysis. Yet what does BEST do with all their data? Throw all the temperature records into the Cuisinart, chop them into bits to “avoid data selection bias” (and induce who knows what sort of bias in the choice of where to use the scalpel), and unavoidably eliminate all the low frequencies in the data.

Yes, somehow they take all these fragments and glue them back together to be able to present a graph of temperatures from 1750 to 2010. That graph has low-frequency content – but where did it come from? The low frequencies are counterfeit – contamination from the gluing process, manufacturing what appears to be low-frequency signal by fitting high-frequency data. How can low frequencies be recreated from lots of fragments containing only high frequencies?
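
To make the question concrete, here is a deliberately crude toy suture of my own construction (not the BEST algorithm): a century-long warming trend is cut into 10-year fragments, each fragment keeps only its internal variation, and the pieces are re-glued by matching endpoints. The recovered century trend then rests on a chain of noisy join estimates rather than on any low-frequency data, and its scatter shows it.

    # Toy data and a crude endpoint-matching suture; for illustration only.
    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(1900, 2000)
    true_trend = 0.01 * (years - 1900)          # 1 C per century: the "climate" signal

    def rebuild_from_fragments(noise_sd=0.3, frag_len=10):
        record = true_trend + noise_sd * rng.standard_normal(years.size)
        fragments = [record[i:i + frag_len] for i in range(0, years.size, frag_len)]
        # The scalpel: each fragment keeps only its internal (high-frequency) variation.
        fragments = [f - f.mean() for f in fragments]
        # The suture: chain fragments by matching each start to the previous end.
        rebuilt = [fragments[0]]
        for f in fragments[1:]:
            rebuilt.append(f + (rebuilt[-1][-1] - f[0]))
        return np.concatenate(rebuilt)

    # Century trend estimated from one uncut record vs. from 200 cut-and-sutured rebuilds.
    uncut = true_trend + 0.3 * rng.standard_normal(years.size)
    slope_uncut = np.polyfit(years, uncut, 1)[0]
    slopes_sutured = [np.polyfit(years, rebuild_from_fragments(), 1)[0] for _ in range(200)]
    print("true slope           : 0.0100 C/yr")
    print("uncut-record estimate: %.4f C/yr" % slope_uncut)
    print("sutured estimates    : mean %.4f, spread (std) %.4f C/yr"
          % (np.mean(slopes_sutured), np.std(slopes_sutured)))
    # The sutured estimates scatter far more than a trend fitted to uncut data would:
    # their low-frequency content comes from the chain of joins, and that error accumulates.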

Amplitude and phase vs. frequency is the dual formulation of amplitude vs. time; there is a one-to-one correspondence. If you apply a filter that eliminates low frequencies in the Fourier domain, and a scalpel does exactly that, how does that content ever come back?

A beautiful example of the frequency content I expect to find in century-scale, uncut temperature records is Fig. 2 of Liu 2011, also discussed at WUWT on 12/7/2011: In China there are no hockey sticks. The grey area on the left of the Fig. 2 chart is the low-frequency region, the climate signal. In the Liu study, a lot of the power is in that grey area. It is this portion of the spectrum that BEST’s scalpel removes! Fig. 4 of Liu 2011 (at JoNova) is a great illustration of what happens to a signal as you first add the lowest frequency and then successively add higher frequencies.


I’m a geophysicist. Geophysical seismic processing is heavily dependent upon Fourier analysis. What I see BEST doing is eliminating low frequencies with the scalpel, performing some magic semi-regional homogenization of the high-frequency segments behind the scenes, then returning with a result that has “better” low frequencies in it. I would sooner believe that the 2nd Law of Thermodynamics could be violated. How did they get something for nothing? How did they throw away the low frequencies only to get back “better low frequencies” in the result?

In petroleum exploration, seismic data is recorded with band-pass instrumentation. The highest frequencies cut off anywhere from 60 to 250 Hz, depending upon the care and expense in acquisition. But the data are also limited on the low side, with 6 to 10 Hz as the lowest frequency the receivers can record. Geophysicists gather lots of data: many shots heard by many receivers, repeated from many locations. There is lots of noise in the recordings, but by “stacking” the data, combining recordings of the same place over many shots, signal-to-noise increases. One of the key steps in the processing is finding the best “stacking velocity”, the average earth velocity used to correct for “move-out” (the travel-time differences between source-receiver pairs) and for migration.

I explain this because what BEST is attempting to do is very similar to what seismic processors do when they invert the seismic to obtain a full impedance profile. What must be understood is that to get a full inversion you need two things:
(1) the band-pass seismic data for the high-frequency detail, and
(2) the velocity-density profile, which provides the low-frequency information.
When we invert, we integrate the seismic data, but that means we integrate the noise too, so error grows with time. For (2), we get the velocity information from the velocity studies that maximize the signal-to-noise in the stacked data. Density can be estimated from the anticipated rock, depth, and fluid content. It is very model dependent, but it is controlled by the stacking and move-out process and is an independent control on the cumulative inversion error from the band-pass data in (1).
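
A minimal sketch of those two ingredients, with synthetic numbers and deliberately simplified physics (reflectivity taken as half the time derivative of log-impedance, and the velocity-density trend simply assumed known): integrating the band-pass trace alone loses the slow impedance trend and lets the integrated noise drift, and the low-frequency model supplies what the seismic cannot.

    # Synthetic trace; the 6 Hz low cut and the "known" low-frequency model are assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    n, dt = 4000, 0.002                                   # an 8 s trace sampled at 2 ms
    t = np.arange(n) * dt
    # Log-impedance: a slow trend plus detail within the recordable band.
    true_ln_z = 8.0 + 0.08 * t + 0.02 * np.sin(2 * np.pi * 0.3 * t) + 0.01 * np.sin(2 * np.pi * 25.0 * t)
    reflectivity = 0.5 * np.gradient(true_ln_z, dt)       # reflectivity ~ (1/2) d(ln Z)/dt

    # What band-pass instrumentation records: everything below ~6 Hz is gone, plus noise.
    spec = np.fft.rfft(reflectivity)
    freqs = np.fft.rfftfreq(n, dt)
    spec[freqs < 6.0] = 0.0
    recorded = np.fft.irfft(spec, n) + 0.05 * rng.standard_normal(n)

    # Ingredient (1): inversion = integration of the recorded trace.  The missing low
    # band means the slow impedance trend cannot come from the seismic alone, and the
    # integrated noise drifts as time increases.
    ln_z_from_seismic = 2.0 * np.cumsum(recorded) * dt
    print("ln-impedance change over the trace: true %.2f, from band-pass seismic alone %.2f"
          % (true_ln_z[-1] - true_ln_z[0], ln_z_from_seismic[-1] - ln_z_from_seismic[0]))

    # Ingredient (2): replace the unreliable low-frequency part of the seismic estimate
    # with a smooth trend from the velocity-density model (here simply assumed known).
    low_freq_model = 8.0 + 0.08 * t
    seismic_detail = ln_z_from_seismic - np.poly1d(np.polyfit(t, ln_z_from_seismic, 1))(t)
    ln_z_constrained = low_freq_model + seismic_detail
    print("with the low-frequency model added : %.2f" % (ln_z_constrained[-1] - ln_z_constrained[0]))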

Returning to BEST, all those fragments of temperature records are equivalent to the band-pass seismic data. Finding the long term temperature signal is equivalent to inverting the seismic trace, but the error in the data must also accumulate as you go back in time. Since the temperature record fragments are missing the lowest frequencies, where is the low frequency control in the BEST process? In the seismic world, we have the velocity studies to control the low-frequency result.

What does BEST use to constrain the accumulating error? What does BEST use to provide valid low-frequency content from the data? What is the check that the BEST result is not just a regurgitation of the modelers’ preconceptions and contamination from the suture glue? Show me the BEST process that preserves real low-frequency climate data from the original temperature records. Only then can I even begin to give Berkeley Earth results any credence.

 

Here is a map of the stations from the Global Summary of Day TAVG and the Monthly Climatic Data of the world, 25,569 sites, colored by elevation. No distinction yet between stations with 1 year of records or 50 years.
