All posts by erickclasen

About erickclasen

This is my Blog for writing about technical items and my other thoughts. I also use it to experiment with WordPress.

Dr. Seheult Coronavirus Updates

A very informative series of videos, all of them dealing with Covid-19 to date; Dr. Seheult releases these update videos regularly. He is actively treating Covid-19 patients, and from what I remember he is an internist with pulmonary and sleep specialties. Below is a sampling of the videos that I found interesting, mostly centered on prevention and treatment, covering new approaches such as Remdesivir and Chloroquine. Supplements are also covered, both for general health and for supporting the body's immune system. There is also some general information on how viruses work, which I had long forgotten over the years. Some of the ideas in the videos are fairly novel, such as the importance of sleep for boosting the immune system and the positive effects of forest bathing.

UPDATE: There has been some censorship of Dr. Seheult's Coronavirus videos on YouTube; thankfully they are all still available on MedCram.

 

Linux Mail From the Command Line

Local Mail Using Postfix and Mail

Mail on the command line was once used much more often. When I was in college in the 1990s it was one of the easiest ways to get mail on campus and off. It was taught to technical and non-technical people alike as part of orientation, and you were given written instructions right at registration time, along with the email account that was made up for you: a student ID number followed by @binghamton.edu. In fact, in the last year I was there they finally got rid of this cumbersome scheme and started allowing users to have email addresses with their real names.

Getting to mail back then from the command line involved logging into your Unix account from one of the many terminals spread throughout the campus, or remotely via dialup using rlogin, which was insecure but acceptable at the time. Then you could use either mail or pine, which was a bit more sophisticated as it was based on the pico editor, of which the popular nano editor is a derivative. Pine at least has somewhat of an interface that accepts up and down arrow movement and displays the shortcuts at the bottom. The standard mail program is a bit like vi: spartan but still useful.

Having access to mail locally on the command line is useful when you are running cron tasks or any other automated scripts that call other code, as it lets you get notified when they have run and, most importantly, whether they had errors.

The other alternative is to set up ssmtp and have mail sent out of the local machine using SMTP through another established account. Of course you can also set up a full-blown mail server, but that can be overkill if you are just monitoring what is happening on a few machines that you regularly log in to.
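As an aside, once postfix is accepting mail on localhost, your own scripts can send notifications through it too. Below is a minimal Python sketch of the idea, assuming postfix is listening on localhost port 25; the user name and message are made up for illustration.

# Minimal sketch: send a notification through the local MTA on localhost:25.
# Assumes postfix (or similar) is listening locally; the user name is made up.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "backup script finished"
msg["From"] = "youruser@localhost"   # hypothetical local user
msg["To"] = "youruser@localhost"
msg.set_content("The nightly backup completed without errors.")

with smtplib.SMTP("localhost", 25) as smtp:
    smtp.send_message(msg)

Called at the end of a script, this gives the same kind of local notification that cron produces for job output.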

Setting up Local Mail

Below is a great Github post on how to set up local mail on a Linux Machine. I followed it and added a variation to get local mail running using the command line mail program.

It works great. I really liked the instructions: very easy to follow. I was glad I found this, as I thought it might be tricky, but with these instructions it took only a few minutes on each machine.

Setup a Local Only SMTP Email Server (Linux, Unix, Mac)

 

My Additions

Server

For use with the mail program, it might not be necessary to have a localhost.com entry in the hosts file. I have not tested this.

I followed the tutorial up to and including the step of restarting postfix.

Then I installed mail instead of Thunderbird.

On the server, which only has a CLI, I wound up using mail instead of T-bird, installed via sudo apt-get install mailutils.
It can be tested by sending a message to yourself using…
mail -s "test" (your user name)@localhost

Hit Enter at the CC: prompt to skip it, type a message, and end with Ctrl-D.

Then enter the command

mail

…and you should see the email. Hit Enter at the ? prompt and the message is presented.

Enter q to quit.

Another test message can be sent to check that a message to any address gets delivered to you. As long as the domain is localhost it will work and catch it. Other domains will fail, and the system will send you an email reporting the message as undeliverable.
It works great for getting the cron messages on the machine.
Just type mail and you get a CLI email list. The mail basics are: q to quit, m to compose, and Enter or the spacebar to move through messages. Entering a question mark, ?, brings up command help.
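If you ever want to scan those cron messages from a script instead of the mail client, Python's standard mailbox module can read the local spool. A small sketch, assuming the spool is an mbox file at /var/mail/youruser; the path and user name are assumptions, so check your system.

# Sketch: list the sender and subject of each message in the local spool.
# /var/mail/youruser is an assumed path; substitute your own user name.
import mailbox

for key, message in mailbox.mbox("/var/mail/youruser").items():
    print(key, message["From"], message["Subject"])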

Desktop Install

One gotcha that caught me is that I already had T-bird installed, so it already had a default SMTP server. For me this required what I would call step 6A, adding a local SMTP server.
6A. In the pane above "Account Actions", scroll to the bottom and select "Outgoing Server (SMTP)".
Click Add
For description I wrote Local SMTP
Server Name: localhost
Port: 25
Username : (the user name)@localhost
Authentication Method: Password, transmitted insecurely
Connection Security: None

Then I went back into the account settings for the mail account set up in step 6 and set the Outgoing Server (SMTP) to the Local SMTP.

Host File Aliases

Also, in /etc/hosts you can put in localhost.com as an alias and it works fine, like this…
127.0.0.1       localhost localhost.com

This is the way to put aliases in a hosts file. For example, you can have the machine name and then a shortcut to it if the machine is set to a static IP. This way you can just type server to SSH to it, and you can use that short name wherever you want in scripts and so on.

192.168.1.10  Dell-Optiplex-620     server

127.0.0.1 localhost localhost.com

 

 

Bull market 2009 to 2020

Trading Resources Books

Speculation in stocks will never disappear. It isn’t desirable that it should. It cannot be checked by warnings as to its dangers. You cannot prevent people from guessing wrong no matter how able or how experienced they may be. Carefully laid plans will miscarry because the unexpected and even the unexpectable will happen. Disaster may come from a convulsion of nature or from the weather, from your own greed or from some man’s vanity; from fear or from uncontrolled hope. – Reminiscences of a Stock Operator


This post is a resource dump of books for trading.

There are some good books in this list. The ones that cover the psychological aspects of trading are worth the time to read once you have gotten past the basics. It helps to have traded at least a little, to have some hands-on experience. It is best to start out with a few hundred dollars and build up the account slowly as you get the hang of trading. I wish I had read these books at the beginning, around my first half dozen trades, at the same time that I was learning the technical aspects of trading.

 

Subject: The Road Less Traveled by M. Scott Peck

https://www.apnamba.com/Ebooks-pdf/The%20Road%20Less%20Traveled.pdf

The first section, on discipline, is a good read for the psychology of trading; think of discipline in terms of trading.


Subject: Technical Analysis (TA) of the Financial Markets by John Murphy

A classic TA book. This, along with Perry J. Kaufman's New Trading Systems and Methods, makes a good reference for the nuts and bolts of trading.

http://194.145.209.129/bk/wp-content/uploads/2020/02/download.php?file=Technical-Analysis-of-the-Financial-Markets-8freebooks.net_.pdf

Subject: Another TA Book, Open Source

This book is a good reference. It compiles a lot of information that is in the public domain, mostly via Wikipedia, in one place. At the end of each chapter there are good notes, references and further reading. It is good to have on hand when you want to look something up quickly, or for beginners to get an overview of technical analysis.

http://www.mrao.cam.ac.uk/~mph/Technical_Analysis.pdf

Zen in the Markets: Confessions of a Samurai Trader by Edward Alan Toppel

Worth a read, especially for anyone who has been exposed to Asian culture. It still makes sense if you're not familiar with the culture, but if you are, it's going to be a deeper read.

Confessions of a Samurai Trader Ebook https://www.forexfactory.com/attachment.php/2566801?attachmentid=2566801&d=1510983772

Trading for a Living by Alexander Elder

Very good. Alexander Elder is a psychiatrist turned trader. He is also the inventor of the Triple Screen trading system, which is outlined in the book.

It is available as a PDF download: http://www.saham-indonesia.com/Ebooks/Technical%20Analysis/Elder%20Alexander%20-%20Trading%20For%20A%20Living.pdf

Reminiscences of a Stock Operator by Edwin Lefèvre

There is nothing like losing all you have in the world for teaching you what not to do. And when you know what not to do in order not to lose money, you begin to learn what to do in order to win. Did you get that? You begin to learn!

Very good, a must read! This book is supposedly handed out to new employees at Goldman Sachs as a first assignment. I can see why; it really is an eye-opener. It is a book I wish I had read right at the beginning, when I first considered trading myself.

https://ia803009.us.archive.org/4/items/JesseLivermoreReminiscencesOfAStockOperator/Jesse%20Livermore%20Reminiscences%20Of%20A%20Stock%20Operator.pdf

General Economics and Investing

While not necessarily about trading, it's helpful to have a broad selection of different types of knowledge in your latticework of the mind, as Charlie Munger calls it. In case you didn't know, Charlie Munger is Warren Buffett's right-hand man at Berkshire Hathaway.

Investing: The Last Liberal Art by Robert G. Hagstrom

The table of contents is what got my attention with this book. I was a bit caught off guard at first by the topics and the connection they might have to investing. But a short skim confirmed the author was right on target with all of the topics covered.

Table of Contents: Investing: The Last Liberal Art

I didn't know what to think of this book when I first saw it. But it's different from most investing books, in a good way. It's more about building a mindset, a latticework in your mind to pull from, to better think about investing. Clearly, knowing subjects beyond economics and finance gives you a background of general knowledge to pull ideas from. I agree, as this has been my own experience. The book went from one I was skeptical about to a favorite after a few chapters. Initially the chapter titles caught my attention, as they were unlike any I have seen in other investing books.

 

Principles of Economics by Alfred Marshall

Originally from 1890, it's a classic, and it's still used in some college curricula today. This book can provide a background in economics. With trading, you need to be able to understand the view from above as well: understanding how economics works provides a high-altitude view, far above the landscape of fundamentals and technical analysis. It's important to at least have some understanding of the bigger picture, a macro view of economics, from the beginning when you trade.

This is the 8th edition of what is regarded as the first "modern" economics textbook, running through various editions from the 19th into the 20th century. The final 8th edition was Marshall's most used and most cited.

http://files.libertyfund.org/files/1676/Marshall_0197_EBk_v6.0.pdf

Crypto Compares Big Gains Q1 2019

Trading Resource Videos

There are many thousands of people who buy and sell stocks speculatively but the number of those who speculate profitably is small. As the public always is “in” the market to some extent, it follows that there are losses by the public all the time. The speculator’s deadly enemies are: Ignorance, greed, fear and hope. All the statute books in the world and all the rules of all the Exchanges on earth cannot eliminate these from the human animal. Accidents which knock carefully conceived plans skyhigh also are beyond regulation by bodies of coldblooded economists or warm-hearted philanthropists.– Reminiscences of a Stock Operator

 


The following post contains video resources. Some of this material, especially the material from Francis Hunt and David Paul on the psychology and mindset of trading, is quite important. It is best to learn about the psychology and mindset of trading while learning the other typical trading topics such as technical analysis and fundamentals. It is just as important in the long run: discipline and the right mindset go a long way toward preventing losses and maximizing gains over the long haul.

Channels

Josh Olszewicz: Crypto + Technical Analysis, Building your trading toolbelt

Josh Olszewicz produces market updates on a regular basis, covering BTC and ETH primarily. There is plenty to learn from and, in my opinion, the insights are reliable.

https://www.youtube.com/channel/UC587BAG9cLTYtJ7Q4CqcOnw

Francis Hunt: The Crypto Sniper

Francis Hunt produces periodic crypto market updates. He also trades legacy markets and provides useful insights from both realms, covering currencies plus gold and silver.

https://www.youtube.com/channel/UCdC4a2KquFV1F4O21mW3K7g

Aswath Damodaran: Fundamentals and Value Investing in Stocks.

He is a professor, so get ready to study. There are many videos on his channel: a few updates here and there, but most of it is in-depth, college-level training on topics such as valuation and analyzing the fundamentals of companies. Good information for a background on the stock market, with plenty of videos to pick and choose from as you dive into a specific topic.

https://www.youtube.com/channel/UCLvnJL8htRR1T9cbSccaoVw

 

UKspreadbetting

Basics. Well done videos, most of them nice and short, each covering a specific topic. Everything that applies to spread betting can be applied to trading as well, so don't let the name fool you.

https://www.youtube.com/channel/UCnKPQUoCRb1Vu-qWwWituGQ

Trading 212

Basics. I watched a few videos off of this channel; there is a bit of overlap with UKspreadbetting, but the videos are short and concise. Worth looking at.

https://www.youtube.com/channel/UCfWQuGYMfhZk9qq5x00sb9w


Series Worth Watching

Francis Hunt Interview Series on UK Spread Betting

These two in the series are especially important…
Trading Secrets of the Mind: Master the Emotional Side of Trading
Improving the Mindset Game in our Trading

David Paul on UK Spread Betting

More David Paul Videos

 

https://www.youtube.com/watch?v=MGglyvc8d58&list=WL&index=113&t=0s

Subject: David Paul Tips from video

Asset above its 89-day MA. Market above its 21-day MA and rising; this will get the hit rate to 80%.
Probability matrix. Pattern to finesse entry.
1-2% at risk per trade.
Plan and repeated perfect execution. Build neural pathways after 8-30 trades.
William O'Neil, How to Make Money in Stocks.
From: The Psychology of Trading and Investing.
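Read as code, those notes amount to a simple trend filter. Here is one possible Python sketch of that reading, using pandas; the rule and the series names are my interpretation for illustration, not something taken verbatim from the video.

# One reading of the notes above: only allow long entries when the asset is
# above its 89-day MA and the market's 21-day MA is rising.
import pandas as pd

def trend_filter(asset_close: pd.Series, market_close: pd.Series) -> pd.Series:
    asset_ma89 = asset_close.rolling(89).mean()
    market_ma21 = market_close.rolling(21).mean()
    # True on days where an entry would be allowed under this filter.
    return (asset_close > asset_ma89) & (market_ma21.diff() > 0)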

Audio book: Trading for a Living: Psychology, Trading Tactics, Money Management

 

Jim Simons

Not trading information specifically, but he is a legend: mathematician and code breaker turned trading genius. He co-founded Renaissance Technologies in East Setauket, New York, a town away from where I grew up. Renaissance Technologies, RenTech, makes huge yearly gains in its Medallion Fund. The company does a lot of quant work and hires PhD scientists, mathematicians, and so on. Working for this company would be amazing, to say the least; there is so much to learn there, working with the talented people he has helped pick and shape over the years.

Renaissance’s flagship Medallion fund, which is run mostly for fund employees,[8] is famed for the best track record on Wall Street, returning more than 66 percent annualized before fees and 39 percent after fees over a 30-year span from 1988 to 2018.[9][10]

DMAC BTC Trading Example With Tuning

I got this example, originally called simple-strat.py, from a Datacamp article. I have my changes to the code up on Github.

Tunable DMAC

Basically, the modifications to the original code incorporate a Dual Moving Average Crossover (DMAC) strategy that adjusts the averages to pick optimum values. DMAC is a very basic trend trading strategy. Other types of strategies include mean reversion, hedging such as pairs trading, arbitrage, and buying and selling across the bid-ask spread as a market maker, so-called earning the spread.

DMAC Basics

A dual moving average crossover is exactly what the name says. There are two moving averages, one short and one long. When the short one rises above the long one, it is a buy signal. When they cross the other way, guess what: sell. This is about as dirt simple as a strategy can get; a single moving average, where price crosses one average, would be the simplest moving average strategy. The DMAC used in the example is not fancy at all: no stops, no screening, no failure-mode protections. I have found that about 10% of designing a good algorithmic strategy is in the strategy itself, another 10-20% is in the tuning, generalizing and backtesting. The rest is protecting the strategy from itself, such as screening out bad trades. The absolute worst are price shocks, which can't be predicted. A sketch of the core signal follows.
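As a concrete illustration of the signal logic (a sketch of the idea, not a paste from the repo), the crossover can be computed in a few lines of pandas. The column names below match the printout further down in this post.

# DMAC sketch: signal is 1.0 while the short MA is above the long MA.
# positions marks the crossings: +1.0 = buy, -1.0 = sell.
import pandas as pd

def dmac_signals(close: pd.Series, short_window: int = 29,
                 long_window: int = 59) -> pd.DataFrame:
    out = pd.DataFrame(index=close.index)
    out["short_mavg"] = close.rolling(short_window, min_periods=1).mean()
    out["long_mavg"] = close.rolling(long_window, min_periods=1).mean()
    out["signal"] = (out["short_mavg"] > out["long_mavg"]).astype(float)
    out["positions"] = out["signal"].diff()
    return out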

Tuning

Initially it was coded to work with stocks, but I revised it to take in BTC instead. I also took it one level up and had it automatically tune for the best combination of the short and long averages for the DMAC strategy; that loop is sketched below. This is a bit better than sitting there and fiddling with the parameters by hand. The pitfall is that the model can be overfit to the backtest data and then perform poorly on new data. It is best to have a model that generalizes a bit. Sometimes this can be accomplished by actually dumbing the process down a bit or using a stochastic approach. The more parameters in the model, the more likely overfitting will occur as well.
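The tuning loop itself is nothing more than a brute-force grid search over (short, long) pairs, keeping whichever pair ends the backtest with the most cash. A simplified sketch of the idea, where backtest() is a stand-in for the repo's full portfolio simulation and returns the final account value:

# Brute-force tuning sketch. backtest() is a stand-in for the full simulation;
# it should return the final bank for one (short_window, long_window) pair.
def tune_dmac(close, backtest, max_short=30, max_long=100):
    best = (0, 0, float("-inf"))
    for short_window in range(1, max_short + 1):
        for long_window in range(short_window + 1, max_long + 1):
            bank = backtest(close, short_window, long_window)
            if bank > best[2]:
                best = (short_window, long_window, bank)
                print("Short AVG,Long AVG,Best Bank",
                      short_window, long_window, bank)
    return best

This mirrors the improving "Short AVG,Long AVG,Best Bank" lines in the output below.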

Aside on Ensemble Backtesting

One approach that I typically use when tuning an algorithmic model's parameters is an ensemble approach. I have used this successfully with genetic algorithms, such as differential evolution. I have the backtest tuning take random slices of the time series, starting from random points within it, run a backtest on each slice, and then average the results together to form the ensemble; a sketch follows. This helps with generalization but won't work for every kind of model. It has to do with the nature of the local maxima and minima that the model might rise to or fall into: with some models, averaging may just average out to the plain rather than the peaks and valleys. This has to be looked at carefully. But I am getting off topic here.
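A minimal sketch of that ensemble idea, assuming close is a pandas Series longer than slice_len and backtest() scores one parameter set on one slice:

# Ensemble backtest sketch: score the same parameters on random slices of the
# series and average the scores, to discourage overfitting to any one period.
import random

def ensemble_score(close, backtest, short_window, long_window,
                   n_slices=10, slice_len=365):
    scores = []
    for _ in range(n_slices):
        start = random.randrange(0, len(close) - slice_len)
        window = close.iloc[start:start + slice_len]
        scores.append(backtest(window, short_window, long_window))
    return sum(scores) / len(scores)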

 DMAC versus Buy and Hold

The idea here is to do some Monday morning quarterbacking. When I was in industry this was always a thing on Mondays: no matter what the sport, everyone threw around the coulda, woulda, shouldas, probably more so if there was money riding on the game. Well, if you are a trader or investor, there is real money riding on the game, so sometimes it is worth looking back and seeing how a trading strategy would have worked against the classic long-term buy and hold, a.k.a. HODL, hold on for dear life.

If any of the terminology or details above seem fuzzy, check out the Datacamp article.

Fantasy Account

If we had bought BTC on 1/1/2019, sold on 12/31/2019, and had a magic wand to make the perfectly tuned DMAC model, would we have done better than buy and hold?

We start with $100K on 1/1/2019 and cash out on 12/31. What do we have?

DMAC with a 29 day short and 59 day long average…

Short AVG,Long AVG,Best Bank 29 59 902908.9599609375

$900K, the clear winner.

Buy and hold comes in at $182K:

HODL Gains: 1.8258754145331861

(HODL Gains here is the final-to-initial ratio: $100K times 1.8259 is about $182.6K.)

Obviously, this assumes perfect trades; like a Carnot cycle engine without friction, it's not realistic. In the real world there are fees, slippage, price shocks, badly picked parameters and other gotchas.

But even if you threw a dart at pairs of numbers for the averages, at least for 2019, odds are the DMAC would have beaten buy and hold for the year. This is not always the case; in years past, buy and hold would have beaten the DMAC. See the printout here for a 2010-2017 run.

2019 BTC Price with Average and Buy and Sell points (Up Arrow = BUY, Down Arrow = SELL)
2019: Amount of cash made on paper

output from simple-strat-loop-backtest-2019.py

Get Data
Run Model
Short AVG,Long AVG,Best Bank 1 1 100000.0
Short AVG,Long AVG,Best Bank 1 2 611522.802734375
Short AVG,Long AVG,Best Bank 1 4 646585.4736328125
Short AVG,Long AVG,Best Bank 1 16 652523.6328125
Short AVG,Long AVG,Best Bank 1 17 689456.2744140625
Short AVG,Long AVG,Best Bank 3 89 690951.904296875
Short AVG,Long AVG,Best Bank 4 90 696133.9599609375
Short AVG,Long AVG,Best Bank 6 86 707596.8994140625
Short AVG,Long AVG,Best Bank 9 82 709683.1787109375
Short AVG,Long AVG,Best Bank 9 83 732913.4521484375
Short AVG,Long AVG,Best Bank 10 83 742904.150390625
Short AVG,Long AVG,Best Bank 13 17 746094.677734375
Short AVG,Long AVG,Best Bank 14 81 762164.5263671875
Short AVG,Long AVG,Best Bank 22 95 781311.2548828125
Short AVG,Long AVG,Best Bank 27 55 814494.62890625
Short AVG,Long AVG,Best Bank 29 56 820056.7626953125
Short AVG,Long AVG,Best Bank 29 57 886127.4658203125
Short AVG,Long AVG,Best Bank 29 59 902908.9599609375
signal   short_mavg    long_mavg  positions
Date
2019-01-01     0.0  3843.520020  3843.520020        NaN
2019-01-02     0.0  3893.464722  3893.464722        0.0
2019-01-03     0.0  3874.556885  3874.556885        0.0
2019-01-04     0.0  3870.347046  3870.347046        0.0
2019-01-05     0.0  3865.316553  3865.316553        0.0
2019-01-06     0.0  3900.535889  3900.535889        0.0
2019-01-07     0.0  3918.351946  3918.351946        0.0
2019-01-08     0.0  3932.413940  3932.413940        0.0
2019-01-09     0.0  3943.845323  3943.845323        0.0
2019-01-10     0.0  3917.353247  3917.353247        0.0
2019-01-11     0.0  3896.445268  3896.445268        0.0
2019-01-12     0.0  3876.849915  3876.849915        0.0
2019-01-13     0.0  3851.934777  3851.934777        0.0
2019-01-14     0.0  3841.514596  3841.514596        0.0
2019-01-15     0.0  3827.458643  3827.458643        0.0
2019-01-16     0.0  3816.680405  3816.680405        0.0
2019-01-17     0.0  3808.555908  3808.555908        0.0
2019-01-18     0.0  3800.182766  3800.182766        0.0
2019-01-19     0.0  3796.413587  3796.413587        0.0
2019-01-20     0.0  3786.643591  3786.643591        0.0
2019-01-21     0.0  3776.614490  3776.614490        0.0
2019-01-22     0.0  3768.794611  3768.794611        0.0
2019-01-23     0.0  3760.808891  3760.808891        0.0
2019-01-24     0.0  3754.144582  3754.144582        0.0
2019-01-25     0.0  3747.969434  3747.969434        0.0
2019-01-26     0.0  3742.372934  3742.372934        0.0
2019-01-27     0.0  3736.506004  3736.506004        0.0
2019-01-28     0.0  3727.004020  3727.004020        0.0
2019-01-29     0.0  3717.387224  3717.387224        0.0
2019-01-30     0.0  3705.065211  3709.680371        0.0
…            …          …          …        …
2019-12-03     0.0  8139.303778  8350.672074        0.0
2019-12-04     0.0  8067.217824  8338.195445        0.0
2019-12-05     0.0  8001.267056  8324.681624        0.0
2019-12-06     0.0  7941.937231  8313.125919        0.0
2019-12-07     0.0  7898.880573  8295.507233        0.0
2019-12-08     0.0  7855.803442  8278.183023        0.0
2019-12-09     0.0  7798.747340  8262.575270        0.0
2019-12-10     0.0  7747.724289  8244.635676        0.0
2019-12-11     0.0  7692.612742  8225.930954        0.0
2019-12-12     0.0  7638.642797  8206.752102        0.0
2019-12-13     0.0  7589.042447  8190.893042        0.0
2019-12-14     0.0  7541.893538  8175.251465        0.0
2019-12-15     0.0  7493.670814  8159.122492        0.0
2019-12-16     0.0  7436.929603  8141.483051        0.0
2019-12-17     0.0  7379.385776  8118.634824        0.0
2019-12-18     0.0  7347.339473  8102.613207        0.0
2019-12-19     0.0  7318.911065  8084.971233        0.0
2019-12-20     0.0  7304.292666  8070.405356        0.0
2019-12-21     0.0  7300.657530  8064.922082        0.0
2019-12-22     0.0  7304.581391  8065.228863        0.0
2019-12-23     0.0  7315.192130  8043.109003        0.0
2019-12-24     0.0  7321.274835  8010.525266        0.0
2019-12-25     0.0  7323.232927  7971.939519        0.0
2019-12-26     0.0  7313.139935  7937.750000        0.0
2019-12-27     0.0  7307.173811  7901.519506        0.0
2019-12-28     0.0  7291.889211  7869.523975        0.0
2019-12-29     0.0  7286.821037  7839.406482        0.0
2019-12-30     0.0  7282.293541  7806.048696        0.0
2019-12-31     0.0  7277.866329  7769.928041        0.0
2020-01-01     0.0  7273.729391  7735.433461        0.0

[366 rows x 4 columns]
BTC-USD  holdings      cash     total  returns
Date
2019-01-01      0.0       0.0  100000.0  100000.0      NaN
2019-01-02      0.0       0.0  100000.0  100000.0      0.0
2019-01-03      0.0       0.0  100000.0  100000.0      0.0
2019-01-04      0.0       0.0  100000.0  100000.0      0.0
2019-01-05      0.0       0.0  100000.0  100000.0      0.0
BTC-USD  holdings           cash          total  returns
Date
2019-12-28      0.0       0.0  902908.959961  902908.959961      0.0
2019-12-29      0.0       0.0  902908.959961  902908.959961      0.0
2019-12-30      0.0       0.0  902908.959961  902908.959961      0.0
2019-12-31      0.0       0.0  902908.959961  902908.959961      0.0
2020-01-01      0.0       0.0  902908.959961  902908.959961      0.0
Portfolio Plot
Sharpe Ratio 2.2316038135122973
Compound Annual Growth Rate (CAGR) 0.8258754145331861
HODL Gains: 1.8258754145331861

 

Maximum Drawdown for the Period 1/1/2019-12/31/2019

2018

As a follow-up, I was curious about the rough 2018 year for BTC. 2018 was quite bearish, with only a few small rallies, and it had awful buy and hold performance.

HODL Gains: 0.2565408133534354

Short AVG,Long AVG,Best Bank 1 4 498855.46875

Buy and hold would have ended with about a quarter of the starting capital, roughly a 4x loss, while the DMAC would have traded in and out quite often, racking up some fees and slippage of course, but made a 5x gain.

2018 BTC Price with Average and Buy and Sell points (Up Arrow = BUY, Down Arrow = SELL)
2018: Amount of cash made on paper
2018 Drawdown

2018 Results Details on Github

Github repo

https://github.com/erickclasen/DMAC-BTC-Trading-Example-With-Tuning

trying out min-char-rnn and lstm

Text Generation

In early 2018, I started researching machine learning. I was curious about it and was also looking for anything in the machine learning space that could be useful for performing functions in code, specifically trading algorithms. I looked for code that would be easy to get started with and easy to pick apart and understand. I knew it would take time to understand, and I was fine taking some sidetracks into territory that would be interesting to play with.

I quickly came across various versions of text generation code. I won’t get into the theory here as there is a ton of information on it already and I have included my sources as links in the post.

Basically, this post will focus on the use of a well documented version of char-rnn, specifically min-char-rnn, and an improved version that uses an LSTM.

This post is a brief dump on my toying around at text generation with machine learning. I may cover it further in more detail in the future.

To see some cutting edge examples of text generation, that is pretrained on a corpus using GPT-2, see the following post…

GPT-2 the Next Level in Text Generation for now at least

Currently I am trying out  RNN Text Generation using Tensorflow and plan on posting some results in the future…

https://github.com/spiglerg/RNN_Text_Generation_Tensorflow

 

char-rnn code

I started out with Andrej Karpathy's min-char-rnn initially, when I was in machine learning research mode early in 2018. During the early fall of 2018 I found posts by Eli Bendersky that gave a good breakdown of min-char-rnn, including code that had more comments, plus a few other pieces of code. The two other pieces of code were a Markov model text generator and an LSTM extension to min-char-rnn. In theory the LSTM version of min-char-rnn should perform the best, and I wound up with the best results (Python-only code) using this version with modifications. The three modifications were: the ability to save the output text while it was running, the ability to trim the learning rate down progressively while running, and the ability to specify a file name at the command line. I added these after I noticed that the loss was oscillating and not decreasing under certain conditions while playing with the coefficients of the code, such as the layers and the amount of lookback of the characters. Trying to decrease the learning rate can help with this behavior.

Beyond this code, a more sophisticated way to do text generation is to use Lua Torch and run torch-rnn. This requires downloading and installing all of the packages required to run it on Linux.

Comparing the Versions

To start, min-char-rnn's performance can be compared against the lstm version. In this example I am using the US Constitution, as it is a document that is widely available and that many people are familiar with. In the future I will cover running the text from this site through text generation code.

After 99,900 iterations of min-char-rnn on the US Constitution:


iter 99900, loss: 34.083777
 ----
  , Reach witle from to
 the Lice of shall degrest and unccive

athins or propinds, ploovate one shall eptemitlatiall un ligre shall have hake by the Ugiters shall no no be
 as
 writh two 2hs -quals and of

You can see it is trying to pick up on something, and a few of the words are actually legitimate English.

Now for the lstm version created by Eli Bendersky, minimal-character-based-lstm-implementation:

iter 99800 (p=18160), loss 9.844710
 ----
 shall receire ffour Houser, hred in overyof morty Cowcurthir
 such onf
 grate,
 shall then callary, and
 sittin dutler, shall, with an electors
 and Elections,
 which shall be a President.

3. Nuties, Impos

This version shows some hope, as it is forming more structure: better words and parts of sentences. The loss is lower as well.

What if the LSTM version runs longer, like a million cycles?

iter 999000 (p=17184), loss 4.694482
 iter 999200 (p=20384), loss 4.734232
 iter 999400 (p=23584), loss 4.815483
 iter 999600 (p=26784), loss 4.979268
 iter 999800 (p=29984), loss 5.165326
 ----
 shall consisted; but the Congress may by Congress, becommit,
 shall be a Senator or Representative intee any Department or Trust under the Laws Spojgiled to consirques hating been creary shall prioriti
 ----

It is getting a bit better: fewer broken words, and the formation of a sort of paragraph.

How about ten million cycles, where the loss seems to bottom out in the 2.5 range…

iter 10096200 (p=21776), loss 2.487097

iter 10096400 (p=24976), loss 2.517261

iter 10096600 (p=28176), loss 2.605424

iter 10096800 (p=31376), loss 2.556021

----

against the sements whereor who shall return in Consent whations whict: Amend

Mander State. a Treason of Disubility to lis arming the cume aftered thanney. Ir, or Conventions as the lise dusceptraray

I concatenated some more of the output….

to

post a Member in their borth intomie States, the Vice President and denuinned.

Amendment 10

The powers not the twerfth not betilizent of this article the Vicembagion and such Pentitias of the seve

s the United

Stated; under their Party; without the

United States.

Article 36. When mademe Court of the United States which shall not be retsion age State. Andain duty a stanly such Majority, or the

ited States or by any

State.

No Prefered, the

President proviit for President; necestald a majority be a Members, and the Legitlationen for the law of the President, to be the Eduld a Memberd

to ever

ne of the seber to

the Approparal Becomes of Blesident within the United States un nunis primas to which the District hensbill, such Presented by incohringle shall be

tax having

States, and

transmit t

Modifications of min-char-lstm.py

The following was created by modifying the code at https://github.com/eliben/deep-learning-samples/blob/master/min-char-rnn/min-char-lstm.py

The modifications are as follows. First, allowing for a learning rate that starts higher and declines: I was experimenting with the min-char-lstm.py code on the contents of the blog posts on this site and noticed that the loss would decline to a point and then oscillate up and down. By starting with a higher learning rate and then trimming it lower and lower, I was hoping to get the oscillations to settle and achieve a better loss. Second, the code does not print sample text to the screen until the loss has declined substantially (to a quarter of its starting value in the pasted code), and it saves the generated text to a file once the loss drops further (below a sixth of the starting value). Saving the text allows for optional post-analysis, for instance of keywords.

Code for min-char-lstm-mod-2.py is pasted at the bottom of this post.
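One note on the listing: as pasted below, it still uses a fixed learning_rate, so the decay described above would be a small addition inside the training loop. A sketch of the kind of schedule I mean, with illustrative numbers rather than the exact values used for the outputs in this post:

# Sketch of the learning-rate trimming: every DECAY_EVERY iterations, scale
# the rate down until it hits a floor. The numbers are illustrative only.
DECAY_EVERY = 100000
DECAY_FACTOR = 0.9
MIN_RATE = 0.001

if n > 0 and n % DECAY_EVERY == 0:
    learning_rate = max(MIN_RATE, learning_rate * DECAY_FACTOR)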

 

python min-char-lstm-mod-2.py us-constitution.txt
2019-02-21 21:58:36.456591: iter 13238027 (p=48), loss 5.571070
----
  f Hear Porty, or Vice President equal, as
Vares possary, having to nimpost to the President, dofe, for the Senate, the first Meeting of the first Summarma
onle Jonn admius lesments shall exercise they shall not be consthuled for tainamanment mort himal of the President shall consist of a sever Years as of the
United States;

To recass on the Congress may be admints, Cans; the
proprictions herein becom tamy and
Partarittent; or transting Surdsation shall
immentent no State, or abilitives shall b 
----

Saving to file

2019-02-21 21:58:36.568285: iter 13238028 (p=64), loss 5.567595
----
 anstwry by such Vacarciuse
Amdiavion, or other Sestected, and the Congress shall may leibtry days by Contict Biltian convice Vith.

No case, dupa such Penisdatizens prsed the Bildsent of thein be cindent for sitt in Cases of President and Vice President, or altice any Office of either House shall be held in the Bilas, except prohicies and Consuls; to be senict
compected
in this Conments of Congress.

The executive atther during the right of the United States, shall choole the Office of Rewofity  
----

Saving to file





char-rnn – Training Loss and Validation Loss (discussion thread from r/MachineLearning)

Lua Torch

I also experimented with torch-rnn, which uses Lua Torch. It works OK, but produced nothing beyond what I saw with the lstm version above. I only tried it briefly and haven't formed any solid conclusions.

Lua Torch torch-rnn 2 layer 1024 dimensions

https://github.com/jcjohnson/torch-rnn

st keming the Immedince shall
have Power of Not
shall be not
lations for each
shall any
State by the Judigaany state not provided by Casssimate and Repund and Jurtice of the Sequicled in the Unanimed as excleding recrisal of its Consent of the
Mevole shall then the Vice-President, and of the President shall be make Raycesorveny or the right noveranded.

E thas Deleste, by Odfect to the nomes of any Qerfon and ciry of the State

becredo nugr on Condeling and firmine who
haviny this Constitution, but no derso- hivin, Regulation of Vice-Une Tneasor
this BitFzinst entseived.
he fect Coof Presidences, nhish shall be agSent of the Treaso shall gave to behave to the States nor and devermay of the United States; Monor subrected, nor and during the Leecther Year
d aftee the Adjupreit, but in a Memualif or public Ministersss atcerrind ad any Piffost the States connicted to Them thind
ponted by the United States.

S. Pemsud for the
chosen thes shall be a particied and Hays. Wh labth the narge of the Senate, law, ablone Indianty for a dwoun the Eves Motors of liozen Deardors and Elestions and ass ow the Legislatures
shall nake at semoun shall not be require the sunes as ivaly, age unters; and necons al
witn from Oate Members, and accuration of time of titimes, inlarconcancrading one properdyy of the United States, or in which meat
male, in property, sian to the Person having anm notine

Immortizer of having the President.

th onothert commors and Consent, shall apr in this Conviction
may shall, Ligizen aplice—
B.

Smation C
qulication, the first Manger To
theresimant of a pripersonr
Thithit subject dot chimles Tnemeriting the several States, shall be shall be equal States, or in any other Election,
compensation, without the several States; the receita diforme, but nother shall detmanation shall not excerain their Vecessary sexzect juty, or puflis indey
they shall be neach or number in mate been courtion or execuin s co-venty shall not be consugheraty of the Scatem at shall h

min-char-lstm-mod-2.py

# Minimal character-based language model learning with an LSTM architecture.
#
# Overall code structure based on Andrej Karpathy's min-char-rnn model:
#    https://gist.github.com/karpathy/d4dee566867f8291f086
#
# But the architecture is modified to be LSTM rather than vanilla RNN.
# The companion blog post is:
#   https://eli.thegreenplace.net/2018/minimal-character-based-lstm-implementation/
#
# Tested with Python 3.6
#
# Eli Bendersky [http://eli.thegreenplace.net]
# BSD License per original (@karpathy)
from __future__ import print_function

import numpy as np
import sys
import datetime


# Make it possible to provide input file as a command-line argument; input.txt
# is still the default.
if len(sys.argv) > 1:
    filename = sys.argv[1]
else:
    filename = 'input.txt'

with open(filename, 'r') as f:
    data = f.read()

# All unique characters / entities in the data set.
chars = list(set(data))
data_size = len(data)
V = vocab_size = len(chars)
print('data has %d characters, %d unique.' % (data_size, vocab_size))

# Each character in the vocabulary gets a unique integer index assigned, in the
# half-open interval [0:N). These indices are useful to create one-hot encoded
# vectors that represent characters in numerical computations.
char_to_ix = {ch:i for i, ch in enumerate(chars)}
ix_to_char = {i:ch for i, ch in enumerate(chars)}
print('char_to_ix', char_to_ix)
print('ix_to_char', ix_to_char)

# Hyperparameters.

# Size of hidden state vectors; applies to h and c.
H = hidden_size = 100
seq_length = 16 # number of steps to unroll the LSTM for
learning_rate = 0.1

# The input x is concatenated with state h, and the joined vector is used to
# feed into most blocks within the LSTM cell. The combined height of the column
# vector is HV.
HV = H + V

# Stop when processed this much data
MAX_DATA = 1000000

# Model parameters/weights -- these are shared among all steps. Weights
# initialized randomly; biases initialized to 0.
# Inputs are characters one-hot encoded in a vocab-sized vector.
# Dimensions: H = hidden_size, V = vocab_size, HV = hidden_size + vocab_size
Wf = np.random.randn(H, HV) * 0.01
bf = np.zeros((H, 1))
Wi = np.random.randn(H, HV) * 0.01
bi = np.zeros((H, 1))
Wcc = np.random.randn(H, HV) * 0.01
bcc = np.zeros((H, 1))
Wo = np.random.randn(H, HV) * 0.01
bo = np.zeros((H, 1))
Wy = np.random.randn(V, H) * 0.01
by = np.zeros((V, 1))


def sigmoid(z):
    """Computes sigmoid function.

    z: array of input values.

    Returns array of outputs, sigmoid(z).
    """
    # Note: this version of sigmoid tries to avoid overflows in the computation
    # of e^(-z), by using an alternative formulation when z is negative, to get
    # 0. e^z / (1+e^z) is equivalent to the definition of sigmoid, but we won't
    # get e^(-z) to overflow when z is very negative.
    # Since both the x and y arguments to np.where are evaluated by Python, we
    # may still get overflow warnings for large z elements; therefore we ignore
    # warnings during this computation.
    with np.errstate(over='ignore', invalid='ignore'):
        return np.where(z >= 0,
                        1 / (1 + np.exp(-z)),
                        np.exp(z) / (1 + np.exp(z)))


def lossFun(inputs, targets, hprev, cprev):
    """Runs forward and backward passes through the RNN.

      TODO: keep me updated!
      inputs, targets: Lists of integers. For some i, inputs[i] is the input
                       character (encoded as an index into the ix_to_char map)
                       and targets[i] is the corresponding next character in the
                       training data (similarly encoded).
      hprev: Hx1 array of initial hidden state
      cprev: Hx1 array of initial hidden state

      returns: loss, gradients on model parameters, and last hidden states
    """
    # Caches that keep values computed in the forward pass at each time step, to
    # be reused in the backward pass.
    xs, xhs, ys, ps, hs, cs, fgs, igs, ccs, ogs = (
            {}, {}, {}, {}, {}, {}, {}, {}, {}, {})

    # Initial incoming states.
    hs[-1] = np.copy(hprev)
    cs[-1] = np.copy(cprev)

    loss = 0
    # Forward pass
    for t in range(len(inputs)):
        # Input at time step t is xs[t]. Prepare a one-hot encoded vector of
        # shape (V, 1). inputs[t] is the index where the 1 goes.
        xs[t] = np.zeros((V, 1))
        xs[t][inputs[t]] = 1

        # hprev and xs[t] are column vector; stack them together into a "taller"
        # column vector - first the elements of x, then h.
        xhs[t] = np.vstack((xs[t], hs[t-1]))

        # Gates f, i and o.
        fgs[t] = sigmoid(np.dot(Wf, xhs[t]) + bf)
        igs[t] = sigmoid(np.dot(Wi, xhs[t]) + bi)
        ogs[t] = sigmoid(np.dot(Wo, xhs[t]) + bo)

        # Candidate cc.
        ccs[t] = np.tanh(np.dot(Wcc, xhs[t]) + bcc)

        # This step's h and c.
        cs[t] = fgs[t] * cs[t-1] + igs[t] * ccs[t]
        hs[t] = np.tanh(cs[t]) * ogs[t]

        # Softmax for output.
        ys[t] = np.dot(Wy, hs[t]) + by
        ps[t] = np.exp(ys[t]) / np.sum(np.exp(ys[t]))

        # Cross-entropy loss.
        loss += -np.log(ps[t][targets[t], 0])

    # Initialize gradients of all weights/biases to 0.
    dWf = np.zeros_like(Wf)
    dbf = np.zeros_like(bf)
    dWi = np.zeros_like(Wi)
    dbi = np.zeros_like(bi)
    dWcc = np.zeros_like(Wcc)
    dbcc = np.zeros_like(bcc)
    dWo = np.zeros_like(Wo)
    dbo = np.zeros_like(bo)
    dWy = np.zeros_like(Wy)
    dby = np.zeros_like(by)

    # Incoming gradients for h and c; for backwards loop step these represent
    # dh[t] and dc[t]; we do truncated BPTT, so assume they are 0 initially.
    dhnext = np.zeros_like(hs[0])
    dcnext = np.zeros_like(cs[0])

    # The backwards pass iterates over the input sequence backwards.
    for t in reversed(range(len(inputs))):
        # Backprop through the gradients of loss and softmax.
        dy = np.copy(ps[t])
        dy[targets[t]] -= 1

        # Compute gradients for the Wy and by parameters.
        dWy += np.dot(dy, hs[t].T)
        dby += dy

        # Backprop through the fully-connected layer (Wy, by) to h. Also add up
        # the incoming gradient for h from the next cell.
        dh = np.dot(Wy.T, dy) + dhnext

        # Backprop through multiplication with output gate; here "dtanh" means
        # the gradient at the output of tanh.
        dctanh = ogs[t] * dh
        # Backprop through the tanh function; since cs[t] branches in two
        # directions we add dcnext too.
        dc = dctanh * (1 - np.tanh(cs[t]) ** 2) + dcnext

        # Backprop through multiplication with the tanh; here "dhogs" means
        # the gradient at the output of the sigmoid of the output gate. Then
        # backprop through the sigmoid itself (ogs[t] is the sigmoid output).
        dhogs = dh * np.tanh(cs[t])
        dho = dhogs * ogs[t] * (1 - ogs[t])

        # Compute gradients for the output gate parameters.
        dWo += np.dot(dho, xhs[t].T)
        dbo += dho

        # Backprop dho to the xh input.
        dxh_from_o = np.dot(Wo.T, dho)

        # Backprop through the forget gate: sigmoid and elementwise mul.
        dhf = cs[t-1] * dc * fgs[t] * (1 - fgs[t])
        dWf += np.dot(dhf, xhs[t].T)
        dbf += dhf
        dxh_from_f = np.dot(Wf.T, dhf)

        # Backprop through the input gate: sigmoid and elementwise mul.
        dhi = ccs[t] * dc * igs[t] * (1 - igs[t])
        dWi += np.dot(dhi, xhs[t].T)
        dbi += dhi
        dxh_from_i = np.dot(Wi.T, dhi)

        dhcc = igs[t] * dc * (1 - ccs[t] ** 2)
        dWcc += np.dot(dhcc, xhs[t].T)
        dbcc += dhcc
        dxh_from_cc = np.dot(Wcc.T, dhcc)

        # Combine all contributions to dxh, and extract the gradient for the
        # h part to propagate backwards as dhnext.
        dxh = dxh_from_o + dxh_from_f + dxh_from_i + dxh_from_cc
        dhnext = dxh[V:, :]

        # dcnext from dc and the forget gate.
        dcnext = fgs[t] * dc

    # Gradient clipping to the range [-5, 5].
    for dparam in [dWf, dbf, dWi, dbi, dWcc, dbcc, dWo, dbo, dWy, dby]:
        np.clip(dparam, -5, 5, out=dparam)

    return (loss, dWf, dbf, dWi, dbi, dWcc, dbcc, dWo, dbo, dWy, dby,
            hs[len(inputs)-1], cs[len(inputs)-1])


def sample(h, c, seed_ix, n):
    """Sample a sequence of integers from the model.

    Runs the LSTM in forward mode for n steps; seed_ix is the seed letter for
    the first time step, h and c are the memory state. Returns a sequence of
    letters produced by the model (indices).
    """
    x = np.zeros((V, 1))
    x[seed_ix] = 1
    ixes = []

    for t in range(n):
        # Run the forward pass only.
        xh = np.vstack((x, h))
        fg = sigmoid(np.dot(Wf, xh) + bf)
        ig = sigmoid(np.dot(Wi, xh) + bi)
        og = sigmoid(np.dot(Wo, xh) + bo)
        cc = np.tanh(np.dot(Wcc, xh) + bcc)
        c = fg * c + ig * cc
        h = np.tanh(c) * og
        y = np.dot(Wy, h) + by
        p = np.exp(y) / np.sum(np.exp(y))

        # Sample from the distribution produced by softmax.
        #ix = np.random.choice(range(V), p=p.ravel())
        # IX HACK: take the argmax instead of sampling from the distribution,
        # so the most likely character is always chosen.
        ix = p.argmax()
        x = np.zeros((V, 1))
        x[ix] = 1
        ixes.append(ix)
    return ixes


def gradCheck(inputs, targets, hprev, cprev):
    global Wf, Wi, bf, bi, Wcc, bcc, Wo, bo, Wy, by
    num_checks, delta = 10, 1e-5
    (_, dWf, dbf, dWi, dbi, dWcc, dbcc, dWo, dbo, dWy, dby,
     _, _) = lossFun(inputs, targets, hprev, cprev)
    for param, dparam, name in zip(
            [Wf, bf, Wi, bi, Wcc, bcc, Wo, bo, Wy, by],
            [dWf, dbf, dWi, dbi, dWcc, dbcc, dWo, dbo, dWy, dby],
            ['Wf', 'bf', 'Wi', 'bi', 'Wcc', 'bcc', 'Wo', 'bo', 'Wy', 'by']):
        assert dparam.shape == param.shape
        print(name)
        for i in range(num_checks):
            ri = np.random.randint(0, param.size)
            old_val = param.flat[ri]
            param.flat[ri] = old_val + delta
            numloss0 = lossFun(inputs, targets, hprev, cprev)[0]
            param.flat[ri] = old_val - delta
            numloss1 = lossFun(inputs, targets, hprev, cprev)[0]
            param.flat[ri] = old_val # reset
            grad_analytic = dparam.flat[ri]
            grad_numerical = (numloss0 - numloss1) / (2 * delta)
            if grad_numerical + grad_analytic == 0:
                rel_error = 0
            else:
                rel_error = (abs(grad_analytic - grad_numerical) /
                             abs(grad_numerical + grad_analytic))
            print('%s, %s => %e' % (grad_numerical, grad_analytic, rel_error))


def basicGradCheck():
    inputs = [char_to_ix[ch] for ch in data[:seq_length]]
    targets = [char_to_ix[ch] for ch in data[1:seq_length+1]]
    hprev = np.random.randn(H, 1)
    cprev = np.random.randn(H, 1)
    gradCheck(inputs, targets, hprev, cprev)

# Uncomment this to run gradient checking instead of training
#basicGradCheck()
#sys.exit()

# n is the iteration counter; p is the input sequence pointer, at the beginning
# of each step it points at the sequence in the input that will be used for
# training this iteration.
n, p = 0, 0

# Memory variables for Adagrad.
mWf = np.zeros_like(Wf)
mbf = np.zeros_like(bf)
mWi = np.zeros_like(Wi)
mbi = np.zeros_like(bi)
mWcc = np.zeros_like(Wcc)
mbcc = np.zeros_like(bcc)
mWo = np.zeros_like(Wo)
mbo = np.zeros_like(bo)
mWy = np.zeros_like(Wy)
mby = np.zeros_like(by)
smooth_loss = -np.log(1.0/V) * seq_length
best_loss = smooth_loss
# Save the initial loss so that sample printing and file saving can be gated
# at fractions of it (1/4 and 1/6 below).
start_loss = smooth_loss
output_filename = "lstm-output.txt"

print("\nStart Loss:",start_loss)



while p < MAX_DATA:
    # Prepare inputs (we're sweeping from left to right in steps seq_length long)
    if p+seq_length+1 >= len(data) or n == 0:
        # Reset RNN memory
        hprev = np.zeros((H, 1))
        cprev = np.zeros((H, 1))
        p = 0 # go from start of data

    # In each step we unroll the RNN for seq_length cells, and present it with
    # seq_length inputs and seq_length target outputs to learn.
    inputs = [char_to_ix[ch] for ch in data[p:p+seq_length]]
    targets = [char_to_ix[ch] for ch in data[p+1:p+seq_length+1]]

    # Sample from the model now and then.
#    if n % 1000 == 0:
#        sample_ix = sample(hprev, cprev, inputs[0], 200)
#        txt = ''.join(ix_to_char[ix] for ix in sample_ix)
#        print('----\n %s \n----' % (txt,))

    # Forward seq_length characters through the RNN and fetch gradient.
    (loss, dWf, dbf, dWi, dbi, dWcc, dbcc, dWo, dbo, dWy, dby,
     hprev, cprev) = lossFun(inputs, targets, hprev, cprev)
    smooth_loss = smooth_loss * 0.999 + loss * 0.001
#    if n % 200 == 0:
#        print('iter %d (p=%d), loss %f' % (n, p, smooth_loss))

    # Print progress every 1000 iterations until the loss has dropped to a
    # quarter of its starting value; after that, print a sample whenever a
    # new best (lowest) smoothed loss is reached.
    if smooth_loss > (start_loss/4):
        if n % 1000 == 0:
            print('%s: iter %d (p=%d), loss %f' % (datetime.datetime.now(), n, p, smooth_loss))
    elif smooth_loss < best_loss:
        print('%s: iter %d (p=%d), loss %f' % (datetime.datetime.now(), n, p, smooth_loss))
        best_loss = smooth_loss
        sample_ix = sample(hprev, cprev, inputs[0], 500)
        txt = ''.join(ix_to_char[ix] for ix in sample_ix)
        print('----\n %s \n----' % (txt,))

        # Once the loss is below a sixth of its starting value, also append
        # each sampled text to the output file for later analysis.
        if smooth_loss < (start_loss/6):
            print("\nSaving to file\n")
            with open(output_filename, 'a') as file_object:
                file_object.write(txt)
                file_object.write("\n")


    # Perform parameter update with Adagrad.
    for param, dparam, mem in zip(
            [Wf, bf, Wi, bi, Wcc, bcc, Wo, bo, Wy, by],
            [dWf, dbf, dWi, dbi, dWcc, dbcc, dWo, dbo, dWy, dby],
            [mWf, mbf, mWi, mbi, mWcc, mbcc, mWo, mbo, mWy, mby]):
        mem += dparam * dparam
        param += -learning_rate * dparam / np.sqrt(mem + 1e-8)

    p += seq_length
    n += 1


GPT-2 the Next Level in Text Generation for now at least

If you love WordPress you are really going to like this post 🙂

I have played around with OpenAI's GPT-2 a bit and it is a fun text generator. It does a strikingly good job of generating fairly OK text; with a little touch-up, one could create something sensible and readable.

Seeded with "Welcome to WordPress!" (I'm editing in WordPress and this was the first thing that came to mind), below is what OpenAI's GPT-2 machine learning text generator came up with on the first shot. This has not been edited. GPT-2 seems to spell every word correctly and has fair grammar and sentence structure. Even though it can make fair paragraphs, it can wander a bit off topic at times, but it is generally coherent. I have to admit being a bit blown away when I first started playing with it, and this is only the 117M model version available on Github.

So, here’s the part where we start…

Welcome to WordPress!

Model prompt >>> Welcome to WordPress!
======================================== SAMPLE 1 ========================================
There are over 10,000 websites. It’s hard to count them all! And, it really is.

Let’s start with a quick summary of a few websites – WordPress, WordPress Plus, Hosted Blogger, WordPress, blog, etc.:

This is a list that lets you go from list to list and even from list to list. When you click on my name you can see I have 5 pages with 5,000+ blog posts.

It’s very easy to find and read a list on Twitter.

When you visit the Website, you are being asked to navigate through several pages of websites. You can also go to our Website navigation.

Now, let’s go for a little more fun: I hope that you have enjoyed this website and find the posts of other visitors that are looking to help you. It’s been one of those times, like now since the late ’80s, when I was doing it, that people would tell me I was dumb, that I didn’t know how to read a book. “I’d never read a book!” And that would be the beginning.

Now, I know when I read a book, I don’t look like a dumb person. No.

To be honest, I’m used to looking pretty stupid for being dumb. I don’t always have friends in their twenties or thirties that I would have found interesting. And I don’t take myself very seriously when things are looking good.

And I always have friends just as I am, and I don’t do good writing, and, in fact, when I’m not writing, I have friends I think are actually intelligent, but they are totally out of their heads. They are not stupid, so there’s just something about me that I would like to be called an intelligent genius. And they are just out of their brains.

I always go through my twenties and thirties thinking I’m smarter, and I’m going to give them crap and they’re probably right. I go through my twenties and thirties thinking I’m smarter and they’re just not going to.

I see all the people I’m reading these days thinking I’m smart and smart in my twenties and thirties, and I think, “My God? It’s only been a month since I have read a book.” I just think I’m getting the wrong look at it.

For Variety Let’s Go Try Again

This time around it outputs what looks like the intro to a tutorial on building a site, with some clunky HTML at the bottom. Notice the emoticon! I stole a few lines from this output and already wove them in above for fun. If one were to take the text below and hand-edit it, it could be used to kick-start a tutorial. It could break any writer's block you might have and get the ball rolling; see the example right after this text dump.

 

If you love WordPress you are really going to like this post 🙂

How Do I get started making WordPress?

In case you didn’t know WordPress is actually not quite that interesting after all. Even if you’re not trying to go the Web with a web browser.

But in order to succeed, WordPress needs you to be familiar with the various parts of it and if that’s all, you’re going to need some background info!

We’re actually all interested in WordPress on a lot of levels.

So, here’s the part where we start…

Creating a website

For WordPress, you’re going to be using the web for a couple of things.

The first thing you’ll want to do…

Create websites

So, you’ve probably read how the web can be an intimidating tool for beginners.

And so, we want to take you through what it really is.

Our site is a very basic web page.

So, how do we define a site and what do we do with it?

Well, it starts with making an appearance.

It’s called an email address

A website addresses are very powerful because it tells us the number of page views and the total number of views for an email.

Let’s start by getting you to get a feeling of how well this page works.

So what’s the deal with a website?

Let’s take a little detour

Now that you’ve got that impression about the website and get to your first point, we’ll start to explain more about how it functions.

How a website works?

Let’s first get you started with a simple HTML page .

We’ll start with defining our website at a basic level. Let’s go ahead, for the sake of simplicity we’ll take all of the components from the HTML below:

<a href=”https://www.wp.com/v3/wp/”>

<div class=”example-wrapper”>

</div>

<script type=”text/javascript”>

//

“example”

</script>

“http://the.example.com/?page=example.svg”

</a>

<page>

<img src=”https://www.wp.com/v3/wp/”>

</page>

</body>

OK, let's make an intro to a WordPress tutorial out of some of the text above, to see how it can help prime the pump of writing…

If you love WordPress you are really going to like this post 🙂

You might be wondering, how do I get started making a WordPress site? You’ve probably read how WordPress can be an intimidating tool for beginners. But, in order to succeed, you need to be familiar with the various parts of WordPress. You’re going to need some background info!

So what's the deal with a website? Let's start by getting a feeling of how building a page works by looking at how webpages work in the first place. We will take a little detour and get you started with a simple HTML page.

…it is entirely possible to get recursive and keep feeding the model text as you work along in the tutorial to create more ideas to propel the creative process forward. I might go into this further in the future and produce a fuller example; a minimal sketch of the idea appears right after the sample below. For now I post a few cuts from model-produced text below to show off the concept.

How do I get started with WordPress?

Once you have your free, easy to use WordPress site up and running for your WordPress account, you can find some tips and tricks to get started with the site you want to build your own plugin and set up in your WordPress application.

…I liked the line “How do I get started with WordPress?” so I fed that into the model, and the next thing you know I got something that could be the start of the next section of a tutorial…

Model prompt >>> How do I get started with WordPress?
======================================== SAMPLE 1 ========================================

Step 1 – Create your site

Once you have your site, get a copy of the plugin. I use WP Engine as my plugin manager. Once you are installed, you should see a page with WordPress: Setup – Configuration for your content page. After you have done that, click Create a new page for your newly created page. You should see the page with WordPress: Setup – Your site. Click Next to see the configuration section. In the next step, you will create the page as if your site was created with WordPress.

Step 2 – Update your site

Update your site in the WordPress dashboard. If it is installed from the local system of your WordPress installation, your WordPress will be updated. You can click Next to see updates for your site as they were made.

Step 3 – Set up your site

Set up your site by creating a WordPress account, signing in to the WordPress account, setting up your website with WordPress, etc.

Step 4 – Creating your website

Go to the Settings page and click Create Website. You can now create a new page under your WordPress profile. Select your new URL. In this section, you can check if this should work. If not, then you will need to click on the Advanced Button to make an update. Once you have checked if your site has been updated, click Continue. To finish, select Update Site and make a new change to your WordPress site.

Step 5 – Register

Once you have registered with WordPress, sign up for the My Online Account Program, which allows WordPress websites to be easily accessed from any web browser. Register your account with your web hosting provider (AWS) and you should see the instructions below.

Register a domain name

To begin, click on the Registration Check button. If it is still not there, then you will need to click on the “New Username” option or click on the Search bar that appears.

Click on the Search button in one of your applications. It will take you to the section on registration.

Select your domain name. Click OK to confirm that you are registered, it will pop in.

In both the Name and Contact fields below, click Add domain to Register and click the Registration Button, check Yes or OK to confirm that you should see a confirmation page.

Once you have checked in on your domain name, the web browser will ask you to enter your password. Next, follow these steps to change your existing

 

… almost the end for now.
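As a concrete sketch of the recursive feeding idea mentioned above: the samples in this post came from running OpenAI’s gpt-2 code, but the same loop can be expressed with the Hugging Face transformers library. This is an assumption on my part, not what was used for the samples here, and the model name “gpt2” and the sampling settings below are just illustrative:

——————————————————
# A minimal sketch of recursive text generation, assuming the Hugging Face
# transformers library and the small "gpt2" model (not the exact setup used
# for the samples in this post, which came from OpenAI's gpt-2 repo).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "How do I get started with WordPress?"  # the line that worked well above
for rnd in range(3):
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(
        inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        max_new_tokens=100,
        do_sample=True,  # sampling is what gives the varied output shown above
        top_k=40,
        pad_token_id=tokenizer.eos_token_id,
    )
    text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    print("--- sample", rnd + 1, "---")
    print(text)
    prompt = text  # the recursive step: feed the output back in as the next prompt
——————————————————

Each pass appends roughly 100 new tokens and hands the whole thing back to the model, which is exactly the hand-driven loop described earlier, just automated.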

Finally, How about a 3rd Try

Now the model goes way off topic. My guess is that the random seed used when sampling from the model (the weights and biases themselves are fixed after training) happened to steer it off topic right from the start.

<|endoftext|>A few weeks ago, I published an article detailing the first round of the 2016 American College Football Championship. The article was titled “Top 10 Players in Division I College Football,” and it’s still a long walk from having the best-of-the-class selection process of any college in college football. It’s an interesting concept, as it’s a way into the future. However, a lot of people may still think the same thing I’m saying, and I should think twice.

After seeing the rankings at my own site, I decided to share a few details on the 2017 football draft. The Top 10 prospects in Division I, along with a few other players, are included so that this post will highlight what makes the best players in college football.

The top 11 prospects in 2017

I’m making it sound as if our top class could be considered the top 5 in college football now. That’s not really so. I’m sure, we’ll be seeing much of the top 10 players in this Class of 2017 in 2016. The Top 10 players in Division I were ranked by the following: The number of career touchdowns and receiving yards the top 10 plays a season. If you are a member of the College Football Playoff, and you’re a member of Football Bowl Subdivision, I am going to rank you by the following. You are no slouch, but that is, if you rank first in the Big Ten or NFC Division, you might not have the Big Ten/CSN, unless that person, Jim Irsay, is looking at you.

If you are a member of the NCAA, and you’re going to be ranked second or third nationally in each conference, you might not have the highest ranking player in one conference, but maybe a couple of the higher rankings are worth your time and attention. These rankings are based off of two different ways of looking at players going into their careers as Division I college football prospects:

The Football Bowl Subdivision

My current ranking of teams is based on just two different ways:

• Based on how many points the top team is allowed to gain in division I.

• Based on how many points a team is allowed to lose in division I

This is also really well thought out and makes me wonder what the best option (or worse) for the college football players I know in the future may be.

I’m going to list my team’s results in alphabetical order, and I’d like to get

Resources

https://www.lmspulse.com/2019/open-source-artificial-neural-network-gpt/

https://lambdalabs.com/blog/run-openais-new-gpt-2-text-generator-code-with-your-gpu/

Siraj Raval does a good job explaining the technology behind OpenAI’s GPT-2 Text Generator

The Meijin's Retirement Game 52

AlphaGo: Machine Learning and the game Go

This post is basically a list of good resources on AlphaGo and the game Go. There are many fine tutorials out there on the Internet that I have read through to understand more about machine learning and about how AlphaGo functions. I have collected what are, in my opinion, some of the best out there and published them in this post.

Additionally, there is (as of June 2018) an open source version of AlphaGo Zero called LeelaZero (the zero means it started from zero, as in there was no priming with data from human-played games; it is programmed with the rules of Go and just plays against itself repeatedly). It was built by following the paper published by DeepMind that covers the research and development of AlphaGo Zero. It is a formidable player indeed; I can’t even get one point against the monster. As an experiment, I played it against GnuGo to see how GnuGo would fare. GnuGo still gets beat by LeelaZero, at a slower rate than I do, but is able to score some points against LeelaZero on occasion.

GnuGo versus LeelaZero

AlphaGo Zero: How and Why it Works

The post by Tim Wheeler is hands down one of the clearest explanations I have seen. He not only does a great job with this post, he has many other quality posts on his site http://tim.hibal.org

tim.hibal.org/blog/alpha-zero-how-and-why-it-works/

While you are looking at Tim’s post, consider viewing the AlphaGo Cheatsheet as well. Keep them both open and flip between them; it is a great way to learn.

Hi-res AlphaGo Cheatsheet

https://applied-data.science/static/main/res/alpha_go_zero_cheat_sheet.png

Other resources

One Diagram AlphaGoZero

https://medium.com/applied-data-science/alphago-zero-explained-in-one-diagram-365f5abf67e0

The Wikipedia article on Monte Carlo Tree Search is worth a skim if you are not already familiar with it. MCTS is used in game-playing code, both in machine-learning-driven game algorithms and in what I would call pre-machine-learning types. Before machine learning, tree-search approaches were successful mostly for games that have a lower branching factor, such as Chess. MCTS is also used in GnuGo in a mode that plays on a smaller-than-standard board (9×9 and smaller). It is probably limited to a small board by the branching factor, which gets huge as the board size increases. The number of possible board configurations is 3^(n²), where n is the number of lines per side, since each of the n² intersections can be black, white, or empty. A 9×9 board has about 10^38 legal positions and a 19×19 board about 10^170, according to a Wikipedia article that I read.
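As a quick sanity check on those magnitudes, here is a back-of-the-envelope calculation (just a sketch; the legal-position counts are somewhat smaller than the raw 3^(n²) counts because not every configuration can legally arise):

——————————————————
# Back-of-the-envelope check of the board-configuration counts quoted above.
# Each of the n*n intersections can be black, white, or empty: 3^(n^2) total.
import math

for n in (9, 19):
    digits = n * n * math.log10(3)  # log10 of 3^(n^2)
    print(f"{n}x{n}: 3^{n * n} is about 10^{digits:.1f}")

# Prints roughly 10^38.7 for 9x9 and 10^172.2 for 19x19; the *legal* position
# counts (about 10^38 and 10^170) are a bit smaller, since not every
# configuration can actually occur under the capture rules.
——————————————————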

Background

For a good background and a brief history of machine learning, deep reinforcement learning in particular, see the post below. Well worth the read…

Andrej Karpathy Deep Reinforcement Learning: Pong from Pixels

Hands-on Exercise

There is an article on Medium that is worth a read, Teach a machine to learn Connect4 strategy through self-play and deep learning.
Plus it lets you follow along and build the code to get a nice hands-on experience.

LeelaZero: Basically an open source AlphaGo Zero that uses a Java GUI (Lizzie) to play it

LeelaZero, the Go engine, is an easy program to download and compile, at least on Linux, where I had it up and running in about 5 minutes (a rough build sketch is included under the released-code links below). It uses a companion interface, Lizzie, written in Java for its GUI. LeelaZero is interesting and fun to try out. The pondering mode is cool: you hit the space bar and it ponders, showing the next moves’ probabilities of winning and depths of search. You can see from the 1.7% that it is beating me pretty badly after 70 moves; it is almost certain to win.

LeelaZero Pondering

Hovering the mouse pointer over a specific move shows projections of the next moves for both players, labeled with numbers up to its maximum forward game-play estimates.

LeelaZero Future Moves

 

Released Code for LeelaZero

https://github.com/featurecat/lizzie/releases/tag/0.6

https://github.com/leela-zero/leela-zero
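For reference, on Linux the build boiled down to roughly the following. This is a sketch from memory, assuming the CMake flow described in the leela-zero README at the time; check the repo for the current steps and the dependency list (Boost, zlib, OpenCL headers and the like).

——————————————————
# Rough build sketch; see the leela-zero README for current instructions.
git clone https://github.com/leela-zero/leela-zero
cd leela-zero
git submodule update --init --recursive
mkdir build && cd build
cmake ..
make
——————————————————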

Main LeelaZero Page

zero.sjeng.org/

An interesting discussion on LeelaZero

https://lifein19x19.com/forum/viewtopic.php?f=18&t=15631

GnuGo in Emacs

GnuGo can be played within Emacs, which is handy. This is what I did when I played GnuGo against LeelaZero: I forced LeelaZero to play black so it went first, then mirrored its move onto the GnuGo board and GnuGo’s move back onto the Lizzie/LeelaZero board. When I looked at the project’s code it appears development stopped around 2009. At the time it was a fairly good computer Go player, but since then others have outpaced its strength. From what I recall it is in the 900 Elo range for strength.

If you have the GUI version of Emacs and GnuGo installed, then with Emacs open, pressing Alt-X and entering gnugo in the Emacs command buffer will open GnuGo within Emacs. The benefit of this is that you can use the mouse or the up and down arrows to play instead of entering coordinates at the command line.

www.gnu.org/software/gnugo/gnugo_3.html#SEC27

General Go Resources

https://en.wikipedia.org/wiki/Book:Go:_The_Board_Game

https://en.m.wikipedia.org/wiki/Computer_Go

https://en.m.wikipedia.org/wiki/Rules_of_Go

https://en.m.wikipedia.org/wiki/John_Horton_Conway

Getting CGI and Perl scripts up and running on Minimal Ubuntu

I was trying, again, to get this up and running. I have a piece of code, notestack-example.cgi, that uses Perl and the Perl module for CGI. I had this working after fiddling with it the first time I flashed the SD card for a Pine 64 set up with minimal Ubuntu Xenial.

The problem is that I wrote down only sketchy instructions and had to figure it out again. After flashing another card (the first one had a slow-moving corruption that would cause the machine to halt after a while), I got clearer on the process.

I have posted it here for myself, in case I get stuck again, and for anyone else who might need to know the process. It is a rough outline. I copied the commands I issued from the history and added comments, along with some test code that I had on a Raspberry Pi, which has been running notestack-example.cgi among other items for years, so that was my baseline.

Outline getting CGI and Perl running for Apache

In this outline it is assumed that Apache2 is installed and configured.

Optional: To make it easier to get to the cgi directory from your home, create a symlink to it in the user’s home directory.

ln -s /usr/lib/cgi-bin cgi

Enable CGI support for Apache…
sudo a2enmod cgi

Modify /etc/apache2/sites-enabled/000-default.conf

The following block enables CGI in the Apache site config. It was taken from the Raspberry Pi and added into /etc/apache2/sites-enabled/000-default.conf, right above the </VirtualHost> line.

——————————————————
ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
<Directory "/usr/lib/cgi-bin">
AllowOverride None
Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
Order allow,deny
Allow from all
</Directory>

——————————————————

Test with a simple BASH CGI script

Use the following code as printenv.cgi, in the /usr/lib/cgi-bin directory, to test basic CGI with a bash script.
——————————————————
#!/bin/bash -f
echo "Content-type: text/html"
echo ""
echo "<PRE>"
env || printenv
echo "</PRE>"
exit 0
——————————————————

Test it with the following; the script needs 755 (+x) permissions. This file is from the RPi and was used years ago to test it out.

sudo nano printenv.cgi
sudo chmod +x printenv.cgi
./printenv.cgi
If all is well this will also be accessible from the web…
curl http://localhost/cgi-bin/printenv.cgi

curl is installed in the minimal build.

If you are using this on your own intranet, it is fine to leave in place. If the machine is accessible from the web, get rid of printenv.cgi so the whole world can’t see the output if it gets found.

 

Test Perl

Test Perl scripts next. Normally Perl is installed even in the minimal Ubuntu build; its presence can be tested for by using ‘which perl’.
Use the following code as perl-test.cgi
in the /usr/lib/cgi-bin directory.
——————————————————
#!/usr/bin/perl
print "Content-type: text/html\n\n";
print "Hello CGI\n";
——————————————————

sudo nano perl-test.cgi
sudo chmod 755 perl-test.cgi
./perl-test.cgi

Does it work via Apache?
curl http://localhost/cgi-bin/perl-test.cgi

Perl CGI Module

Get Perl scripts that require the CGI Perl module (any Perl code that uses CGI.pm) running and web accessible…

Got ERROR #1, a missing CGI.pm, followed by ERROR #2, a complaint about a missing CGI/Util.pm; see below. The first error appeared when the Perl module was missing entirely; a similar error showed up after installing CGI.pm.

ERRORS 1 and 2
Can't locate CGI.pm in @INC (you may need to install the CGI module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.22.1 /usr/local/share/$
BEGIN failed--compilation aborted at ./notestack-example.cgi line 36.
Grabbed the CGI.pm and Util.pm from my Raspberry Pi by searching for them…

sudo find / -name CGI.pm

sudo find / -name Util.pm

and copying them using rcp to /tmp on the board I was setting up, a Pine 64 in this case.

rcp /usr/share/perl/5.14.2/CGI.pm ubuntu@192.168.1.31:/tmp
rcp /usr/share/perl/5.14.2/CGI/Util.pm ubuntu@192.168.1.31:/tmp

The code I was trying to run, the goal of all this work, was a script named notestack-example.cgi.
sudo cp /tmp/CGI.pm /etc/perl
./notestack-example.cgi      <-- Got ERROR #1
sudo cp /tmp/Util.pm /etc/perl/
./notestack-example.cgi      <-- Got ERROR #2
sudo mkdir /etc/perl/CGI
sudo mv /etc/perl/Util.pm /etc/perl/CGI

Final error, ERROR 3

Can't use 'defined(@array)' (Maybe you should just omit the defined()?) at /etc/perl/CGI.pm line 528.
Compilation failed in require at ./notestack-example.cgi line 36.
BEGIN failed--compilation aborted at ./notestack-example.cgi line 36.

This one requires removing the defined(), as that usage is old and not compatible with the current version of Perl.
I just removed the defined on line 528…

ubuntu@pine64:~$ diff /etc/perl/CGI.pm /tmp/CGI.pm
528c528
< if (@QUERY_PARAM && !$initializer) {
---
> if (defined(@QUERY_PARAM) && !defined($initializer)) {

Learned about this trick of removing the ‘defined’ from…
https://github.com/shenlab-sinai/diffreps/issues/6
https://rt.cpan.org/Public/Bug/Display.html?id=79917
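Once the module loads cleanly, a minimal CGI.pm script can confirm it end to end, in the same spirit as perl-test.cgi above. This is just a sketch; the cgi-pm-test.cgi name is made up for illustration, and it goes in /usr/lib/cgi-bin like the others.

——————————————————
#!/usr/bin/perl
# cgi-pm-test.cgi - minimal check that CGI.pm loads and can emit a header.
use CGI;

my $q = CGI->new;               # this line fails if CGI.pm cannot be found
print $q->header("text/html");  # prints the Content-Type header and blank line
print "CGI.pm version $CGI::VERSION is working\n";
——————————————————

Make it executable with chmod 755 as before, run it directly, then check it through Apache with curl http://localhost/cgi-bin/cgi-pm-test.cgi.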

New York and Pennsylvania Heavy Rain and Flood Events July and August 2018

In July and August 2018 the area where I live went from quite dry to drenched in a matter of weeks. It was a close call for me; many folks in the surrounding areas were affected by the floods. A few people that I spoke to stated that the flooding was worse in 2018 than during the previous river floods of 2011 and 2006. This flood event was mostly a small-stream and creek flooding event, not a main-stem river flood event. The flooding came on hard and fast as the rains seemed to train over the area repeatedly. The outlook for September still shows some above-average precipitation but, thankfully, we have had a tame hurricane season so far.

NWS Summary of Rain Events for July and August 2018

The National Weather Service out of Binghamton did a nice summary of the rain events and posted it on their site.

https://www.weather.gov/bgm/pastFloodJulyAugust2018

Aerial View of the Flooding near where I live

http://heart-centered-living.org/wellsburg-flooding/