
Computer analyses in sports betting

Using computers to generate profits is perfectly normal on the stock market. Wouldn't it be great to use computers for sports betting as well? What would it be like if the PC carried out the analyses? Imagine if everything ran automatically and the profits were made while you slept, or if software at least indicated the perfect time to bet money on this or that sporting event. Apart from being practical, the time factor naturally also plays a role: while a person only looks at what's going on every now and then and reacts afterwards, the computer is always ready and reacts in fractions of a second.

Computer Analysis: Illegal? Morally reprehensible?


Computer analyses for sports betting or online football betting tips: odds, of course, are calculated automatically. Nobody sits there and recalculates the odds by hand after each bet; imagine what would happen if they miscalculated. Artificial intelligence has long since arrived in the field of sports betting. Anyone who decides to use computer programs for help is not acting illegally, because it is not prohibited. The moral side everyone must judge for themselves. Leaving aside the question of where such software comes from and what it costs, it is likely to be used.

Machine remains machine

Good software for computer analyses would certainly be worth its weight in gold. What counts as good is a matter of interpretation, but a computer is only as good as its programming. It cannot process information such as current events – the illness of a football player, for example. An automated system is only helpful if the betting strategy is based purely on odds. On the other hand, none of this is so complicated and time-consuming that spending an enormous amount of money on software would be worthwhile. This is proven by the following strategies, which pay particular attention to the level of the odds:

Strategies based above all on odds

For people who prefer to play it safe, the following strategy is suitable: if you place one bet at odds of 1.1 every day and win each time, you have multiplied your budget by a factor of 2.59 after ten days. Starting from 10 euros, the bankroll grows as follows (a short code check follows the table):

10 days = 25.90 Euro
20 days = 67.30 Euro
30 days = 174.50 Euro
40 days = 452.60 Euro
50 days = 1173.90 Euro
60 days = 3044.80 Euro
70 days = 7897.50 Euro
80 days = 20484.00 Euro
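
For illustration, the table is simply the compound growth of 10 euros at a daily factor of 1.1, assuming every single bet wins; a quick Perl check (rounding differs slightly from the figures above):

use strict;
use warnings;

# 10 euros compounded at odds of 1.1, one winning bet per day
printf "%2d days = %.2f Euro\n", $_, 10 * 1.1 ** $_ for map { $_ * 10 } 1 .. 8;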

In the event of a loss, you need about ten winning bets to recoup the money. To reduce the risk, you combine the system with a sure-bet strategy and halve the bankroll every 20 days: half is set aside as secured profit, the other half keeps working. If you then lose, you only fall back about ten days and the money can continue to work, for example (this time with a starting capital of just 1.00 euro; a small simulation follows the table):

10 days = 2.59 Euro
20 days = 6.73 Euro
=> 3.36 Euro secured, play on with 3.37 Euro
30 days = 8.72 Euro
40 days = 22.63 Euro
=> 11.31 Euro secured, play on with 11.32 Euro
50 days = 29.35 Euro
60 days = 76.12 Euro
=> 38.06 Euro secured, play on with 38.06 Euro
70 days = 98.72 Euro
80 days = 256.05 Euro
=> 128.02 Euro secured, play on with 128.03 Euro
90 days = 332.06 Euro
100 days = 861.29 Euro
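
The same progression, including securing half of the bankroll every 20 days, can be simulated in a few lines of Perl (again assuming every bet wins; rounding may differ slightly from the table above):

use strict;
use warnings;

my ($bankroll, $secured) = (1.00, 0);

for my $day (1 .. 100) {
    $bankroll *= 1.1;             # one winning bet per day at odds of 1.1

    if ($day % 20 == 0) {         # every 20 days: secure half, keep playing with the rest
        $secured  += $bankroll / 2;
        $bankroll /= 2;
    }

    printf "%3d days = %.2f Euro in play, %.2f Euro secured\n",
        $day, $bankroll, $secured
        if $day % 10 == 0;
}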

The big advantage: once you reach 4000 euros, 2000 euros can be paid out as profit while the remaining 2000 euros double again within the next ten days. In addition, this strategy can be used in almost all sports.

Betting against late goals

This strategy is good if you are looking for bets with a high hit rate, good odds and the shortest possible betting period. The system assumes that in certain matches no goal is scored in the last five minutes of the game. First you should look at the character of the game: knockout games and games with a clear favourite are ignored.

Then you look at the course of the game up to the 85th minute. The following scenarios and assumptions exist:

1. Favourite leads by one goal – no further goal to be expected
2. Favourite leads by two goals or more – no further goal to be expected
3. Outsider leads – a further goal is likely
4. Draw – assess whether both teams could be satisfied with a draw (not the case for title aspirants or relegation candidates) – if so, no further goal
5. At least one sending-off – a further goal is probable

Then take a look at the season statistics of the teams involved: how often did they score in the last five minutes, and how often did their goalkeeper have to pick the ball out of the net in the closing minutes? If all the conditions are met, the bet should be placed in the 85th minute.
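
The rules above can be summed up as one small decision function. This is only a sketch; the field names and the example data are illustrative, not taken from any real feed or API:

use strict;
use warnings;

# Decide at the 85th minute whether "no further goal" looks reasonable,
# following the rules above.
sub no_late_goal_expected {
    my ($game) = @_;

    # Skip knockout games and games with a clear favourite
    return 0 if $game->{knockout} || $game->{clear_favourite};

    # A sending-off makes a further goal probable
    return 0 if $game->{sendings_off};

    my $lead = $game->{favourite_goals} - $game->{outsider_goals};

    return 1 if $lead >= 1;                    # favourite leads: no further goal expected
    return 0 if $lead < 0;                     # outsider leads: another goal is likely
    return $game->{draw_suits_both} ? 1 : 0;   # draw: only if both teams can live with it
}

# Example: favourite 1:0 up, no red cards, not a cup tie
print no_late_goal_expected({
    knockout        => 0,
    clear_favourite => 0,
    sendings_off    => 0,
    favourite_goals => 1,
    outsider_goals  => 0,
}) ? "bet in the 85th minute\n" : "skip this game\n";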

Timing matters

Good timing is essential for this strategy. It is also handy to know the football data by heart so you don't have to look everything up first. The right site for that can be found here.

 

SQL::Composer – mapping SQL from Perl and back

SQL::Composer is yet another SQL mapper, but unlike the others it does something very useful: it allows you not only to build SQL from a Perl structure, but also to map the data you get back from the database into a usable Perl structure. We used it for this horse racing betting guide.

Sometimes you don't need an ORM, but you would like an SQL builder that hides escaping, allows you to specify joins and returns a usable Perl structure when fetching data from the database. SQL::Composer does exactly that.

For several years I had been waiting for a module that would do this, but unfortunately all the modules I tried lack join support, at the very least, and I use joins a lot. They are also tightly coupled to other ORMs or require a lot of manual parsing. So I wrote my own module, and several people have found it handy. So here it goes.

Let's start with the most advanced example, which shows off the best part of SQL::Composer. Suppose we have Review -> Book -> Author tables and want to fetch a review with all the related information.

my $expr = SQL::Composer::Select->new(
    from    => 'review',
    columns => ['text'],
    join    => [
        {
            source  => 'book',
            columns => ['title'],
            on      => [id => {-col => 'review.book_id'}],
            join    => [
                {
                    source  => 'author',
                    columns => ['name'],
                    on      => [id => {-col => 'book.author_id'}]
                }
            ]
        }
    ],
    where => [id => 1]
);

Let’s generate SQL:

my $sql = $expr->to_sql;

# SELECT
#     `review`.`text`,
#     `book`.`title`,
#     `author`.`name`
# FROM `review`
# JOIN `book` ON `book`.`id` = `review`.`book_id`
# JOIN `author` ON `author`.`id` = `book`.`author_id`
# WHERE `review`.`id` = ?

And get the bind values:

my @bind = $expr->to_bind;

After fetching an ARRAYREF from DBI, we get back a correctly mapped HASHREF that can either be used as is or be converted into an object very easily (simple, tiny Perl value objects without additional functionality; a small sketch of this follows further below).

my $sth = $dbh->prepare($sql);
my $rv  = $sth->execute(@bind);

my $rows = $sth->fetchall_arrayref;

my $objects    = $expr->from_rows($rows);
my $row_object = $objects->[0];

If $rows is something like:

[['Good', 'Perl programming', 'YAPH']]

then we will get:

# [
#     {
#         'book' => {
#             'title'  => 'Perl programming',
#             'author' => {
#                 'name' => 'YAPH'
#             }
#         },
#         'text' => 'Good'
#     }
# ];

This structure is intuitive and maps perfectly to hash-based Perl objects. If you do not want joins to be embedded into other joins, just don't nest them:

my $expr = SQL::Composer::Select->new(
    from    => 'review',
    columns => ['text'],
    join    => [
        {
            source  => 'book',
            columns => ['title'],
            on      => [id => {-col => 'review.book_id'}],
        },
        {
            source  => 'author',
            columns => ['name'],
            on      => [id => {-col => 'book.author_id'}]
        }
    ],
    where => [id => 1]
);

And we will get:

# [
#     {
#         'book' => {
#             'title' => 'Perl programming',
#         },
#         'author' => {
#             'name' => 'YAPH'
#         },
#         'text' => 'Good'
#     }
# ];
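
As mentioned earlier, the mapped hashrefs can easily be wrapped in tiny value objects. Here is a minimal sketch of what that could look like; the Review class is purely illustrative and not part of SQL::Composer:

package Review;

# A tiny value object: just accessors around the mapped hashref,
# no additional behaviour.
sub new  { my ($class, %args) = @_; bless {%args}, $class }
sub text { $_[0]->{text} }
sub book { $_[0]->{book} }

package main;

# $expr and $rows are the variables from the example above
my $review = Review->new(%{ $expr->from_rows($rows)->[0] });
print $review->book->{title}, "\n";    # "Perl programming"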

Besides joins and data mapping, SQL::Composer supports all the usual expressions you would find in other SQL builders:

[foo => 'bar']             => "`foo` = ?",   ['bar']
[foo => {'!=' => 'bar'}]   => "`foo` != ?",  ['bar']
[foo => {-col => 'bar'}]   => "`foo` = bar", []

It also supports SQL expressions that are sometimes hard to build, like:

[created => {'>' => \['ADD_DATE(NOW(), INTERVAL ? SECOND)', 10]}]

# "created > ADD_DATE(NOW(), INTERVAL ? SECOND)", [10]

I have been using SQL::Composer in ObjectDB for quite a while, and it works perfectly well in production with real-world data and problems (provided that, like me, you try to keep the amount of logic in SQL queries down). The main focus is on joins and related objects, since with a "good enough" normalization you end up with a lot of joins. But as I stated earlier, some people may want to use SQL::Composer directly, since it is very handy and removes a lot of boilerplate from your Perl code.

One can say that writing raw SQL is more readable. Yes, that is true for very complex queries. But most of the time you need some kind of automation, argument injection and so on, and you end up implementing some kind of SQL builder yourself. It is also easier to map data coming back from the database when the query is available as a structure, rather than by parsing SQL, which is neither easy nor very portable.

App::chronos – automatically record your computer activities

Often you want to know how much time you spend on various computer activities during your work or home day. There are lots of apps that let you record the time, but unfortunately you have to turn them on and off manually, and it can be really frustrating when you forget to do so. So I have written an app that does it automatically.

chronos listens for X11 window switches and records how much time you have spent in every application. It runs a set of filters that guess the type of the application and its name. Moreover, if an application can answer more than the question "what type am I" and can additionally provide details such as a visited URL or the contact you are chatting with, then the filter can parse that information and add it to the log.

Activity details

As previously said, the filters can parse additional information. For example, right now, if the application is a Firefox or Chromium browser, the currently visited URL is detected by parsing the current browser sessions. In the case of Skype or Pidgin, for example, the current contact name is detected.

Output

chronos prints the events to stdout, so the log can easily be redirected to any file you like. The format is simple: one JSON object per line, with UNIX epoch timestamps. For example:

{
   "_end" : 1412750698,
   "_start" : 1412750693,
   "application" : "Chromium",
   "category" : "browser",
   "class" : "\"Chromium\", \"Chromium\"",
   "command" : "",
   "id" : "0x4a00048",
   "name" : "\"reddit: the front page of the internet - Chromium\"",
   "role" : "\"browser\"",
   "url" : "www.reddit.com"
}

The JSON object has several fields. role, class, name and command are recorded from X11 and saved as is. A filter could, for example, detect which command line program I am running (say, vim) and which file I am working on.

Reporting

Reading and analyzing the log file by hand isn't very convenient; that is where the report command steps in.

As you already know, an event is a JSON object with various fields. The report tool can search through those events, group them and sort the results by the time spent on them.

Show top 10 visited URLs:

$ chronos report --fields 'url' --where '$category eq "browser"' --group_by 'url' log_file | head -n 10

00d 00:18:27 url=www.youtube.com
00d 00:05:29 url=github.com
00d 00:01:59 url=twitter.com
00d 00:01:25 url=code.google.com

Here I am showing only the url field, filtering on the browser category and grouping by url.

Using --where and --group_by, various useful reports can be produced, specific to your needs.

--where syntax

As you may have noticed, the --where option has a Perl-like syntax. It is actually eval-ed into a Perl subroutine that is then run on every event, so the where clause can be as elaborate as needed.
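
The idea can be illustrated with a short sketch of how such an expression could be compiled into a predicate. This is illustrative only, not the actual App::Chronos implementation:

use strict;
use warnings;

# Hypothetical events, shaped like the JSON lines chronos writes
my @events = (
    { category => 'browser', url => 'www.reddit.com' },
    { category => 'im',      contact => 'vti' },
);

# The --where expression as a user would type it
my $where = '$category eq "browser"';

# Rewrite $field references into lookups in the event hash, then eval
# the result into an anonymous subroutine.
(my $code = $where) =~ s/\$(\w+)/\$event->{$1}/g;
my $predicate = eval "sub { my (\$event) = \@_; $code }";
die "Invalid --where expression: $@" if $@ || !$predicate;

my @matched = grep { $predicate->($_) } @events;
printf "%d event(s) matched\n", scalar @matched;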

--from and --to

Timeout

To configure how long chronos sleeps before recording any activity, use the --timeout option.

Idle time

chronos also detects idle time and stops recording the activity. Idle time is detected by running xprintidle and comparing its output to the --idle_timeout option, which is 5 minutes by default. So if you don't type anything or move your mouse for 5 minutes, the previous activity is considered to have ended.
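
In essence the idle check boils down to something like the following sketch (illustrative only, not the actual App::Chronos code; xprintidle prints the X11 idle time in milliseconds):

use strict;
use warnings;

my $idle_timeout = 5 * 60;    # seconds, the default --idle_timeout

chomp(my $idle_ms = `xprintidle` // '');
if ($idle_ms =~ /^\d+$/ && $idle_ms / 1000 >= $idle_timeout) {
    print "idle for 5+ minutes: the previous activity is considered ended\n";
}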

Flushing

Various bad things can happen during recording, such as a power outage or the chronos process being killed accidentally. To be more robust, chronos periodically flushes the current activity to the log file; this can be configured with the --flush_timeout option. That way you won't lose an event you have been recording for several hours.

Contributing

Different people use different applications. I cannot write filters for every application out there, so if you use chronos and want an application and its details to be parsed, just write a filter package. It's as simple as:

package App::Chronos::Application::Skype;

use strict;
use warnings;

use base 'App::Chronos::Application::Base';

sub run {
    my $self = shift;
    my ($info) = @_;

    # It's not a Skype application
    return
      unless $info->{role} =~ m/ConversationsWindow/
      && $info->{class} =~ m/Skype/
      && $info->{name} =~ m/Skype/;

    # Yay, it's Skype, let's parse the contact name
    $info->{application} = 'Skype';
    $info->{category} = 'im';
    ($info->{contact}) = $info->{name} =~ m/^"(?:\[\d+\])?(.*?) - Skype/;

    return 1;
}

1;

Tips & tricks

I personally have a bash script that combines several reports:

#!/bin/sh

LOG_FILE=$1
LIMIT=10
COMMAND="perl -Ilib script/chronos"

echo 'Top categories:'
$COMMAND report --fields 'category' --group_by 'category' $LOG_FILE
echo
echo "Top $LIMIT talks:"
$COMMAND report --fields 'contact' --where '$category eq "im"' --group_by 'contact' $LOG_FILE | head -n $LIMIT
echo
echo "Top $LIMIT URLs:"
$COMMAND report --fields 'url' --where '$category eq "browser"' --group_by 'url' $LOG_FILE | head -n $LIMIT
echo
echo 'Idle time:'
$COMMAND report --where '$idle' $LOG_FILE

And then:

$ ./report.sh log_file | mail -s Activities vti

Mixins in Perl

If you want to use mixins in Perl, you don't have to install anything or play with the symbol table yourself. It's right there, in the core.

Mixins are basically classes that are not meant to be instantiated; their role is to embed methods into your class. They are seen as an alternative to multiple inheritance and are something like a stripped-down version of Roles.

To embed methods you can use plain old simple Exporter!

package MyMixin;
use parent 'Exporter';

use feature 'say';

our @EXPORT_OK = qw(log);

sub log {
    my $self = shift;

    # Yes, this works too!
    $self->some_internal_method;

    say @_;
}

package SomeClassElsewhere;

use MyMixin 'log';

sub do_stuff {
    my $self = shift;

    $self->log('it is here!');
}

sub some_internal_method {
    my $self = shift;
}

KISSing is nice!

Publishr — publish everywhere

For http://pragmaticperl.com I needed a tool to post new issue announcements to several social networks. It ended up supporting Facebook, Twitter, LiveJournal, VK, Email, IRC, Jabber/XMPP, Skype and more.

Where to get it?

http://github.com/vti/publishr.

How to run

From the git repository:

perl -Ilib script/publishr --config publishr.json message.txt

Where message.txt looks like:

Status: This is the short title
Link: http://link-to-the-press-release
Image: /path/to/image.jpg
Tags: perl, pragmaticperl, journal

The
multiline
body

Of course, every social network supports a different kind of message. This is handled by so-called channels. For example, for Twitter publishr only uses Status, Link and Image.

The publishr.json configuration file looks like:

{
   "access" : [
      {
         "name" : "twitter access #1",
         "options" : {
            "access_token" : "",
            "access_token_secret" : "",
            "consumer_key" : "",
            "consumer_secret" : ""
         },
         "type" : "twitter"
      }
   ],
   "scenarios" : [
      {
         "access" : "twitter access #1",
         "name" : "post to pragmaticperl twitter",
         "options" : {}
      }
   ]
}

access is a list of channel credentials. You name them as you like, provide the required options and use them in scenarios. This is done so you can reuse the same access tokens in different scenarios, like posting to different Facebook groups.

In scenarios, the options key can carry additional settings, such as an IRC channel.

Custom commands

Sometimes you just need to run a custom CLI program. For example, this is how sending to Skype is done: in the util directory you can find a skype-chat.py Python script which uses the Skype4Py library. In order to call that script you configure a cmd scenario:

{
    "name":"skype",
    "access":"cmd",
    "options":{
        "env":{
            "PYTHONPATH":"/path/to/skype4py/"
        },
        "cmd":"./util/skype-chat.py 'Skype Chat' '%status% %link%'"
    }
}

Where %status% and %link% are replaced by values from the message.txt.

Running only specific scenarios or channels

Sometimes you may want to run just a specific scenario or channel; there are options for this:

perl -Ilib script/publishr --config publishr.json \
    --scenario 'post to twitter' message.txt
perl -Ilib script/publishr --config publishr.json \
    --channel 'facebook' message.txt