Commit 2166036

Merge pull request #11 from NiklasGustafsson/main
Updating tutorials.
2 parents: 55cfea9 + 2f97ec2

File tree: 10 files changed (+152 additions, −14 deletions)

src/CSharp/CSharpExamples/CSharpExamples.csproj

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@
   </ItemGroup>
 
   <ItemGroup>
-    <PackageReference Include="TorchSharp-cpu" Version="0.95.1" />
+    <PackageReference Include="TorchSharp-cpu" Version="0.95.3" />
   </ItemGroup>
 
   <ItemGroup>

src/CSharp/Models/Models.csproj

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@
   </PropertyGroup>
 
   <ItemGroup>
-    <PackageReference Include="TorchSharp-cpu" Version="0.95.1" />
+    <PackageReference Include="TorchSharp-cpu" Version="0.95.3" />
   </ItemGroup>
 
 </Project>

src/FSharp/FSharpExamples/FSharpExamples.fsproj

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@
   </ItemGroup>
 
   <ItemGroup>
-    <PackageReference Include="TorchSharp-cpu" Version="0.95.1" />
+    <PackageReference Include="TorchSharp-cpu" Version="0.95.3" />
   </ItemGroup>
 
   <ItemGroup>

src/Utils/Examples.Utils.csproj

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@
   <ItemGroup>
     <PackageReference Include="Newtonsoft.Json" Version="13.0.1" />
     <PackageReference Include="SharpZipLib" Version="1.3.3" />
-    <PackageReference Include="TorchSharp-cpu" Version="0.95.1" />
+    <PackageReference Include="TorchSharp-cpu" Version="0.95.3" />
   </ItemGroup>
 
 </Project>

tutorials/CSharp/tutorial1.ipynb

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "All the tutorial notebooks will rely on the CPU package, since that takes up the least amount of disk space and works everywhere. If you have access to a CUDA processor, replace the package name with the applicable Windows or Linux package from NuGet (TorchSharp-cuda-windows and TorchSharp-cuda-linux, respectively)."
+    "All the tutorial notebooks (with the exception of the one that covers CUDA) will rely on the CPU package, since that takes up the least amount of disk space and works everywhere. If you have access to a CUDA processor, replace the package name with the applicable Windows or Linux package from NuGet (TorchSharp-cuda-windows and TorchSharp-cuda-linux, respectively)."
    ]
   },
   {
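For reference, acting on the notebook's advice to swap in a CUDA package is a one-line change to any of the project files in this commit. A sketch of what the edited ItemGroup might look like on Windows (the package name comes from the notebook text; keeping the version in step with the CPU package is an assumption):

```xml
<!-- Hypothetical csproj fragment: the CPU reference swapped for the
     Windows CUDA package mentioned in the notebook. -->
<ItemGroup>
  <PackageReference Include="TorchSharp-cuda-windows" Version="0.95.3" />
</ItemGroup>
```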

tutorials/CSharp/tutorial2.ipynb

Lines changed: 70 additions & 1 deletion
@@ -409,7 +409,9 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "There is no way to make `arange` produce anything but a 1D tensor, so a common thing to do is to reshape it as soon as it's created:"
+    "## reshape()\n",
+    "\n",
+    "There is no way to make `arange` produce anything but a 1D tensor, so a common thing to do is to reshape it as soon as it's created."
    ]
   },
   {
@@ -445,6 +447,73 @@
    "torch.arange(3.0f, 5.0f, step: 0.1f).reshape(4,5).str(\"0.00\")"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "`reshape()` is, of course, useful for many other things, too, not just shaping the result of `arange()`. One thing that is very useful to know is that you can pass in '-1' for __one__ of the dimensions, and it has a very special meaning. Let's look at an example:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "dotnet_interactive": {
+     "language": "csharp"
+    }
+   },
+   "outputs": [],
+   "source": [
+    "t = torch.rand(3,4,4,4);\n",
+    "t.reshape(12, 4, 4).ToString()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "dotnet_interactive": {
+     "language": "csharp"
+    }
+   },
+   "outputs": [],
+   "source": [
+    "t.reshape(-1,4,4).ToString()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "dotnet_interactive": {
+     "language": "csharp"
+    }
+   },
+   "outputs": [],
+   "source": [
+    "t.reshape(4,-1,6).ToString()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As you can see, -1 is a wildcard. After the rest of the arguments specify their respective sizes, the -1 dimension is determined from the overall number of elements and the dimensions that have been specified. Obviously, it can only be used to construct a proper tensor if the other dimensions are correct. This, for example, results in an exception:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "dotnet_interactive": {
+     "language": "csharp"
+    }
+   },
+   "outputs": [],
+   "source": [
+    "t.reshape(4,-1,5).ToString()"
+   ]
+  },
   {
    "cell_type": "markdown",
    "metadata": {},

tutorials/CSharp/tutorial5.ipynb

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@
    "\n",
    "This tutorial is the only one that does not use the 'TorchSharp-cpu' package. When you have a machine with a GPU that supports CUDA programming, you can use either 'TorchSharp-cuda-windows' or 'TorchSharp-cuda-linux' depending on your operating system. There is no CUDA distribution for MacOS. \n",
    "\n",
-    "Usign CUDA, especially for training can boost the performance significantly, typically a couple of orders of magnitude. It may be the difference between model training being feasible and not.\n",
+    "Using CUDA, especially for training, can boost the performance significantly, typically a couple of orders of magnitude. It may be the difference between model training being feasible and not.\n",
    "\n",
    "Note: The tutorials won't require much in terms of capabilities, but for training real vision models with even modest data sizes, you need at least 8GB of dedicated GPU memory. Even something as simple as CIFAR10 (in the Examples solution in this repo) requires that much memory in order not to blow up. 6GB, a common memory size on Nvidia-enabled laptops, is not enough.\n",
    "\n"

tutorials/FSharp/tutorial1.ipynb

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "All the tutorial notebooks will rely on the CPU package, since that takes up the least amount of disk space and works everywhere. If you have access to a CUDA processor, replace the package name with the applicable Windows or Linux package."
+    "All the tutorial notebooks (with the exception of the one that covers CUDA) will rely on the CPU package, since that takes up the least amount of disk space and works everywhere. If you have access to a CUDA processor, replace the package name with the applicable Windows or Linux package."
    ]
   },
   {

tutorials/FSharp/tutorial2.ipynb

Lines changed: 74 additions & 5 deletions
@@ -361,14 +361,16 @@
   },
   "outputs": [],
   "source": [
-   "torch.arange(3.0f, 5.0f, step=0.1f)"
+   "torch.arange(3.0f, 5.0f, step=0.1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "There is no way to make `arange` produce anything but a 1D tensor, so a common thing to do is to reshape it as soon as it's created:"
+   "## reshape()\n",
+   "\n",
+   "There is no way to make `arange` produce anything but a 1D tensor, so a common thing to do is to reshape it as soon as it's created."
   ]
  },
  {
@@ -381,7 +383,7 @@
   },
   "outputs": [],
   "source": [
-   "torch.arange(3.0f, 5.0f, step=0.1f).reshape(4L,5L)"
+   "torch.arange(3.0f, 5.0f, step=0.1f).reshape(4,5)"
   ]
  },
  {
@@ -401,7 +403,74 @@
   },
   "outputs": [],
   "source": [
-   "torch.arange(3.0f, 5.0f, step=0.1f).reshape(4L,5L).str(\"0.00\")"
+   "torch.arange(3.0f, 5.0f, step=0.1f).reshape(4,5).str(\"0.00\")"
+  ]
+ },
+ {
+  "cell_type": "markdown",
+  "metadata": {},
+  "source": [
+   "`reshape()` is, of course, useful for many other things, too, not just shaping the result of `arange()`. One thing that is very useful to know is that you can pass in '-1' for __one__ of the dimensions, and it has a very special meaning. Let's look at an example:"
+  ]
+ },
+ {
+  "cell_type": "code",
+  "execution_count": null,
+  "metadata": {
+   "dotnet_interactive": {
+    "language": "fsharp"
+   }
+  },
+  "outputs": [],
+  "source": [
+   "let t = torch.rand(3,4,4,4);\n",
+   "t.reshape(12, 4, 4).ToString()"
+  ]
+ },
+ {
+  "cell_type": "code",
+  "execution_count": null,
+  "metadata": {
+   "dotnet_interactive": {
+    "language": "fsharp"
+   }
+  },
+  "outputs": [],
+  "source": [
+   "t.reshape(-1,4,4).ToString()"
+  ]
+ },
+ {
+  "cell_type": "code",
+  "execution_count": null,
+  "metadata": {
+   "dotnet_interactive": {
+    "language": "fsharp"
+   }
+  },
+  "outputs": [],
+  "source": [
+   "t.reshape(4,-1,6).ToString()"
+  ]
+ },
+ {
+  "cell_type": "markdown",
+  "metadata": {},
+  "source": [
+   "As you can see, -1 is a wildcard. After the rest of the arguments specify their respective sizes, the -1 dimension is determined from the overall number of elements and the dimensions that have been specified. Obviously, it can only be used to construct a proper tensor if the other dimensions are correct. This, for example, results in an exception:"
+  ]
+ },
+ {
+  "cell_type": "code",
+  "execution_count": null,
+  "metadata": {
+   "dotnet_interactive": {
+    "language": "fsharp"
+   }
+  },
+  "outputs": [],
+  "source": [
+   "t.reshape(4,-1,5).ToString()"
   ]
  },
  {
@@ -423,7 +492,7 @@
   },
   "outputs": [],
   "source": [
-   "let t = torch.arange(3.0f, 5.0f, step=0.1f).reshape(2L,2L,5L);\n",
+   "let t = torch.arange(3.0f, 5.0f, step=0.1f).reshape(2,2,5);\n",
    "\n",
    "// The overall shape of the tensor:\n",
    "t.shape"

tutorials/FSharp/tutorial5.ipynb

Lines changed: 1 addition & 1 deletion
@@ -28,7 +28,7 @@
    "\n",
    "This tutorial is the only one that does not use the 'TorchSharp-cpu' package. When you have a machine with a GPU that supports CUDA programming, you can use either 'TorchSharp-cuda-windows' or 'TorchSharp-cuda-linux' depending on your operating system. There is no CUDA distribution for MacOS. \n",
    "\n",
-    "Usign CUDA, especially for training can boost the performance significantly, typically a couple of orders of magnitude. It may be the difference between model training being feasible and not.\n",
+    "Using CUDA, especially for training, can boost the performance significantly, typically a couple of orders of magnitude. It may be the difference between model training being feasible and not.\n",
    "\n",
    "Note: The tutorials won't require much in terms of capabilities, but for training real vision models with even modest data sizes, you need at least 8GB of dedicated GPU memory. Even something as simple as CIFAR10 (in the Examples solution in this repo) requires that much memory in order not to blow up. 6GB, a common memory size on Nvidia-enabled laptops, is not enough."
    ]
